The Postal Service is a corporation-like organization that was created by the government to provide postal services and to help bind the nation through the personal, educational, literary, and business correspondence of the people. Over the years, the government has created a number of corporations or corporation-like organizations to fulfill a variety of public functions or purposes of a predominantly business nature. Historically, such organizations have been created on an individual-need basis with the characteristics and functions of each being tailored to its specific mission. In general, these organizations can be identified under certain categories, such as wholly-owned government corporations, mixed-ownership government corporations, government-sponsored enterprises (GSE), or government-created private corporations. Grouping these government-created corporations and corporation-like organizations into these categories can be helpful because such organizations share certain common characteristics. However, for comparative purposes, it should be noted that even within these categories, the organizations are structured and governed in a variety of ways. Therefore, for purposes of this report, we found it more helpful to review each organization in our study individually without regard to any particular category under which it may be identified. Appendix I contains additional information about government-created corporations and corporation-like organizations. The Postal Reorganization Act of 1970 (1970 Act) created the Postal Service, designated it as an independent establishment of the executive branch, and created a Board of Governors to be its governing body. The Postal Service is not identified as falling under any particular category of government corporation or government-created corporation-like organization. 
The Postal Service has reported that it is not a government corporation; however, it is frequently considered by others to be one and has been previously included in major government corporation studies done over the last several years. According to the Postal Service, its Board of Governors is comparable to the board of directors of a private sector corporation. The Board of Governors directs the exercise of the powers of the Postal Service, directs and controls its expenditures, reviews its practices, and conducts long-range planning. It sets policy; participates in establishing postage rates; and takes up matters, such as mail delivery standards and capital investments and facilities projects exceeding $10 million. It also determines the pay of the Postmaster General (PMG) and approves the pay of other Postal Service officers. By statute, the Postal Service is to maintain compensation and benefits for all officers and employees on a standard of comparability with the private sector. However, no officer or employee can receive pay in excess of the rate for level I of the Executive Schedule—currently $148,400. The Board consists of 11 members, including (1) 9 Governors appointed by the president, with the advice and consent of the Senate, to 9-year staggered terms; (2) the PMG, who is appointed by the Governors; and (3) the Deputy Postmaster General (DPMG), who is appointed by the Governors and the PMG. By law, Governors are chosen to represent the public interest and cannot be representatives of special interests. They serve part time and may be removed only for cause. Not more than five of the nine Governors may belong to the same political party. No other qualifications or restrictions are specified in law. The 1970 Act provided for each Governor to receive an annual salary of $10,000, plus $300 a day and travel expenses for not more than 30 days of meetings each year. 
The act providing appropriations to the Postal Service for fiscal year 1997 increased the Governors’ annual salaries to $30,000 per year, but the $300 daily meeting allowance remained unchanged. To identify current and former members’ areas of concern, including specific issues and their suggested legislative changes, we (1) interviewed all 11 members of the current Board (including the PMG and DPMG); and (2) interviewed 2 former Governors appointed after December 1, 1985, whom we were able to contact. We also interviewed the PMG’s predecessor and the PMG serving at the time of the 1970 Postal Service reorganization. The latter also served as the first Chairman of the Board of Governors. Appendix II lists the interviewees included in our study, position(s) held, and date(s) of appointment. We sent each interviewee a list of questions, judgmentally grouped into broad areas, prior to our interview. This list guided our interviews (see app. III). We asked each interviewee if he or she had any issues or concerns within each of the broad areas. If the interviewees had concerns, we asked them to elaborate and identify any specific legislative action(s) they believed Congress might want to consider. We also offered interviewees an opportunity to discuss any other concerns related to the Board. For each broad area discussed, we tallied the number of interviewees who believed legislative changes were either needed or not needed. If interviewees did not definitively answer yes or no, we did not include their answers in our tallies. To compare the characteristics of the boards of other government-created organizations with those of the Postal Service Board, we developed and sent a matrix to 11 boards, including the Postal Service Board. The matrix covered 73 characteristics, grouped in such broad categories as (1) the board’s mission and responsibilities, (2) the board’s authority, (3) board members’ compensation, and (4) board composition. 
In developing and refining our matrix and interview questions, we researched and reviewed available information on the structure and characteristics of public and private corporate boards, reviewed prior work we had done on government corporations, and consulted with knowledgeable individuals on Postal Service Board activities. Except for the Postal Service, we judgmentally selected the government-created organizations included in our study in order to have a mix of the various types. We selected two government-sponsored enterprises, two wholly-owned government corporations, two mixed-ownership government corporations, and two federally created private corporations. In making these selections, we used our recent report on government corporations, as well as other prior work we had done, to identify organizations of various types. To provide a broader range of organizations for comparison purposes, we also selected two foreign postal administrations—Canada Post and Australia Post. We selected these organizations primarily because of our previous work in this area. These organizations were described in a recent Price Waterhouse report as among the most “progressive postal administrations.” In this report, we highlight differences between the Postal Service and the other government-created organizations as they apply to the four issues most frequently cited by current and former Board members as needing legislative attention. Some of the other issues raised by the interviewees, however, were outside the scope of our matrix. Therefore, we did not have sufficient information to make comparisons with the other government-created organizations on all of the issues raised by the interviewees. Appendix IV contains selected details from the matrices. We did not verify the boards’ responses to our matrix. However, we did ask each of the boards that completed our matrix to review their respective sections of appendix IV for accuracy. 
To provide the Subcommittee with additional information on governance issues that might be helpful in its deliberations on postal reform, we reviewed a broad range of available literature affecting both public and private boards. The result of our literature research is included in our discussions of governance issues. We conducted our review at postal headquarters in Washington, D.C., between July 1996 and April 1997 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the PMG and the Chairman of the USPS Board of Governors. Their comments are discussed at the end of this letter and included as appendixes V and VI, respectively. A majority of the current and former members of the Postal Service Board of Governors whom we interviewed believed legislative attention may be warranted in three areas—the Board’s authority, Board members’ compensation, and Board members’ qualifications. Although there was no consensus among the members on the specific issues within each area of concern, several issues were mentioned frequently, and a number of legislative changes were offered for consideration. The most frequently cited issues were (1) the limitations on the Board’s authority to establish postage rates; (2) the inability of the Board to pay the PMG more than the rate for level I of the Executive Schedule—currently $148,400; (3) the Board’s lack of pay comparability with the private sector; and (4) qualification requirements that are too general to ensure that Board appointees possess the kind of experience necessary to oversee a major government business. The issue most frequently cited by current and former Postal Service Board members as needing legislative attention was the limitations on the Board’s authority to establish postage rates. 
Ten of the 15 interviewees believed that the current ratemaking process adversely affects the Postal Service’s ability to compete with its private sector competitors. They were concerned that the current ratemaking process is too restrictive and therefore limits the Postal Service’s ability to quickly adjust postage rates in a highly competitive and rapidly changing marketplace. Three of the remaining five interviewees did not comment specifically on this issue, and two said the current ratemaking process should not be changed. Under the current ratemaking process, the Postal Service, through its Board of Governors, is to propose changes to the Postal Rate Commission (PRC)—an independent regulatory agency established in the executive branch—and request that it issue a recommended decision. PRC, after holding public hearings and reviewing data provided by the Postal Service, is to provide the Postal Service Board of Governors with its recommended decision concerning proposed rate changes. By law, this process can take up to 10 months. After receiving a recommendation from PRC, the Governors can approve, reject, or allow the recommended rates to take effect under protest; or, under certain circumstances, the Governors can modify a PRC decision. However, before the Governors can modify any PRC-recommended rates, they are required to return the rate case to PRC for reconsideration. After PRC renders a further rate decision, the nine Governors can modify that decision only by a unanimous vote—a task that some members said was almost impossible to achieve because, in their experience, the Governors seldom agree unanimously on any issue. In fact, there has been only one instance—in 1980—where the Governors modified a PRC recommendation for First-Class postage. Interviewees suggesting legislative attention in this area offered a number of changes for consideration. Two suggestions were mentioned most frequently. 
One suggestion was to use administrative law judges to hear rate cases and make recommendations to the Board—rather than going through PRC. The members believed this change would streamline the ratemaking process and still give due consideration to the views of the mailing community. The other suggestion was that the Board be given the authority to override a PRC-recommended rate decision with something less than a unanimous vote. For example, suggestions were made that the unanimous vote requirement be changed to either a majority or a two-thirds majority vote. Other legislative changes offered for consideration included (1) giving the Board authority to raise rates within legislatively established parameters (e.g., allow the Board to raise postage rates annually up to the increase in a designated index, such as the Consumer Price Index); (2) restricting PRC’s ratemaking role to monopoly mail—and a related suggestion allowing the Postal Service to establish private sector-type subsidiary companies that would compete directly with private carriers of nonmonopoly mail; and (3) legislatively requiring that PRC render its rate decisions in much less time than the 10 months currently allowed by law. One interviewee, however, said the law should not be changed to require faster decisions from PRC because, given current complex ratemaking requirements, it is unreasonable to expect faster decisions. The two interviewees who said the current ratemaking process should not be changed agreed that the current ratemaking process negatively affected the Postal Service’s ability to compete with private sector carriers. However, they believed a better way of addressing the ratemaking issue was to create a PRC-type body to regulate private sector carriers’ rates rather than change the ratemaking process within the Postal Service. 
Our survey of nine other government-created organizations showed some similarity between the ratemaking processes of the Postal Service and the processes reported by two other organizations—Australia Post and Canada Post. No similarities were apparent at the other seven organizations. According to Australia Post, its Board of Directors sets prices for all products and services. The board must notify the Minister for Communications and the Arts of any intention to alter the price of the standard postal rate, and the Minister has the opportunity to disallow it. Although it has no direct authority over the price, the Australian Competition and Consumer Commission has the opportunity to consider any proposal and make its views known to the Minister as part of his/her consideration of proposed price alterations. According to Canada Post, its Board of Directors oversees virtually all ratemaking decisions. This includes decisions for such products as basic domestic and international single-piece letters, international printed matter, and some registered mail products. Once new postage rate regulations are proposed, interested parties are given a 60-day period during which they can provide written comments on the rate change. For various reasons, the ratemaking process at the Postal Service contrasts sharply with the reported ratemaking processes at Fannie Mae, Freddie Mac, AMTRAK, FDIC, and TVA. Each of these organizations is permitted to set prices in a manner very much like any private sector corporation—i.e., independent of a third-party review or approval. Ratemaking processes at the RTB and the CPB are not comparable to the Postal Service’s ratemaking process. At the RTB, its Board of Directors makes loans at legislatively established rates. At the CPB, there are no products or services sold and, therefore, no ratemaking procedures. Proposed legislation introduced in Congress in January 1997 to reform the Postal Service, H.R. 
22, proposes significant changes to the ratemaking process and to the long-standing relationship between the Postal Service and PRC. Current law requires that the Postal Service file a request with PRC for changes in rates for services offered. H.R. 22 would change that requirement. It would divide postal products into two categories, noncompetitive mail and competitive mail. Noncompetitive mail would include those products, such as First-Class Mail, for which there are few alternatives to the Postal Service. For products in the noncompetitive mail category, the Service would establish rates using a price cap based on the Gross Domestic Product Chain-Type Price Index modified by an adjustment factor, which PRC would determine every 5 years. Once the cap was established, the Postal Service would generally be able to adjust prices annually without filing a request for change with PRC. Competitive mail, such as Express Mail, would include those products facing full competition within the marketplace. The Postal Service could price competitive products as it saw fit, without filing a request for change with PRC. However, Postal Service pricing of competitive mail would be subject to the constraints of the antitrust laws as well as requirements that rates cover the Service’s costs and make a reasonable contribution to overhead. PRC would conduct annual audits of the Postal Service to ensure it was acting in compliance with the law with respect to both noncompetitive and competitive products. Adoption of the ratemaking proposals in H.R. 22 would increase the ratemaking similarities between the Postal Service and Canada Post and Australia Post. In testimony before your Subcommittee on the Postal Service on July 10, 1996, the PRC Chairman noted that proposed legislation to reform the Postal Service included several proposals that would increase the Postal Service’s flexibility to price its products. 
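The annual price-cap adjustment that H.R. 22 would permit for noncompetitive mail can be illustrated with a short sketch. The rate, index change, and adjustment factor below are hypothetical, and the way the factor modifies the index (subtracted here, in the style of common "index minus X" price caps) is an assumption; the bill itself leaves the factor to PRC.

```python
def capped_rate(current_rate, index_change, adjustment_factor):
    """Sketch of an H.R. 22-style annual price-cap adjustment.

    current_rate: today's postage rate, in cents
    index_change: year-over-year change in the Gross Domestic Product
        chain-type price index (e.g., 0.025 for 2.5 percent)
    adjustment_factor: PRC-set modifier; the value used here is
        hypothetical, as is the "index minus factor" combination rule
    """
    cap_growth = index_change - adjustment_factor
    return round(current_rate * (1 + cap_growth), 1)

# Hypothetical numbers: a 32-cent rate, 2.5 percent index growth, and a
# 0.5 percent adjustment factor cap annual rate growth at about 2 percent.
print(capped_rate(32.0, 0.025, 0.005))  # 32.6
```

Under these assumed figures, a 32-cent rate could rise to at most about 32.6 cents in a given year without a rate filing to PRC.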
He also noted that under the proposed ratemaking process, provisions for multiple reconsideration and judicial reviews of rate decisions would be eliminated. Generally, the Board has adjusted rates every 3 years or so against a backdrop of an extensive body of public input. Under H.R. 22, the Board could be adjusting many rates as often as annually. The PRC Chairman said that the current system of multiple checks and balances is, in some instances, too much of a good thing. At the same time, however, he cautioned about going too far in the opposite direction. Ratemaking issues were again discussed at a hearing before your Subcommittee on the Postal Service on April 9, 1997. Witnesses included economists who helped formulate and design price cap plans for telecommunications and utility regulatory entities, as well as experts in antitrust laws, telecommunication regulation, postal arbitration, and contracts. Differences in opinion among these witnesses as to how well price caps would work for the Postal Service indicate that the debate over the Postal Service’s pricing system and the roles of the Board of Governors and PRC has not yet been resolved among all interested parties. The issue of ratemaking is a central part of the ongoing congressional deliberations related to the proposed postal reform legislation (H.R. 22). The second most frequently cited issue was the Board’s inability to pay the PMG more than the rate for level I of the Executive Schedule—currently $148,400. Eight of the 15 interviewees said the Board should be given more flexibility to compensate the PMG so that pay could be more comparable with the private sector. One interviewee strongly disagreed that compensation changes were needed, and the other six interviewees had no comment on the issue. 
The eight interviewees who believed the Board should have more flexibility to compensate the PMG were concerned that because of the pay cap, the Board might have a difficult time filling future PMG vacancies with highly qualified candidates. They were concerned that many highly qualified candidates might not even consider the position of PMG because of more financially lucrative positions in the private sector. These eight interviewees suggested legislative consideration be given to removing the pay cap on the PMG’s pay. As an alternative, one of the eight interviewees said legislative consideration should be given to allowing the Board to award the PMG performance-based bonuses over and above the legislated pay cap. The one interviewee with an opposing view did not believe the Board would have a difficult time attracting highly qualified candidates to the PMG position at the current salary. That interviewee said people are attracted to the position because of its status and the desire to serve the public—not because they are seeking a highly paid position. Our survey of nine other government-created organizations showed that the PMG’s pay is in line with the reported pay received by the top officials in those organizations where pay is legislatively capped. Five of the nine organizations had legislative pay caps similar to the Postal Service’s. Those organizations were TVA, RTB, FDIC, AMTRAK, and the CPB. However, two organizations—Fannie Mae and Freddie Mac—were not subject to legislative pay caps. According to information provided by Fannie Mae and Freddie Mac, the chief executive officers (CEO) at these two organizations were paid substantially more than the PMG. Data provided show that in 1995, the CEOs at Fannie Mae and Freddie Mac were each paid more than $1 million, compared to the $148,400 paid the PMG. 
Our ability to make pay comparisons with the CEOs at Canada Post and Australia Post was limited because both organizations said they consider this information to be private. Information provided by Canada Post shows its CEO’s pay is set by Canada’s Governor in Council and was in the neighborhood of $200,000 (U.S.) in 1995. Australia Post did not provide specific information on its CEO’s pay but said the pay is set at a level that takes into account both public and private sector considerations. Executive compensation is, has been, and will likely continue to be, a hotly debated issue in both the public and private sectors. Recent literature on executive compensation in the private sector shows the issue to be sharply focused on the amount of compensation paid executives in comparison to the health of the company, returns to investors, and wages paid nonmanagerial employees. For example, Business Week reported in April 1997 that the average pay increase for top executives in U.S. companies last year was 54 percent, compared with an average increase of 3 percent for U.S. factory workers. It also reported that the average CEO in the United States was paid 209 times more than the average U.S. factory worker. According to the literature, the spread in pay between these two groups has continued to widen since the 1980s. As time passes, however, more and more private sector executives are reportedly seeing their compensation challenged by stockholders and employee unions who perceive the pay of some executives to be exorbitant. Other attempts are also being made to bring the issues surrounding executive pay to the forefront. For example, the American Federation of Labor-Congress of Industrial Organizations (AFL-CIO) launched an Internet site in April 1997 to give the public ready access to information on executive compensation for the Fortune 500 companies. Executive pay issues also exist within the public sector. 
The Senior Executives Association has cited lifting the 3-year freeze on Executive Level pay as one of its top priorities. Over time, the spread in pay between executives and other employees has narrowed. Along with the pay compression issue, Congress and the administration have become increasingly concerned about executive compensation in some government-created organizations and have been taking steps to address some of those concerns. For example, in October 1995, the President signed Executive Order 12976 requiring that certain bonuses paid executives of designated government corporations be preapproved by the Office of Management and Budget. Additionally, since 1992, Fannie Mae and Freddie Mac have been prohibited from providing compensation to any executive officer that is not reasonable and comparable with compensation for employment in other similar organizations (including other publicly held financial institutions or major financial services companies) involving similar duties and responsibilities. Also, a significant portion of potential compensation for executive officers must be based on the performance of the enterprises. Further, Fannie Mae and Freddie Mac are prohibited from entering into any severance agreement or contract with an executive officer, unless the Director of the Office of Federal Housing Enterprise Oversight of the Department of Housing and Urban Development approves the agreement or contract in advance. Although 12 of the 15 interviewees believed Board members’ pay was another area warranting legislative attention, there was substantial disagreement on the specific issues and possible legislative remedies. Some interviewees thought Board members’ pay should be increased, while others thought compensation should not be increased because Board service should be considered public service. Others thought the daily meeting attendance fee should be increased. 
Others thought periodic reviews of Board members’ pay should be required, and varying combinations of these changes were also offered for consideration. Six of the 15 interviewees said that even though Board members’ salaries were increased from $10,000 to $30,000 in 1996, they were still below private sector salaries and should be made more comparable. Five of the 15 interviewees also believed the law should be changed to increase the $300 daily fee members are paid for attending Board meetings. Interviewees who suggested legislative change were particularly concerned that there is no legal requirement that pay be reviewed on a periodic basis, and they pointed out that the time span between the last two pay increases was 26 years. Seven interviewees, however, said Board members’ salaries should not be increased, and four said daily meeting attendance fees should not be increased. Two interviewees did not comment on Board members’ salaries, and six did not comment on daily meeting attendance fees. Interviewees opposing these suggested legislative changes were generally of the opinion that Board service should be recognized as public service and that Postal Service Governors should not expect compensation similar to that found on private sector boards. Our comparison of Postal Governors’ pay with the reported pay of board members of nine other government-created organizations did not show major disparities. In fact, we identified only two notable differences. First, board members at Fannie Mae and Freddie Mac may elect to receive shares of stock in lieu of cash compensation. Second, board members at TVA and FDIC are paid more than Postal Service Governors, but they serve full time. None of the interviewees believed Postal Board Governors should serve full time. As discussed earlier, compensation is an area of contention in both the public and private sectors. 
Board members’ compensation, like CEOs’ compensation, is currently being examined from different angles by various interest groups—e.g., stockholders and employee unions. Work done by Spencer Stuart, a company that tracks board trends and practices at 100 major American corporations, showed that private sector board members’ annual salaries and meeting attendance fees averaged $55,300 in 1996 (ranging from $25,000 to $100,000). This compares to compensation of about $38,000 that the Postal Service Governors will likely receive in 1997 (salary plus the historical average of daily meeting attendance fees). The final issue cited by a majority of current and former members as needing legislative attention was the lack of well-defined qualification requirements for Board appointments. Eight of the 15 interviewees stated that the statutes governing Board appointments are too general and should be more precisely defined. Seven of the interviewees, however, said no legislative change should be made in the appointment process. They were generally of the opinion that the current process, which requires Senate confirmation, ensures that highly qualified candidates are appointed to the Board. The eight interviewees who favored more precisely defined qualification requirements believed that, historically, appointments to the Board have not always been based on an individual’s demonstrated ability to govern large corporations like the Postal Service. They were concerned that because qualification requirements are not clearly defined in law, the Board may not always have the most appropriate mix of skills for effectively managing an organization as big and as complex as the Postal Service. The interviewees suggested a number of legislative changes that they believed could enhance the appointment process. These included having an independent body make recommendations for Board appointments and delineating, in law, specific requirements for Board service. 
Examples of specific requirements mentioned included (1) requiring that appointees have corporate experience, (2) requiring a mix of geographic representation on the Board, and (3) requiring labor and mailing industry representation on the Board. The statutory restrictions/qualifications for board service at six of the other nine government-created organizations included in our study were more specific than the Postal Service’s. For example, at Fannie Mae and Freddie Mac, four of the five presidential appointees to the board must have specific business backgrounds: one must be from the mortgage lending industry, one must be from the home building industry, one must be from the real estate industry, and one must be from an organization representing consumer interests. At the CPB, statutes require that the nine appointed board members be selected from such fields as education, cultural and civic affairs, or the arts. Board members are also to represent various regions of the nation and professions, occupations, and various kinds of talent and experience appropriate to the function and responsibilities of the CPB. Additionally, of the nine board members, one is to be selected from among individuals who represent the licensees and permittees of public television stations, and one is to represent the licensees and permittees of public radio stations. Australia Post and AMTRAK statutes require that at least one board member have an understanding of employee issues. The RTB statutes require that of the Bank’s 13 board members, 3 be elected by stockholders of eligible cooperative borrowers, and 3 be elected by stockholders of eligible commercial borrowers. The statutory qualifications for board service at the other three government-created organizations included in our study (TVA, FDIC, and Canada Post) were similar to the Postal Service’s qualifications in that they were generally nonrestrictive. 
For example, requirements for board membership at FDIC state only that the three appointed members must be U.S. citizens and that no more than three of the five board members may be members of the same political party. Additionally, like the Postal Service, three of the other nine government-created organizations included in our study have provisions for ex officio membership on their boards. At the Postal Service, the PMG and DPMG are ex officio members of the Board. At the RTB, there are five ex officio members—all from the Department of Agriculture. Ex officio members on the FDIC board include the Comptroller of the Currency and the Director of the Office of Thrift Supervision. Additionally, one of the presidentially appointed members also serves as the chair of the board and full-time head of FDIC. The Secretary of Transportation and AMTRAK president serve as ex officio members on AMTRAK’s board. Current literature on private sector governance suggests that some aspects of corporate governance have been undergoing changes in recent years. Some stockholders, concerned with publicized instances of excessive executive compensation, coupled with unacceptable corporate performance, are increasingly scrutinizing governance issues, including the qualifications of board members. An article in the spring 1995 issue of Business Quarterly points to a lack of meaningful qualifications for board members and a lack of needed expertise and knowledge as two areas that could signal competence problems affecting board performance. The article goes on to point out that healthy boards require, among other things, a balance of qualifications, knowledge, skills, attitudes, and experiences. Business literature suggests that now, more so than in previous eras, corporations are developing more well-defined criteria for board membership—acknowledging that various roles on the board may require various backgrounds and skills. 
Although conceptually it may be desirable to have board representation for all stakeholders, it presents a real challenge to do so within the Postal Service structure. The Postal Service, unlike many other corporate and corporate-like organizations, has numerous stakeholders with widely varying interests and concerns, e.g., rural patrons, inner-city patrons, business mailers, six labor unions, and three management associations. If qualification requirements are changed, one challenge for Congress will be determining what qualifications or special interests, if any, should be represented on the Board.

In our discussions with current and former members of the Postal Service Board of Governors, we also identified areas where some, but less than a majority of, interviewees believed legislative attention is needed. Those areas were (1) the Board’s mission and responsibilities, (2) the Board’s relationship with postal management, (3) the Board’s accountability and performance measures, and (4) the Board’s composition and structure. Additionally, our review of pertinent literature indicated that others have expressed concerns within these same four areas as they relate to government-created organizations in general. A recurring theme in this literature focuses on accountability. For example, in April 1995, the Congressional Research Service reported that a key issue for policymakers is how to make government corporations politically accountable for their policies and operations while still giving them the necessary financial and administrative discretion to function in a commercial manner. An article in the February 1995 issue of Government Executive also expressed concern that quasi-government organizations are largely unaccountable for their actions. Some of the current and former Postal Service Board members we spoke with had the following specific concerns in these four areas.
Also, where applicable, we have included as part of our discussion other related issues identified as part of our literature search. Six interviewees cited the Board’s mission and responsibilities as an area needing legislative attention. Concerns in this area centered on two issues. One issue was the Board’s uncertainty as to how far it should go in letting the Postal Service compete and operate like a private sector corporation. The other issue concerned the limited specificity in law concerning the Board’s oversight responsibilities. Four of the six interviewees said that uncertainties about how far the Postal Service should go in competing with the private sector are not helped by the Postal Service’s current legal designation. By law, the Postal Service is designated as an independent establishment in the executive branch. One interviewee characterized this situation by saying that the Postal Service’s current legal designation places it in the unenviable position of being “neither fish nor fowl,” i.e., neither an executive agency nor a private corporation. The four interviewees suggested that Congress consider clarifying the Postal Service’s legal designation, which, in turn, should provide a clearer picture of the Service’s mission.

Legal status questions are not unique to the Postal Service. Such questions are being raised with regard to government-created organizations in general. Unclear legal definitions are disconcerting to some, while others use them to their advantage. For example, a fellow at both the National Academy of Public Administration (NAPA) and the Johns Hopkins Center for the Study of American Government said government-created organizations can generally choose whatever legal status best suits their purposes. He cited a 1977 incident in which the Secretary of Housing and Urban Development instructed Fannie Mae to increase its mortgage purchases in the inner cities.
Fannie Mae replied that, as a private agency, its principal obligation was to its stockholders, who would object to its investing in riskier properties. A few years later, however, when the administration attempted to strip away some of Fannie Mae’s special privileges, such as its tax exemptions, Fannie Mae responded, “Congress established Fannie Mae to run efficiently as an agency, not as a fully private company.” Without those special relationships, Fannie Mae said, it would not be able to survive. While discussing the Postal Service Board’s mission and responsibilities, four of the six interviewees said the Board could benefit from more detailed guidance concerning its oversight responsibilities. They suggested that Congress consider making the law more specific. They were concerned that the broad guidance currently in law does not always provide them with a good basis for knowing Congress’ desires as the Postal Service moves toward the 21st century. Five interviewees cited the Board’s relationship with postal management as an area needing legislative attention. The most frequently cited issue related to perceptions that the position of Chief Postal Inspector did not have all the independence necessary. Four of the five interviewees said that to help ensure the Chief Postal Inspector’s independence, he/she should be appointed by the Board and be directly accountable to the Board—similar to the status of the Postal Service’s recently appointed Inspector General. They said the Chief Postal Inspector should not be appointed by, or be considered part of, management. The five interviewees also had three other suggestions for legislative consideration in this area, but no one suggestion was cited by more than two of the interviewees. The specific suggestions included the following. The Postal Service’s General Counsel should be appointed by the Board and be directly accountable to the Board—similar to the suggestion concerning the Chief Postal Inspector. 
The law should require that the PMG be appointed from within the Postal Service. This suggestion stemmed from the belief that the Postal Service’s size and complexity makes it very difficult for an outsider to be an effective Postmaster General during the early years of his/her appointment.
The PMG and DPMG should be allowed to vote on all matters that come before the Board, except for personnel matters relating directly to them. This suggestion was made to make the PMG and DPMG a more integral part of the Board. Currently, the PMG and DPMG are prohibited from voting on some issues that come before the Board, e.g., increases in postage rates.

Six interviewees cited Board accountability and performance measures as another area needing legislative attention, although no one issue was cited by more than two of the interviewees. Specific suggestions for legislative consideration included the following.
Periodic peer reviews should be required as a prerequisite for continued service on the Board.
The fiduciary responsibilities of Board members should be more clearly delineated in law—particularly in light of the Postal Service’s current legal status.
Specific actions for which the Board will be held accountable should be clearly delineated in law.
A mechanism should be established for removing nonproductive Board members. One of the interviewees, however, cautioned against such an action, citing the potential for abuse.

Although the interviewees discussed accountability from a boardroom perspective, it is, in fact, a topic pertinent to all facets of organizational life. As discussed earlier, accountability is an issue being grappled with as the government examines its corporations and corporation-like organizations. Defining accountability in government begins with clearly establishing who is accountable to whom, and for what.
Four interviewees cited Board composition and structure as an area needing legislative attention, but no one issue was cited by more than two of the interviewees. Specific suggestions for legislative attention included the following.
The current 9-year appointments to the Board are too long and should be shortened. Appointments should be made more comparable to the private sector, where terms are generally for no more than 3 years.
Board members should be prohibited from serving more than one term.
Former Postal Service employees should be prohibited from serving on the Board.
The process for selecting a Chair should be changed. The Chair should be appointed by the president rather than elected by the Board.
The PMG should be designated, in law, as the permanent Chair of the Board.
The law should be clarified to explicitly state that the PMG can be elected Chair by the members.
Management should have only one, not two, seats on the Board.

There were two areas discussed where none of the current or former Board members interviewed believed legislative attention is needed. These areas were (1) Board staffing and (2) the Board’s legal status. All of the interviewees agreed that Board staffing was an internal management issue and not an issue warranting legislative attention. They said the Board has the authority to hire as many staff as it needs to fulfill its responsibilities. Most individuals believed that the current staff, consisting of two professionals and two administrative staff, is adequate. However, four interviewees believed that the Board should consider expanding its staff to include experts in such areas as real estate, finance, and ratemaking. Nevertheless, they agreed that any decision to hire additional staff should be made by the Board itself, not by legislative fiat. Additionally, current and former Board members we spoke with saw no need for legislative action to change the Board’s legal status.
The Board of Governors is part of the Postal Service and does not have a separate legal status. Nevertheless, discussion of the Board’s legal status prompted several interviewees to reiterate their concerns about the Postal Service’s legal status. As noted earlier, some interviewees believed that the current legal definition of the Postal Service—an independent establishment of the executive branch—is unclear and causes uncertainties about how far the Postal Service can go in competing with the private sector.

The PMG and the Chairman of the Postal Service Board of Governors provided written comments on a draft of this report. The PMG said most of the issues raised in the report speak for themselves and have been discussed by the Governors and PMGs for many years. His comments also included supplemental information on compensation practices at TVA and CEO pay at nine foreign postal administrations plus the USPS. His comments are reproduced in appendix V. The Chairman of the Postal Service Board of Governors said in his written comments that the report provides valuable information on governance issues and how other boards function. He also said many of the issues raised in the report have been discussed by the various Boards of the Postal Service over the years. His comments are reproduced in appendix VI.

Program personnel at the nine other organizations included in this report for comparison purposes were provided copies of a draft of appendix IV for their review and comment. The program personnel at two of the nine (AMTRAK and CPB) organizations said the information was accurate as presented. Program personnel at the other seven organizations either provided additional information or made technical suggestions that have been incorporated into the appendix as appropriate.
We are sending copies of this report to the Ranking Minority Member of your Subcommittee, the Chair and Ranking Minority Member of the Senate oversight subcommittee, the Postal Service Board of Governors, the PMG, and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix VII. If you have any questions about the report, please call me on (202) 512-4232.

The organizations we selected to compare with the Postal Service Board of Governors are generally identified as wholly-owned government corporations, mixed-ownership government corporations, GSEs, or federally created private corporations. In addition, we compared the boards of two selected foreign postal administrations with the Board of Governors of the Postal Service. Although there is no authoritative definition for the term “government corporation,” there are certain characteristics common to government corporations that were identified by President Truman in 1948 and that have been referred to and accepted over the years by public administration experts. According to President Truman, a corporate form of organization is appropriate for the administration of government programs that are predominately of a business nature, produce revenue and are potentially self-sustaining, involve a large number of business-type transactions with the public, and require a greater flexibility than the customary type of appropriations budget ordinarily permits. In 1981, NAPA defined a wholly-owned government corporation as a corporation pursuing a government mission assigned in its enabling statute, financed by appropriations, with assets owned by the government and controlled by board members or an administrator appointed by the president or a department secretary.
It defined a mixed-ownership government corporation as a corporation with both government and private equity, with assets owned and controlled by board members selected by both the president and private stockholders, and as usually intended for transition to the private sector. Of the organizations selected for this study, TVA and the RTB are wholly-owned government corporations, and FDIC and AMTRAK are generally considered to be mixed-ownership government corporations. GSEs are federally established, privately owned corporations designed to increase the flow of credit to specific economic sectors. GSEs typically receive their financing from private investment, and the credit markets perceive that GSEs have implied federal financial backing. GSEs issue capital stock and short- and long-term debt instruments, issue mortgage-backed securities, fund designated activities, and collect fees for guarantees and other services. GSEs generally do not receive government appropriations. Fannie Mae and Freddie Mac are two examples of GSEs. The CPB is a federally created private, nonprofit corporation. It does not consider itself to be a government corporation or a GSE. However, it does receive at least some of its operating funds from yearly federal appropriations and has been considered to be a government corporation by others.

1. Are you satisfied with the statutory relationship between the PMG and the Board? If not, why? Should anything be changed in law/regulation?
2. Aside from the statutory/regulatory relationship between the PMG and the Board, are there other issues dealing with the relationship that you would like to see addressed? If so, please explain your position and cite examples.
1. Are you satisfied with the Board’s statutory relationship with PRC? If not, why? Should anything be changed in law/regulation?
2. Do you believe PRC provides the Board with sufficient information to meet the Board’s needs? Is information provided in a timely manner?
If the information is not sufficient and/or timely, what changes do you believe are needed?
1. How does the Board get involved in setting goals and developing implementation strategies for the Postal Service?
2. Are you satisfied with the Board’s mission and responsibilities as specified in legislation? If not, why?
3. Are you satisfied with the Board’s mission and responsibilities as further defined by the Bylaws? If not, please cite examples and discuss any changes you believe are needed.
1. Are there any statutory authorities the Board does not have that you believe it should have? If so, please explain.
2. Are there any statutory authorities the Board has that need to be expanded or contracted?
1. Is the Board’s legal status satisfactory, or are legislative changes needed? If so, what changes are needed and why? Provide examples supporting the need for any change in the legal status of the Board.
1. Do you believe there are Board accountability issues that need to be addressed with regard to the Board as a collective unit? If so, what are those issues?
2. To whom are individual Board members accountable?
3. Are there accountability issues that need to be addressed with regard to the performance of individual Board members? If so, what are those issues?
4. In general, how are ethical or conflict of interest issues addressed? Are you satisfied with the guidance available in this area?

Board’s Compensation
1. Is the new pay level adequate? If not, please explain why.
2. Do you believe benefits are adequate in relation to other boards (other board directors may receive stock options, health insurance, life insurance, etc.)? If not, how should they be adjusted?
3. Are travel reimbursements adequate? If not, where do they fall short?
1. Do the current size and composition of the Board allow the Board to effectively perform its duties?
2.
Are the qualifications/restrictions for Board membership adequate, or should more specific qualifications be spelled out in legislation? If more specific qualifications are needed, please state why and cite examples of how more specific qualifications would have been helpful in past situations.
3. Do you serve on any other boards? If so, how many and which ones? Do you believe there should be a limit on the number of boards on which members can serve?
4. Should service on the Postal Service Board of Governors be changed from part time to full time? Explain.
1. Do you believe that the Board has sufficient staff resources? If not, what additional staff are needed (numbers, qualifications, etc.)?

Legal status: CPB—private, nonprofit corporation organized under D.C. law; Canada Post—“parent Crown corporation” (fully owned by the Crown); Australia Post—federal government business enterprise (fully owned by the Commonwealth Government of Australia).
Authorizing statutes: AMTRAK—49 U.S.C. 24101 et seq.; Australia Post—Australia Postal Corporation Act of 1989 and amendments; Canada Post—Canada Post Corporation Act [CPCA 1980-81-82-83, c.54 and amendments].

To provide postal services to bind the nation through the personal, educational, literary, and business correspondence of the people; and to provide prompt, reliable, and efficient service to patrons in all areas and to render postal services to all communities.
To provide stability in the secondary market for residential mortgages; respond appropriately to the private capital market; provide ongoing assistance to the secondary market for residential mortgages (including activities relating to mortgages on housing for low- and moderate-income families involving a reasonable economic return that may be less than the return earned on other activities) by increasing the liquidity of mortgage investments and improving the distribution of investment capital available for residential mortgage financing; and promote access to mortgage credit throughout the nation (including central cities, rural areas, and underserved areas) by increasing the liquidity of mortgage investments and improving the distribution of investment capital available for residential mortgage financing. (This statutory purpose is identical for Fannie Mae and Freddie Mac.)

To improve the navigability of the Tennessee River; provide for flood control, reforestation, the proper use of marginal lands, and the agricultural and industrial development of the Tennessee Valley; provide for the national defense; and provide an ample supply of electric power to a seven-state region in the southeastern United States.
To insure deposits of banks and savings associations.

To provide intercity and commuter rail passenger transportation in the United States.

To facilitate the full development of public telecommunications.

To establish and operate a postal service for the collection, transmission, and delivery of messages, information, funds, and goods both within Canada and between Canada and places outside Canada; manufacture and provide such products and to provide such services as are, in the opinion of the Corporation, necessary or incidental to the postal service provided by the Corporation; and provide to or on behalf of departments and agencies of, and corporations owned, controlled, or operated by, the Government of Canada or a provincial, regional, or municipal government in Canada or to any person services that, in the opinion of the Corporation, are capable of being conveniently provided in the course of carrying out the other objects of the Corporation.

To supply postal services within Australia and between Australia and places outside Australia. Australia Post is also able to carry on any business or activity, either in Australia or overseas, relating to the supply of postal service. Australia Post may also carry on any business or activity that is conveniently carried on by use of resources not immediately required in performing the principal function or in the course of performing the principal function.

Postal rates are determined in conjunction with a Postal Rate Commission (PRC) recommendation. The Governors may approve a PRC-recommended change; accept recommended change under protest; reject or, in limited circumstances, modify recommended change.
The Board has unilateral authority to conduct business operations, but generally day-to-day business activities are delegated to management subject to provisions of charter.
Freddie Mac’s charter explicitly provides that Freddie Mac has full discretion in setting prices and other business operations.
There are no regulatory or other external limits on the authority of Freddie Mac’s management to set prices for mortgages it purchases and the securities it issues.
TVA’s board has exclusive authority to set prices, rates, etc., for the products or services that TVA sells. The TVA act contains standards for determining appropriate levels for TVA’s electric power rates but commits the fixing of those rates to the discretion of the TVA board and precludes judicial review thereof.
PRC is an independent agency that acts upon requests from the USPS or in response to complaints filed by interested parties. Among its major responsibilities are to submit recommended decisions to the USPS on postage rates and fees and mail classifications, issue advisory opinions to the USPS on proposed nationwide changes in postal services, and submit recommendations for changes in the mail classification schedule.
With one exception, no statutory provision authorizes another person, board, or commission to set or review prices.
Yes, Governors set pay of the PMG, subject to limitations of 39 U.S.C. 1003(a); i.e., salary cannot exceed the rate for Level I of the Executive Schedule.
Yes, the Board is authorized to fix compensation for the officers of the Corporation.
Yes, the Board of Directors determines compensation of officers.
Yes, the board sets compensation for all TVA employees. Salary for regular employees may not exceed that received by board members.
Yes, subject to restrictions of Federal Retirement and Workers Compensation Laws.
Yes, the Board is authorized to fix compensation for officers of the Corporation.
Yes, the Board of Directors determines benefits of officers.
Yes, the board sets compensation, including benefits, for all appointees.
The board has unilateral authority to grant or deny deposit insurance to financial institutions. Board decisions are not subject to approval by another regulatory authority.
The board has unilateral authority to set prices, rates, etc., without review or approval by an independent regulatory authority.
The board has the authority to approve prices; however, it has no products to sell. Specifically, CPB is not a commodity business.
The board sets prices for all products and services. The board must notify the Minister of any intention to alter the price of the standard postal rate (the reserved service), and the Minister has the opportunity to disallow it.
The board, directly or indirectly through delegation of authority to the President/CEO, oversees virtually all rate-making decisions. The board, in delegating its authority, has established that (1) all rates established through regulation (i.e., noncompetitive products) require approval of the board, (2) all generic rates (rates available to anyone meeting specified terms and conditions) established outside of regulation require the President/CEO’s approval, and (3) all sales agreements (generic or non-generic) are subject to the board’s delegation of authority instrument and related processes.
The Australian Competition and Consumer Council, while having no direct authority over the price, has the opportunity to consider any proposal and make its views known to the Minister as part of his/her consideration of proposed price alterations.
No, the pay of the Chair of the Board (“CEO”) is determined by reference to Federal Statutes—Level III of the Executive Schedule.
Yes.
Yes, but president may not be compensated at an annual rate of pay that exceeds the rate of basic pay in effect from time to time for Level I of the Executive Schedule under Sec. 5312 of Title 5.
No, the CEO’s pay is set by Governor in Council.
Yes.
Yes, under 49 U.S.C. § 24303(b).
Yes.
No, benefits are set by the Governor in Council.
Yes.

$800,000 salary; $833,263 bonus; and $23,102 other annual compensation, as well as long-term compensation in the form of stock awards and securities options.
$865,000 salary; $394,000 bonus; and $100,688 other annual compensation, as well as long-term compensation in the form of restricted stock awards and securities options.
Not applicable.
The Chair’s total pay and benefits compensation for FY95 was $147,014.74.
The CEO’s compensation does not exceed the Federal Executive Level I salary scale.
This is not considered public information. Not publicly available.
The CEO receives the following benefits: health insurance, an employer-paid retirement income plan, a 401(k) retirement savings plan, life and accidental death and disability insurance, split dollar life insurance, business travel accident insurance, short-term and long-term disability benefits, United States Railroad retirement benefits as well as paid vacation and sick leave, rail pass privileges, educational assistance, parking, and relocation benefits.
However, the CPC Board contact provided a range of salary that is public: $189,000 to $233,000 (U.S. dollars). The benefit package is worth about 20% of salary.
Terms and conditions are set at a level that takes into account both public and private sector considerations. Prior consultation with the Remuneration Tribunal is part of the process of establishing a package.

For Governors, salary and reimbursable expenses determined by statute.
Board advised by outside experts on appropriate levels of compensation based on payments made by comparable businesses.
Salary of TVA Board Chair is established under Level III of the Executive Schedule.
Governors determine pay of PMG and DPMG within legislatively established parameters.
Pursuant to a resolution adopted by the board, the 15 outside directors receive an annual retainer, annual award of stock options and restricted stock, and meeting attendance fees. They do not receive salaries or other employee benefits.
The salary of the other two members of the TVA board is established under Level IV of the Executive Schedule.
Benefits available to board members are those generally available to federal employees, including presidential appointees, by statute.
Governors’ salaries are set by legislation. From 1970 to 1995, there were no salary increases. In 1996, salaries were increased by legislation. Generally annual adjustments. PMG and DPMG salaries are set by the Governors, subject to a pay cap.
Board is advised by outside experts on appropriate levels of compensation based on payments made by comparable businesses.
The Chairman and two directors of the TVA board are positions covered by level III and level IV, respectively, of the Executive Schedule (5 U.S.C. §5314 and §5315). Increases in pay are done through the legislative process.
Increases in pay are done through the legislative process.
Board members receive cash fees and stock awards as their compensation. Adjustments for inflation are not included in the criteria for setting compensation. However, from time to time Freddie Mac reviews the compensation package for board members to ensure that it remains competitive.
Full time for PMG and DPMG. Part time for Governors.
Full time for 3 management employees and part time for rest of board, who are outside management directors.
Part time.
Board members provide service year round. There is no fixed-hour requirement for service.
Salary is determined under federal statutes—Level III of the Executive Schedule for Chairperson and Level IV for other members.
Pay and expenses are set by statute. By statute. Determined by the Governor in Council. Determined by an independent central remuneration tribunal.
Board members’ salaries are set by statute (5 U.S.C. §5314 and §5315) under appropriate executive levels. Changes would be done legislatively. Adjustment would require legislative change.
Salaries and benefits may be changed by an act of Congress at any time. Increases are not made on a regular basis.
They are made following a recommendation from the Minister responsible for Canada Post to the Governor in Council. The last adjustment was made by the Governor in Council in 1990. The Remuneration Tribunal regularly examines remuneration levels and will consult with the board on specific issues. Full time for Managing Director and part time for other Directors.

$30,000 plus $300 a day for not more than 42 days of meetings per year for Governors.
$23,000 retainer annually, plus $1,000 for attending each board or board committee meeting.
$20,000 retainer annually, prorated based on the quarter in which they were appointed. Committee chairs received an additional $500 for each committee meeting they chaired. Directors also were paid $1,000 for attendance at each meeting of the board or any board committee meeting and were reimbursed for out-of-pocket costs of attending such meetings.
Board service is full time; therefore, no daily meeting attendance fees paid.
Additionally, each nonmanagement director has restricted common stock under the Fannie Mae Restricted Stock Plan for Directors and stock options under the Fannie Mae Stock Compensation Plan of 1993. Fannie Mae officers who serve on the Board of Directors do not receive compensation for serving as directors other than the compensation they receive as Fannie Mae officers. Fannie Mae officers are not eligible to participate in the Fannie Mae Restricted Stock Plan for Directors and are not eligible to receive nonmanagement director stock options under the Fannie Mae Stock Compensation Plan of 1993.
Each board committee chairman also received an annual retainer of $2,500. Pursuant to the 1995 Directors’ Stock Compensation Plan, each Director was granted options to purchase 2,400 shares of the Corporation’s common stock and received shares of restricted stock having a fair market value of approximately $10,000 on the date of the award.
Annual compensation and meeting fees:

- $115,700 for other board members.
- $437 (U.S. dollars) for physical attendance at board or board committee meetings. The $437 (U.S. dollars) is also payable for each full day of travel to and from the meeting.
- Board service is full time; therefore, no daily meeting attendance fees are paid.
- Board members receive $300 per diem for attending board and committee meetings or conducting other official business of the Corporation. The $300 per diem is a fixed statutory compensation level that has been in place since the board was created.
- $150 a day while attending meetings or while engaged in duties related to such meetings or other activities of the board, including travel time.
- Directors—$27,650 (U.S. dollars); Deputy Chair—$37,750 (U.S. dollars); Chair—$58,700 (U.S. dollars).
- No board member shall receive compensation of more than $10,000 in any fiscal year.
- No daily meeting attendance fees are paid.
- Board members are paid an annual retainer ($4,080 to $5,100 U.S. dollars) that is set by Order-in-Council (i.e., by Her Majesty’s Government) on the recommendation of the responsible Minister.

Board size and appointment:

- 9 Governors plus the PMG and DPMG. The 9 Governors are appointed by the President of the United States, by and with the consent of the Senate. The Governors appoint the PMG; the Governors and PMG appoint the DPMG. No political recommendations may be considered when selecting the PMG and DPMG.
- Up to 9 Directors (6 current members and 3 vacancies).
- 9 Directors plus the Chair and President.
- 13 members elected by shareholders.
- 5 appointed by the President of the United States.
- 5 members appointed by the President of the United States.
- 13 elected by voting common stockholders.
- Appointed by the President of the United States with the advice and consent of the Senate.
- 3 appointed by the President of the United States, by and with the advice and consent of the Senate.
- 3 members are appointed by the President of the United States and confirmed by the Senate (representing labor, state governors, and business).
- Appointed by the President of the United States with the advice and consent of the Senate.
- 1 member shall be the Comptroller of the Currency; 1 shall be the Director of the Office of Thrift Supervision.
- 9 Directors are appointed by the Minister with the approval of the Governor in Council. The Governor in Council appoints the Chair and President/CEO.
- Directors are appointed by the Governor-General on the nomination of the Minister for Communications and the Arts. The Minister must consult with the Chair of Post prior to appointing Directors, and one Director must be recognized as having an appropriate understanding of the interests of employees. The Managing Director is appointed by the Board of Directors.
- 2 members represent commuter authorities and are selected by the President from lists drawn up by those authorities.
- 2 are selected by the Corporation’s preferred stockholder—the Department of Transportation.
- The Secretary of Transportation and the Amtrak President serve by virtue of their offices.

Terms of office:

- 9 years for the 9 Governors. A Governor may continue to serve up to 1 year after a term expires while awaiting a successor to be named. The 9-year fixed terms are staggered so that one begins every 3 years on May 18 (e.g., 1990, 1993, and 1996). The PMG serves at the pleasure of the Governors; the DPMG serves at the pleasure of the Governors and the PMG.
- 6 years for each appointed member, but members may continue to serve after the expiration of their terms of office until a successor has been appointed and qualified.
- The 3 members appointed by the President of the United States and confirmed by the Senate (representing labor, state governors, and business) serve for 4 years.
- Not to exceed 3 years for Directors; as determined by the Governor in Council for the Chair and President/CEO.
- Up to 5 years for Directors, as specified in the instrument of appointment.
- 6 years, except as provided in section 5(c) of the Public Telecommunications Act of 1992.
- Others serve during their terms as Comptroller of the Currency and Director of the Office of Thrift Supervision.
- 2 members representing commuter authorities serve for 2 years.
- 2 members selected by the Corporation’s preferred stockholder, the Department of Transportation, serve for 1 year.
- Any member whose term has expired may serve until such member’s successor has taken office, or until the end of the calendar year in which such member’s term has expired, whichever is earlier.
- 2 ex officio members (the Secretary of Transportation and the President of Amtrak) serve as members as long as they remain in their positions as Secretary of Transportation and President of Amtrak.
- Any member appointed to fill a vacancy occurring prior to the expiration of the term for which such member’s predecessor was appointed shall be appointed for the remainder of such term.

Qualifications and restrictions:

- Qualifications are not prescribed in legislation. There are no restrictions in legislation regarding who can be PMG and DPMG. Governors are chosen to represent the public interest generally and cannot be representatives of specific interests using the USPS. Not more than 5 of the Governors can be members of the same political party. No officer or employee of the United States may serve concurrently as a Governor. A Governor may hold any other office or employment not inconsistent or in conflict with his duties, responsibilities, and powers as an officer of the USPS.
- Each member must be a U.S. citizen and profess a belief in the feasibility and wisdom of the TVA Act of 1933. Members are prohibited from having a financial interest in any public utility corporation engaged in the business of distributing and selling power to the public or in any corporation engaged in the manufacture, selling, or distribution of fixed nitrogen or fertilizer, or any ingredients thereof; nor shall any member have any interest in any business that may be adversely affected by the success of the corporation as a producer of concentrated fertilizers or as a producer of electric power. Board members are also prohibited, during their tenure in office, from engaging in any other business.
- 1 from an organization that represents consumer interests for not less than 2 years, or 1 person who has demonstrated a career commitment to the provision of housing for low-income households.
- 1 from an organization that has represented consumer interests for not less than 2 years, or 1 person who has demonstrated a career commitment to the provision of housing for low-income households.
- Appointed board members must be U.S. citizens, and not more than 3 of the members may be members of the same political party.
- Directors must be U.S. citizens.
- No more than 6 of the appointed members may be from the same political party.
- None specified.
- The Secretary of Transportation serves as a Board member by virtue of his office. Amtrak’s President serves as the Chairman of the board by virtue of his office.
- The board must have a mix of skills appropriate for the Corporation. One member is to have an appropriate understanding of the interests of employees.
- 3 members are appointed by the President of the United States and confirmed by the Senate (representing labor, state governors, and business).
- The 9 appointed members shall be selected from such fields as education, cultural and civic affairs, or the arts—including radio and television—and represent various regions of the nation, professions, and occupations, and various kinds of talent and experience appropriate to the function and responsibilities of CPB. Of these appointed members, 1 shall be selected from among individuals who represent the licensees and permittees of public television stations, and 1 shall represent the licensees and permittees of public radio stations.
- 2 members represent commuter authorities and are selected by the President of the United States from lists drawn up by those authorities. 2 members are selected by the Corporation’s preferred stockholder.

Term limits and reappointment:

- Nothing specified in statute or regulation.
- A director should not be renominated after having served for 10 years or longer, although the nominating committee may for good reason propose the renomination of such a director. No director should be proposed for renomination after 15 years of Board service.
- No, but stockholder-elected directors must retire at age 72.
- None.
- None.
- Yes. No member of the board shall be eligible to serve in excess of 2 consecutive full terms.
- No. The law states that a Director may, on the expiration of his/her term of office, be reappointed to the board.
- None specified in enabling legislation. Directors have been reappointed.

Governing body:

- Board of Governors (USPS); for the other organizations, Board of Directors.

How the Chair is selected:

- Elected by the Governors from among the members of the Board.
- Elected by the Board.
- By annual vote of the Board of Directors.
- Designated by the President of the United States.
- One of the appointed members shall be designated by the President of the United States, by and with the advice and consent of the Senate, to serve as Chair of the Board of Directors for a term of 5 years.
- The President serves as Chair.
- Members of the board annually elect one of their members to be Chair and elect one or more of their members as a Vice Chair or Vice Chairs.
- The Chair is appointed by the Governor in Council.
- The Chair is selected by the Minister, with the appointment made by the Governor-General.
- Until privatization (privatization will occur when 51 percent of the class A stock issued to the United States and outstanding at any time after September 30, 1995, has been fully redeemed and retired).
Some organizations provided data on pay and benefits, and others provided information only on pay. The PMG, in commenting on a draft of this report, provided additional information on CEO pay at nine foreign postal administrations, including Canada Post and Australia Post. See appendix V for additional information. We did not independently verify the information provided.

Jill P. Sayre, Senior Attorney
Pursuant to a congressional request, GAO obtained information on Postal Service governance issues, focusing on: (1) any major areas of concern, including specific issues, that current and former members of the Postal Service Board of Governors have about the Board, and their suggested legislative changes; (2) a comparison of the major characteristics of the Postal Service's Board of Governors with the characteristics of selected boards of other government-created corporations or corporation-like organizations to identify similarities and dissimilarities, particularly as they relate to the major areas of concern identified by current and former Board members; and (3) information on governance issues that might be helpful to the House Committee on Government Reform and Oversight, Subcommittee on the Postal Service, as it deliberates Postal Service reform. GAO noted that: (1) a majority of current and former members of the Postal Service Board of Governors GAO interviewed said legislative attention was needed in three broad areas; (2) however, there was not a consensus among the members on what the specific issues were within each area of concern, or what legislative changes should be considered to address their concerns; (3) the major areas of concern were the Board's authority, Board members' compensation, and Board members' qualifications; (4) within these broad areas of concern, the most frequently cited issues were: (a) the limitations on the Board's authority to establish postage rates; (b) the inability of the Board to pay the Postmaster General more than the rate for level I of the Executive Schedule; (c) the Board's lack of pay comparability with the private sector; and (d) qualification requirements that are too general to ensure that Board appointees possess the kind of experience necessary to oversee a major government business; (5) GAO's comparison of the Board of Governors with nine other boards of government-created organizations showed both similarities and dissimilarities; (6) similarities indicate that these boards were created to function much like private-sector corporate boards; (7) dissimilarities, however, reflect the amount of flexibility the boards were given to operate like private-sector corporations; (8) GAO also identified four broad areas where some of the interviewees, but less than a majority, believed legislative attention was needed; (9) these areas were the Board's mission and responsibilities, the Board's relationship with Postal management, the Board's accountability and performance measures, and Board composition; (10) the most frequently cited issues in these areas were: (a) uncertainties as to how far the Board should go in letting the Postal Service compete and operate like a private-sector corporation; (b) the limited specificity in law concerning the Board's oversight responsibilities; and (c) perceptions that the Chief Postal Inspector may not have all the independence the position requires; and (11) the interviewees' concerns about many issues, such as Board authority, accountability, and how far to let the Postal Service go in competing and operating like a private-sector corporation, are issues being grappled with in the larger context of streamlining government operations.
PPACA directed each state to establish and operate a state-based health insurance marketplace by January 1, 2014. These marketplaces were intended to provide a seamless, single point-of-access for individuals to enroll in private health plans, apply for income-based financial assistance established under the law, and, as applicable, obtain an eligibility determination for other health coverage programs, such as Medicaid or the Children’s Health Insurance Program (CHIP). In states electing not to establish and operate a marketplace, PPACA required the federal government to establish and operate a marketplace in that state, referred to as a federally facilitated marketplace. Thus, the federal government’s role with respect to a marketplace for any given state—in particular whether it established a marketplace or oversees a state-based marketplace—was dependent on a state decision. For plan year 2016, 13 states had a state-based marketplace, 4 had a state-based marketplace using the federal marketplace platform, 27 had a federally facilitated marketplace, and 7 had a state partnership marketplace. Figure 1 shows the states and the types of marketplaces they use. PPACA requires that CMS and the states establish automated systems to facilitate the enrollment of eligible individuals in appropriate health care coverage. Many systems and entities exchange information to carry out this requirement. The CMS Center for Consumer Information and Insurance Oversight (CCIIO) has overall responsibility for the federal systems supporting Healthcare.gov and for overseeing state-based marketplaces, which vary in the extent to which they exchange information with CMS. Other entities also connect to the network of systems that support enrollment in Healthcare.gov. Figure 2 shows the major entities that exchange data in support of marketplace enrollment and how they are connected.
Regardless of whether a state established and operated its own marketplace or used the federally facilitated marketplace, PPACA and HHS regulations and guidance require every marketplace to have capabilities that enable it to carry out four key functions, among others:

- Eligibility and enrollment. The marketplace must enable individuals to assess and determine their eligibility for enrollment in health care coverage. In addition, the marketplace must provide individuals the ability to obtain an eligibility determination for other federal health care coverage programs, such as Medicaid and CHIP. Once eligibility is determined, individuals must be able to apply for and enroll in applicable coverage options.
- Plan management. The marketplace is to provide a suite of services for state agencies and health plan issuers to facilitate activities such as submitting, monitoring, and renewing qualified health plans.
- Financial management. The marketplace is to facilitate payments of advanced premium tax credits to health plan issuers and also provide additional services such as payment calculation for risk adjustment analysis and cost-sharing reductions for individual enrollments.
- Consumer assistance. The marketplace must be designed to provide support to consumers in completing an application, obtaining eligibility determinations, comparing coverage options, and enrolling in health care coverage.

The data hub is a CMS system that acts as a single portal for exchanging information between the federally facilitated marketplace and CMS’s external partners, including other federal agencies, state-based marketplaces, other state agencies, other CMS systems, and issuers of qualified health plans. The data hub was designed as a “private cloud” service supporting the following primary functions:

- Real-time eligibility queries.
The federally facilitated marketplace, state-based marketplaces, and Medicaid/CHIP agencies transmit queries to various external entities, including other federal agencies, state agencies, and commercial verification services, to verify information provided by applicants, such as immigration and citizenship data, income data, individual coverage data, and incarceration data.

- Transfer of application and taxpayer information. The federally facilitated marketplace or a state-based marketplace transfers application information to state Medicaid/CHIP agencies. Conversely, state agencies also use the data hub to transfer application information to the federally facilitated marketplace. In addition, the Internal Revenue Service (IRS) transmits taxpayer information to the federally facilitated marketplace or a state-based marketplace to support the verification of household income and family size when determining eligibility for advance payments of the premium tax credit and cost-sharing reductions.
- Exchange and monitoring of enrollment information with issuers of qualified health plans. The federally facilitated marketplace sends enrollment information to appropriate issuers of qualified health plans, which respond with confirmation messages back to CMS when they have effectuated enrollment. State-based marketplaces also send enrollment confirmations, which CMS uses to administer the advance premium tax credit and cost-sharing reductions and to track overall marketplace enrollment. Further, CMS, issuers of qualified health plans, and state-based marketplaces exchange enrollment information on a monthly basis to reconcile enrollment records.
- Submission of health plan applications. Issuers of qualified health plans submit “bids” for health plan offerings for validation by CMS.

Connections between external entities and the data hub are made through an Internet protocol that establishes an encrypted system-to-system web browser connection.
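A connection of this kind is, in effect, a TLS session that authenticates the peer before any data are exchanged. The sketch below shows a minimal client-side configuration using Python's standard ssl module; the host name, CA bundle, and client credential are hypothetical placeholders, not CMS's actual endpoints or files.

```python
import ssl

# Client-side TLS context for an encrypted system-to-system connection.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy protocol versions
context.verify_mode = ssl.CERT_REQUIRED           # the peer must present a valid certificate
context.check_hostname = True                     # the certificate must match the host name

# Trust anchor and client credential (file names are hypothetical):
# context.load_verify_locations("hub-ca-bundle.pem")
# context.load_cert_chain("client-cert.pem", "client-key.pem")  # mutual authentication

# An actual exchange would then wrap a socket, e.g.:
# with socket.create_connection(("hub.example.gov", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="hub.example.gov") as tls:
#         ...  # send and receive the query payload
```

Because certificate verification is required on both sides when a client credential is loaded, only systems holding trusted credentials can complete the handshake, which is the property the text describes: safeguarding exchanged data against interception by unauthorized systems.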
Encryption of the data transfer between the two entities is designed to meet NIST standards, including Federal Information Processing Standard 140-2. This type of connection is intended to ensure that only authorized systems can access the data being exchanged, thus safeguarding against cyber attacks attempting to intercept the data. The data hub is designed to not retain any of the data that it transmits in permanent storage devices, such as hard disks. According to CMS officials, data are stored only momentarily in the data hub’s active memory. The entities that transmit the data are responsible for maintaining copies of their transmissions in case the data need to be retransmitted. As a result, CMS does not consider the data hub to be a repository of personally identifiable information.

State-based marketplaces generally perform the same functions that the federally facilitated marketplace performs for states that do not maintain their own marketplace. However, in certain cases, known as state partnership marketplaces, states may elect to perform one or both of the plan management and consumer assistance functions while the federally facilitated marketplace performs the rest. The specific functions performed by each partner vary from state to state. Figure 3 shows what functions are performed by each type of marketplace.

Regardless of whether a state operates its own marketplace, most states need to connect their state Medicaid and CHIP agencies to either their state-based marketplace or the federally facilitated marketplace to exchange data about enrollment in these programs. Such data exchanges are generally routed through the CMS data hub. In addition, states may need to connect with the IRS (also through the data hub) in order to verify an applicant’s income and family size for the purpose of determining eligibility for or the amount of the advance payment of the premium tax credit and cost-sharing reductions.
Finally, state-based marketplaces are to send enrollment confirmations to the federally facilitated marketplace so that CMS can administer advance payments of the premium tax credit and cost-sharing payments and track overall marketplace enrollment.

Federal laws and guidance specify requirements for protecting federal systems and data. This includes systems used or operated by a contractor or other organization on behalf of a federal agency. The Federal Information Security Modernization Act of 2014 (FISMA) requires each agency to develop, document, and implement an agency-wide information security program to provide security for the information and information systems that support operations and assets of the agency, including those provided or managed by another agency, contractor, or another organization on behalf of an agency. FISMA assigns certain responsibilities to NIST, which is tasked with developing, for systems other than national security systems, standards and guidelines that must include, at a minimum, (1) standards to be used by all agencies to categorize all of their information and information systems based on the objectives of providing appropriate levels of information security, according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. Accordingly, NIST has developed a risk management framework of standards and guidelines for agencies to follow in developing information security programs. Relevant publications include:

- Federal Information Processing Standard 199, Standards for Security Categorization of Federal Information and Information Systems, requires agencies to categorize their information systems as low-impact, moderate-impact, or high-impact for the security objectives of confidentiality, integrity, and availability.
The potential impact values assigned to the respective security objectives are the highest values from among the security categories that the agency identifies for each type of information resident on those information systems.

- Federal Information Processing Standard 200, Minimum Security Requirements for Federal Information and Information Systems, specifies minimum security requirements for federal agency information and information systems and a risk-based process for selecting the security controls necessary to satisfy these minimum security requirements.
- Federal Information Processing Standard 140-2, Security Requirements for Cryptographic Modules, requires agencies to encrypt agency data, where appropriate, using NIST-certified cryptographic modules. This standard specifies the security requirements for a cryptographic module used within a security system protecting sensitive information in computer and telecommunication systems (including voice systems) and provides four increasing, qualitative levels of security intended to cover a wide range of potential applications and environments.
- NIST Special Publication 800-53, Security and Privacy Controls for Federal Information Systems and Organizations, provides a catalog of security and privacy controls for federal information systems and organizations and a process for selecting controls to protect organizational operations, assets, individuals, other organizations, and the nation from a diverse set of threats, including hostile cyber attacks, natural disasters, structural failures, and human errors. The guidance includes privacy controls to be used in conjunction with the specified security controls to achieve comprehensive security and privacy protection.
- NIST Special Publication 800-37, Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach, explains how to apply a risk management framework to federal information systems, including security categorization, security control selection and implementation, security control assessment, information system authorization, and security control monitoring.
- NIST Special Publication 800-160, Systems Security Engineering: An Integrated Approach to Building Trustworthy Resilient Systems (draft), recommends steps to help develop a more defensible and survivable IT infrastructure—including the component products, systems, and services that compose the infrastructure. While agencies are not yet required to follow these draft guidelines, they establish a benchmark for effectively coordinating security efforts across complex interconnected systems, such as those that support Healthcare.gov and state-based marketplaces.

While agencies are required to use a risk-based approach to ensure that all of their IT systems and information are appropriately secured, they also must adopt specific measures to protect personally identifiable information (PII) and must establish programs to protect the privacy of individuals whose PII they collect and maintain. Agencies that collect or maintain health information also must comply with additional requirements. In addition to FISMA, major laws and regulations establishing requirements for information security and privacy in the federal government include the following:

- The Privacy Act of 1974 places limitations on agencies’ collection, access, use, and disclosure of personal information maintained in systems of records. The act defines a “record” as any item, collection, or grouping of information about an individual that is maintained by an agency and contains his or her name or another individual identifier.
It defines a “system of records” as a group of records under the control of any agency from which information is retrieved by the name of the individual or other individual identifier. The Privacy Act requires that when agencies establish or make changes to a system of records, they must notify the public through a system of records notice in the Federal Register that identifies, among other things, the categories of data collected, the categories of individuals about whom information is collected, the intended “routine” uses of data, and procedures that individuals can use to review and contest its content.

- The E-Government Act of 2002 strives to enhance protection for personal information in government information systems by requiring that agencies conduct, where applicable, a privacy impact assessment for each system. This assessment is an analysis of how personal information is collected, stored, shared, and managed in a federal system. More specifically, according to Office of Management and Budget (OMB) guidance, a privacy impact assessment is an analysis of how information is handled to (1) ensure handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (3) examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. Agencies must conduct a privacy impact assessment before developing or procuring IT that collects, maintains, or disseminates information that is in an identifiable form or before initiating any new data collections involving identifiable information that will be collected, maintained, or disseminated using IT if the same questions or reporting requirements are imposed on 10 or more people.
- The Health Insurance Portability and Accountability Act of 1996 establishes national standards for electronic health care transactions and national identifiers for providers, health insurance plans, and employers, and provides for the establishment of privacy and security standards for handling health information. The act calls for the Secretary of HHS to adopt standards for the electronic exchange, privacy, and security of health information, which were codified in the Security and Privacy Rules. The Security Rule specifies a series of administrative, technical, and physical security practices for “covered entities” and their business associates to implement to ensure the confidentiality of electronic health information. The Privacy Rule reflects basic privacy principles for ensuring the protection of personal health information, such as limiting uses and disclosures to intended purposes, notification of privacy practices, allowing individuals to access their protected health information, securing information from improper use or disclosure, and allowing individuals to request changes to inaccurate or incomplete information. The Privacy Rule establishes a category of health information, called “protected health information,” which may be used or disclosed to other parties by “covered entities” or their business associates only under specified circumstances or conditions, and generally requires that a covered entity or business associate make reasonable efforts to use, disclose, or request only the minimum necessary protected health information to accomplish the intended purpose.

CMS’s CCIIO has overall responsibility for developing and implementing policies and rules governing state-based marketplaces, overseeing the implementation and operations of state-based marketplaces, and administering federally facilitated marketplaces for states that elect not to establish their own.
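Returning to the FIPS 199 categorization described earlier: a system's overall impact level for each security objective is the highest ("high water mark") of the levels assigned across the information types the system handles. A minimal sketch of that rule follows; the information types and impact levels shown are hypothetical examples, not actual CMS categorizations.

```python
# FIPS 199 "high water mark": for each security objective, the system-level
# impact is the highest impact among all information types it processes.
LEVELS = {"low": 0, "moderate": 1, "high": 2}

def categorize(info_types):
    """info_types: list of dicts mapping each objective to an impact level."""
    objectives = ("confidentiality", "integrity", "availability")
    return {
        obj: max((t[obj] for t in info_types), key=LEVELS.__getitem__)
        for obj in objectives
    }

# Hypothetical information types resident on a marketplace system:
applicant_pii = {"confidentiality": "high", "integrity": "moderate", "availability": "moderate"}
plan_data     = {"confidentiality": "low",  "integrity": "moderate", "availability": "high"}

print(categorize([applicant_pii, plan_data]))
# -> {'confidentiality': 'high', 'integrity': 'moderate', 'availability': 'high'}
```

Under FIPS 200, that per-objective result then drives the selection of a minimum security control baseline, which is why categorizing even one high-impact information type raises the bar for the whole system.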
State-based marketplaces and the federal government must share data and otherwise integrate IT systems for the implementation and operation of the marketplaces. According to federal regulations, state-based marketplaces are responsible for protecting and ensuring the confidentiality, integrity, and availability of marketplace enrollment information, and must also establish and implement certain privacy and security standards. CMS oversees state-based marketplaces and compliance with those standards. Additionally, federal statutes, guidance, and standards require the federal government to protect its IT systems and the information contained within these systems. As part of its oversight responsibilities, CMS developed a suite of documents—known as the Minimum Acceptable Risk Standards for Exchanges (MARS-E)—that addresses security and privacy standards for the state-based marketplaces. The documents define a risk-based security and privacy framework for state-based marketplaces and their contractors to use in the design and implementation of their IT systems and provide guidance regarding the minimum level of security controls that must be implemented to protect information and information systems. The MARS-E is designed to facilitate marketplaces’ compliance with FISMA, the Health Insurance Portability and Accountability Act of 1996, and the Privacy Act of 1974, among other relevant laws. Over the past 2 years, we have issued a number of reports highlighting challenges that CMS has faced in implementing and operating the health insurance marketplaces’ IT systems. In September 2014, we reported that while CMS had taken steps to protect the security and privacy of data processed and maintained by the complex set of systems and interconnections that support Healthcare.gov, weaknesses remained in both the processes used for managing information security and privacy as well as the technical implementation of IT security controls. 
Specifically, we noted that Healthcare.gov and the related systems had been deployed despite incomplete security plans and privacy documentation, incomplete security tests, and the lack of an alternate processing site to avoid major service disruptions. We recommended that CMS implement 6 management controls and 22 information security controls to help ensure that the systems and information related to Healthcare.gov are protected. The management recommendations were aimed at ensuring system security plans were complete, privacy risks were analyzed and documented, computer matching agreements were developed with the Office of Personnel Management and the Peace Corps, a comprehensive security assessment of the federally facilitated marketplace was performed, the planned alternate processing site was made operational in a timely fashion, and detailed security roles and responsibilities for contractors were established. HHS fully or partially concurred with our information security program-related recommendations and with all 22 of the recommendations to improve the effectiveness of its information security controls. As of December 2015, CMS had taken steps to address all 6 information security program-related recommendations and was in the process of addressing the security control-related recommendations. In March 2015, we reported that several problems with the initial development and deployment of Healthcare.gov and its supporting systems had led to consumers encountering widespread performance issues when trying to create accounts and enroll in health plans. We noted, for example, that CMS had not adequately conducted capacity planning, adequately corrected software coding errors, or implemented all planned functionality. In addition, the agency did not consistently apply recognized best practices for system development, which contributed to the problems with the initial launch of Healthcare.gov and its supporting systems.
In this regard, weaknesses existed in the application of requirements, testing, and oversight practices. Further, we noted that HHS had not provided adequate oversight of the Healthcare.gov initiative through its Office of the Chief Information Officer. We made recommendations aimed at improving requirements management, system testing processes, and oversight of development activities for systems supporting Healthcare.gov. HHS concurred with all of our recommendations and subsequently took or planned steps to address the weaknesses, including instituting a process to ensure functional and technical requirements are approved, developing and implementing a unified standard set of approved system testing documents and policies, and providing oversight for Healthcare.gov and its supporting systems through the department-wide investment review board. In September 2015, we reported that CMS established a framework for oversight of IT projects within state-based marketplaces, but the oversight was not always effectively executed. For example, CMS tasked various offices with responsibilities for overseeing states’ marketplace IT projects, but the agency did not always clearly document, define, or communicate its oversight roles and responsibilities to states as called for by best practices for project management. In addition, CMS did not involve all relevant senior executives in decisions to approve federal funding for states’ IT marketplace projects. Lastly, CMS established a process that required the testing of state marketplace systems to determine whether they were ready to be made operational, but the systems were not always fully tested, increasing the risk that they would not operate as intended. We recommended that CMS define and communicate its oversight roles and responsibilities, ensure senior executives are involved in funding decisions for state IT projects, and ensure that states complete testing of their systems before they are put into operation. 
HHS concurred with all of our recommendations and stated it had taken various actions that were focused on improving its oversight and accountability for states’ marketplace efforts. Most recently, in February 2016, we reported that CMS should take actions to strengthen enrollment controls and manage fraud risk. We noted, for example, that according to agency officials, CMS does not track or analyze aggregate outcomes of data hub eligibility and enrollment queries—either the extent to which a responding agency delivers information responsive to a request, or whether an agency reports that information was not available. In addition, CMS did not have an effective process for resolving inconsistencies for individual applicants for the federal Health Insurance Marketplace. Lastly, CMS approved subsidized coverage for 11 of 12 fictitious GAO phone or online applicants for 2014, and the applicants obtained a total of about $30,000 in annual advance premium tax credits, plus eligibility for lower costs at time of service. We made 8 recommendations aimed at strengthening enrollment controls and managing fraud risk, including that CMS consider analyzing outcomes of the verification system, take steps to resolve inconsistencies, and conduct a risk assessment of the potential for fraud in Marketplace applications. HHS concurred with all of GAO’s recommendations. NIST defines an information security incident as a violation or imminent threat of violation of computer security policies, acceptable use policies, or standard security practices. A security incident can occur under many circumstances and for many reasons. It can be inadvertent, such as from the loss of an electronic device, or deliberate, such as from the theft of a device, or a cyber-based attack by a malicious individual or group, agency insider, foreign nation, terrorist, or other adversary.
Protecting federal systems and the information on them is essential because the loss or unauthorized disclosure or alteration of the information can lead to serious consequences and can result in substantial harm to individuals and the federal government. FISMA requires the establishment of a federal information security incident center to, among other things, provide timely technical assistance to agencies regarding cyber incidents. The United States Computer Emergency Readiness Team (US-CERT), established in 2003, is the federal information security incident center that fulfills the FISMA mandate. US-CERT consults with agencies on cyber incidents, provides technical information about threats and incidents, compiles the information, and publishes it on its website, https://www.us-cert.gov/. US-CERT also issues guidelines for agencies to use when reporting incidents. For the time period under our review, US-CERT defined seven categories of incidents for federal agencies to use in reporting incidents, and CMS added two categories of its own, which are described below in table 1. Between October 6, 2013, and March 8, 2015, CMS reported 316 incidents affecting Healthcare.gov or key supporting systems. These included—among others—incidents which involved PII and attempts by attackers to compromise part of the Healthcare.gov system. None of the incidents described in the data included any evidence that an attacker had compromised sensitive data, including PII, from Healthcare.gov. Figure 4 shows the 316 reported incidents grouped according to the US-CERT and CMS-defined incident categories. CAT 1 unauthorized access incidents made up 17 percent of the incidents logged during the time period under review. Of those, only one incident—which CMS publicly disclosed last year—involved a confirmed instance of an attacker gaining access to a Healthcare.gov-related server. In that incident, the attacker installed malware on a test server that held no PII.
The rest of the CAT 1 incidents involved occurrences such as PII being disclosed because of physical mail being sent to an incorrect recipient or unencrypted PII being transmitted via e-mail to a limited number of individuals. CMS also assessed incidents’ impact, categorizing incidents as having an impact of “Extensive/Widespread,” “Significant/Large,” “Moderate/Limited,” or “Minor/Localized.” More than 98 percent of the reported incidents were assessed as “Moderate/Limited” impact, and the remainder, less than 2 percent, as “Minor/Localized” impact. See figure 5 for a breakdown of incidents by CMS-assigned level of impact. CMS did not classify any of the incidents we reviewed as having “Extensive/Widespread” impact, and classified only one incident as having “Significant/Large” impact. In that incident, a list of CMS employee account IDs, including passwords that had not yet been assigned to employees and phone numbers, was transmitted to CMS staff via an unencrypted e-mail message. In order to mitigate the incident, CMS created new passwords for the affected employees and advised the employees to log on and change their passwords. A privacy incident generally refers to the unauthorized or unintentional exposure, disclosure, or loss of sensitive information, including PII. According to CMS, 41 of the 316 incidents were reported to involve PII either not being secured properly or being exposed to an unauthorized individual, as opposed to other security issues affecting Healthcare.gov and key supporting systems. Of the 41 PII incidents in the CMS data, the agency classified 40 as being of “Moderate/Limited” impact, and one as being of “Minor/Localized” impact. The number of individuals affected by these incidents was not fully documented. 
While CMS, as of October 2014, began including an estimate of the number of affected individuals in incident reports, several of the reports we reviewed were from earlier incidents and did not contain estimates of the number of affected individuals. See figure 6 for a breakdown of the privacy incidents by CMS-assigned level of impact. As noted above, none of these incidents were the result of an attacker compromising data, but were rather the result of errors such as information being sent to the incorrect recipient, PII being transmitted in an unencrypted format, or system configuration errors causing PII to be recorded to system logs or displayed in places it should not have been. A basic management objective for any organization is to protect the confidentiality, integrity, and availability of the information and systems that support its critical operations and assets. Organizations accomplish this by designing and implementing access and other controls that are intended to protect information and systems from unauthorized disclosure, modification, and loss. Specific controls include, among other things, those related to identification and authentication of users, authorization restrictions, and configuration management. As required by FISMA, NIST has issued guidance for agencies on how to select and implement controls over their information systems. Additionally, in June 2015, OMB directed agencies to take steps to strengthen their controls in the areas of scanning and monitoring for attackers, patching vulnerabilities in a timely manner, limiting the use of administrative accounts, and requiring the use of two-factor authentication, especially for administrators. As we previously reported, CMS took steps to protect the security and privacy of data processed and maintained by the complex set of systems and interconnections that support Healthcare.gov, including the data hub.
The steps included developing required security program policies and procedures, establishing interconnection security agreements with its federal and commercial partners, and instituting required privacy protections. For example, it assigned overall responsibility for securing the agency’s information and systems to appropriate officials, including the agency Chief Information Officer and Chief Information Security Officer, and designated information system security officers to assist in certifying information systems of particular CMS components. Additionally, CMS documented information security policies and procedures to safeguard the agency’s information and systems and to reduce the risk of and minimize the effects of security incidents. While CMS has taken steps to secure the data hub, we identified weaknesses in the technical controls protecting the data flowing through the system. Specifically, CMS did not effectively implement or securely configure key security tools and devices to sufficiently protect the users and information on the data hub system from threats to confidentiality, integrity, and availability. For example: CMS did not appropriately restrict the use of administrative privileges for data hub systems. NIST Special Publication 800-53 recommends that agencies follow the concept of “least privilege,” giving users and administrators only the privileges and access necessary to perform their assigned duties. OMB has also instructed agencies to tighten policies and procedures for privileged users, including limiting the functions privileged users can perform with their administrative accounts. However, CMS did not consistently restrict administrator accounts to perform only the functions necessary to perform their assigned duties. CMS officials stated they are working to further restrict administrative privileges and are reviewing accounts to ensure permissions and roles are appropriate. 
By not enforcing least privilege, CMS faces an increased risk that a malicious insider or an attacker using a compromised administrator account could access sensitive data flowing through the data hub. CMS did not consistently implement patches for several data hub systems. NIST Special Publication 800-53 recommends that organizations test and install newly released security patches, service packs, and hot fixes, and OMB has instructed agencies to patch critical vulnerabilities without delay. However, CMS did not consistently apply patches to critical systems or applications supporting the data hub in a timely manner. CMS officials stated they are reviewing the patch histories on all servers and are directing staff to bring them up-to-date or provide a business rationale for not applying specific patches. By not keeping current with security patches, CMS faces an increased risk that servers supporting the data hub could be compromised through exploitation of known vulnerabilities. CMS did not securely configure the data hub’s administrative network. NIST Special Publication 800-53 recommends how such a network should be configured. CMS officials stated that they are reviewing the network’s configurations to identify a plan for remediation. Without adhering to NIST recommendations, CMS may face an increased risk of unauthorized access to the data hub network. In addition to the above weaknesses, we identified other security weaknesses in controls related to boundary protection, identification and authentication, authorization, encryption, audit and monitoring, and software updates that limit the effectiveness of the security controls on the data hub and unnecessarily place sensitive information at risk of unauthorized disclosure, modification, or exfiltration. 
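The least-privilege principle discussed above can be illustrated with a simple audit check. The sketch below is not CMS's actual tooling; the account names, role names, and privilege labels are hypothetical. It flags any administrator account holding privileges beyond what its assigned role requires:

```python
# Hypothetical role definitions: the privileges each duty actually requires.
ROLE_PRIVILEGES = {
    "db_admin": {"db_read", "db_write", "db_backup"},
    "web_admin": {"service_restart", "log_read"},
}

def audit_least_privilege(accounts):
    """Return accounts whose granted privileges exceed their role's needs."""
    findings = {}
    for name, info in accounts.items():
        allowed = ROLE_PRIVILEGES.get(info["role"], set())
        excess = info["privileges"] - allowed  # privileges beyond the role
        if excess:
            findings[name] = sorted(excess)
    return findings

# Example: one account has been granted OS-level root beyond its role.
accounts = {
    "svc_hub_db": {"role": "db_admin",
                   "privileges": {"db_read", "db_write", "db_backup"}},
    "svc_hub_web": {"role": "web_admin",
                    "privileges": {"service_restart", "log_read", "os_root"}},
}

print(audit_least_privilege(accounts))  # {'svc_hub_web': ['os_root']}
```

A periodic review of this kind is one way an agency could detect administrator accounts that drift beyond the functions necessary for their assigned duties.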
According to CMS officials, in response to the identified weaknesses, they have formed a task force, comprised of the Deputy Chief Information Security Officer, system maintainers and administrators, database administrators, and security personnel, to work with the stakeholders responsible for the data hub applications and the underlying platform and infrastructure. The same officials stated that meetings will be held on at least a weekly basis to monitor milestone dates, discuss activities, and identify potential barriers to resolution of any given weakness. The control weaknesses we identified during this review are described in greater detail in a separate report with limited distribution. CMS has taken various actions to oversee the security and privacy controls implemented at the state-based marketplaces, including assigning roles and responsibilities for oversight entities, conducting regular meetings with state officials to discuss pending issues, and establishing a new reporting tool to monitor marketplace performance. However, CMS has not fully documented procedures that define its oversight responsibilities. Further, while CMS has set requirements for annual testing of a subset of security controls implemented within the state-based marketplaces, it does not require continuous monitoring or annual comprehensive testing. Until CMS documents its oversight procedures and requires continuous monitoring of security controls, it does not have reasonable assurance that the states are promptly identifying and remediating weaknesses and therefore faces a higher risk that attackers could compromise the confidentiality, integrity, and availability of the data contained in state-based marketplaces. The need for better assurance that controls are working was highlighted by the results of the reviews we conducted of security and privacy controls at three state-based marketplaces. 
For those three marketplaces, we identified significant weaknesses that placed the data they contained at risk of compromise. Effective organizational policies and procedures define key management activities in detail, establish time frames for their completion, and specify follow-up actions that must be taken to correct deficiencies. According to GAO’s Standards for Internal Control in the Federal Government, an organization’s policies should identify internal control responsibilities and each unit’s responsibility for designing and implementing those controls. Moreover, each policy should specify the appropriate level of detail to allow management to effectively monitor the control activities and define day-to-day procedures, which may include the timing of when an activity is to occur and any follow-up corrective actions to be performed if deficiencies are identified. While CMS has developed policies for overseeing security and privacy controls at the state-based marketplaces, it has not defined specific oversight procedures, the timing for when each activity should occur, or what follow-up corrective actions should be performed if deficiencies are identified. CMS has assigned roles and responsibilities for oversight entities, conducted regular meetings with state officials to discuss pending issues, and established a new reporting tool to monitor marketplace performance. For example, as we reported in September 2015, CMS outlined oversight roles and responsibilities. Three key offices—CCIIO, Office of Technology Solutions (OTS), and Center for Medicaid and CHIP Services (CMCS)—were identified as having responsibility for overseeing states’ efforts in establishing the marketplaces. Their primary roles and duties included the following: CCIIO led the marketplace implementation, and within that office, State Officers were assigned to be accountable for day-to-day communications with state marketplace officials. 
OTS was responsible for systems integration and software development efforts to ensure that the functions of the marketplaces were carried out. A primary participant within OTS was the IT project manager, who was the individual responsible for monitoring, among other things, state-based marketplaces’ IT development activities. CMCS was the office responsible for coordinating and approving implementation of Medicaid activities related to the health insurance marketplaces. The office carried out these responsibilities in conjunction with CCIIO. While CMS outlined general oversight roles, it did not define or document the specific day-to-day activities of these offices and staff that are responsible for the oversight. For example, according to CCIIO officials, the state officers conduct oversight through weekly meetings with state-based marketplace officials. The same officials stated that the meetings do not have a defined agenda or procedures, but that identified control weaknesses or other security issues are discussed. Further, there are no documented procedures that outline the specific responsibilities of the IT project manager, who was the individual responsible for monitoring state-based marketplaces’ IT development activities. In 2015, CMS began using a new reporting tool to monitor state performance. The State Based Marketplace Annual Reporting Tool (SMART) is intended to collect information to be used as the basis for evaluating a state-based marketplace’s compliance with regulations and CMS standards. Information collected through SMART includes performance metrics, summaries from independent programmatic audits, and an attestation to the submission of the most recent required security and privacy documentation. The first submissions from the states were due on April 1, 2015. According to CMS officials, they received the submissions and, as of December 2015, were still reviewing them.
While SMART is intended to collect information on compliance with regulations and CMS standards, including security and privacy controls, CMS has not defined specific follow-up procedures or time frames, including identifying corrective actions to be performed if deficiencies are identified. CMS officials stated SMART is a reporting mechanism used to provide a comprehensive picture of state-based marketplaces and that CMS does not use it to identify corrective actions to be performed if deficiencies are identified. However, until CMS defines and documents its specific day-to-day procedures, the timing of when control activities are to occur, and what follow-up corrective actions are to be performed if deficiencies are identified, the agency does not have reasonable assurance that it is providing effective oversight of security and privacy at state-based marketplaces. FISMA requires that an agency develop, document, and implement an agency-wide information security program. The program should provide security for the information and information systems that support the operations of the agency, including those provided or managed by a contractor or other source. As part of the information security program, the agency should require periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually. FISMA requires this testing to be comprehensive, including testing of management, operational, and technical controls of every information system identified in the inventory. Further, in November 2013 OMB issued guidance to federal agencies on managing information security risk on a continuous basis, which includes the requirement to continually monitor the security controls in information systems and the environments in which they operate. 
OMB noted that managing information risk on a continuous basis allows agencies to maintain awareness of information security vulnerabilities and threats to support risk management decisions and improve the effectiveness of safeguards and countermeasures. Rather than enforcing a static, point-in-time reauthorization process, agencies were encouraged by OMB to conduct ongoing authorizations of their information systems and the environments in which they operated, including common controls, through the implementation of their risk management programs. Although CMS has set requirements for periodic testing of the security controls at the state-based marketplaces, it requires neither continuous monitoring nor comprehensive annual testing. Any state seeking to gain an “authority to connect” to the data hub is required to submit documentation that it has properly secured its planned connection. The standard “authority to connect” to the data hub is issued for a 3-year period. Following the approval of the initial “authority to connect,” every state is required to conduct reviews of the documentation on a yearly basis, submit quarterly plan of action and milestone reports, and re-sign the interconnection security agreement every 3 years or whenever a significant change has occurred to the interconnected systems. As part of the signed agreement, each state must specify the security controls it has implemented and attest that the state IT system is designed, managed, and operated in compliance with CMS standards. According to the MARS-E, all security controls are required to be assessed over a 3-year period; to meet this requirement, a subset of controls is to be tested each year so that every control is covered within the cycle. However, according to CMS officials, during the time of our review, the states were not required to submit evidence that they had tested subsets of controls each year.
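The 3-year assessment cycle described above amounts to partitioning the control catalog into yearly subsets so that every control is tested once per cycle. A minimal sketch of that scheduling logic follows; the control identifiers are illustrative examples drawn from NIST SP 800-53 families, not the actual MARS-E catalog:

```python
def schedule_assessments(controls, cycle_years=3):
    """Partition controls into yearly subsets so every control is
    assessed exactly once over the full cycle."""
    subsets = [[] for _ in range(cycle_years)]
    for i, control in enumerate(sorted(controls)):
        subsets[i % cycle_years].append(control)  # round-robin assignment
    return subsets

# Illustrative control IDs (access control, audit, configuration, etc.)
controls = ["AC-2", "AC-6", "AU-2", "CM-6", "IA-2", "SC-8", "SI-2"]
plan = schedule_assessments(controls)
for year, subset in enumerate(plan, start=1):
    print(f"Year {year}: {subset}")
# Every control appears exactly once across the three years.
```

The weakness GAO identifies is not with this rotation itself but with verification: without states submitting evidence of each year's testing, there is no assurance the rotation is actually carried out.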
CMS officials stated that they monitor the effectiveness of security controls on an ongoing basis by reviewing documents that contain information on reported weaknesses. The same officials stated that they perform quarterly reviews of state marketplaces’ plan of action and milestone reports, and changes to the system boundaries, hardware, software, and data centers. These officials added that if serious deficiencies are noted in their review, such as a large number of open high or moderate findings, or findings that have been open for a long time, they have the ability to terminate a state’s connection to the data hub if the deficiencies are not remediated or sufficient progress is not made in a timely manner. However, according to CMS officials, they have not yet terminated any state’s connection to the data hub because states have remediated deficiencies to their satisfaction in a timely manner. Numerous significant security weaknesses have been identified in state- based marketplaces. For example, in the second quarter of fiscal year 2015, the 14 states that maintained their own state-based marketplaces reported a total of 27 high open findings, 288 moderate open findings, and 259 low open findings from their own internal assessments. One state reported 20 of the 27 high open findings during that time period. According to CMS officials, while they do not require comprehensive annual testing or continuous monitoring of security controls, they perform annual reviews of the system security plans for the state-based marketplaces and require the states to submit new security assessments anytime they make significant changes to the systems. CMS officials also stated that they monitor various state-generated documents on a weekly, monthly, or yearly basis depending on when the reports are being required. States are advised to include any new assessment, audit, or weakness discovered during normal day-to-day operations in those documents. 
However, for the plan of action and milestone reports and the state-based marketplaces we reviewed, the CMS oversight process has not resulted in timely identification and mitigation of security weaknesses. Without more frequent monitoring of the full set of security controls in the state-based marketplaces and the environments in which they operate, CMS does not have reasonable assurance that the states are promptly identifying and remediating weaknesses and therefore faces a higher risk that attackers could compromise the confidentiality, integrity, and availability of the data contained in state-based marketplaces. The need for better assurance that security and privacy controls are working properly was highlighted by the results of our reviews of technical controls at three state-based marketplaces, which identified significant weaknesses in those systems. In September 2015, we reported on our reviews of three state-based marketplaces that assessed the effectiveness of key program elements and controls implemented to protect the information they contain. We identified weaknesses in key elements of each state’s information security and privacy controls, such as security management, privacy policies and procedures, security awareness training, background checks, contingency planning, incident response, and configuration management. Further, we identified security weaknesses in technical controls related to access controls, cryptography, and configuration management that limit the effectiveness of the security controls on the systems. For example: One state did not encrypt connections to the authentication servers supporting its system. The MARS-E requires passwords to be encrypted when they are being transmitted across the network. However, the authentication servers we reviewed were configured to accept unencrypted connections.
As a result, an attacker on the network could observe the unencrypted transmission to gather usernames and password hashes, which could then be used to compromise those accounts. One state did not filter uniform resource locator (URL) requests from the Internet through a web application firewall to prevent hostile requests from reaching the marketplace website. NIST Special Publication 800-53 requires the enforcement of access controls through the use of firewalls. However, the state did not fully configure its filtering to block hostile URL requests from the Internet. As a result, hostile URL requests could potentially scan and exploit vulnerabilities of the portal and potentially gain access to remaining systems and databases of the marketplace. One state did not enforce the use of high-level encryption on its Windows servers. NIST Special Publication 800-53 and MARS-E require that if an agency uses encryption, it must use, at a minimum, a Federal Information Processing Standards 140-2–compliant cryptographic module. However, the state did not configure its Windows Active Directory and Domain Name System servers to require the use of Federal Information Processing Standards– compliant algorithms. As a result, the servers may employ weak encryption for protecting authentication and communication, increasing the risk that an attacker could compromise the confidentiality or integrity of the system. For each of the security and privacy weaknesses we identified, we also identified potential activities to mitigate those weaknesses. In total, we identified 24 potential mitigation activities to address weaknesses in the three states’ security and privacy programs and 66 potential mitigation activities to improve the effectiveness of their information security controls. The results of our work were reported separately in “limited official use only” correspondences. The three states generally agreed with the potential mitigation activities and have plans to address them. 
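The configuration weaknesses described above (unencrypted authentication connections, incomplete URL filtering, non-FIPS-compliant algorithms) are each checkable against a documented baseline. The sketch below assumes a simplified configuration format invented for illustration, not any state's actual system; the baseline keys paraphrase the MARS-E and NIST SP 800-53 requirements cited in the text:

```python
# Baseline paraphrasing the requirements cited above (illustrative keys).
BASELINE = {
    "auth_requires_tls": True,    # encrypt credentials in transit
    "waf_url_filtering": True,    # block hostile URL requests at the firewall
    "fips_140_2_crypto": True,    # FIPS 140-2-compliant algorithms only
}

def check_config(config):
    """Return the baseline settings the given configuration fails to meet."""
    return sorted(key for key, required in BASELINE.items()
                  if config.get(key) != required)

# Example: a marketplace accepting unencrypted logins and weak crypto.
state_config = {"auth_requires_tls": False,
                "waf_url_filtering": True,
                "fips_140_2_crypto": False}
print(check_config(state_config))  # ['auth_requires_tls', 'fips_140_2_crypto']
```

Automating checks of this kind against a baseline is one form the continuous monitoring GAO recommends could take, rather than relying on point-in-time attestations.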
Healthcare.gov and its key supporting systems have experienced information security incidents that involved both PII not being secured properly and attempts by attackers to compromise the Healthcare.gov system. However, for the incidents we reviewed, we did not find evidence that an outside attacker with malicious intent had compromised sensitive data. Although CMS continues to make progress in correcting or mitigating previously reported weaknesses within Healthcare.gov and its key supporting systems, the information security weaknesses found in the data hub will likely continue to jeopardize the confidentiality, integrity, and availability of Healthcare.gov. The information that is transferred through the data hub will likely remain vulnerable until the agency addresses weaknesses pertaining to boundary protection, identification and authentication, authorization, encryption, audit and monitoring, software updates, and configuration management. While CMS has taken steps to ensure that the information processed and maintained by state-based marketplaces is protected from unauthorized access or misuse, it lacks a documented oversight program to ensure that each state is implementing security and privacy controls properly. Given the significant number of control weaknesses found during our review of selected states, CMS not requiring continuous monitoring of security controls at the state level may pose unnecessary and increased security risks to the data hub and other Healthcare.gov systems.
To improve the oversight of privacy and security controls over the state- based marketplaces, we recommend that the Secretary of Health and Human Services direct the Administrator of the Centers for Medicare & Medicaid Services to take the following three actions: define procedures for overseeing state-based marketplaces, to include day-to-day activities of the relevant offices and staff; develop and document procedures for reviewing the SMART tool, including specific follow-up timelines and identifying corrective actions to be performed if deficiencies are identified; and require continuous monitoring of the privacy and security controls over state-based marketplaces and the environments in which those systems operate to more quickly identify and remediate vulnerabilities. In a separate report with limited distribution, we are also making 27 recommendations to resolve technical information security weaknesses within the data hub related to boundary protection, identification and authentication, authorization, encryption, audit and monitoring, and software updates. We sent draft copies of this report to the Department of Health and Human Services (HHS) and received written comments in return. These comments are reprinted in appendix II. HHS concurred with all of GAO’s recommendations. Further, it also provided information regarding specific actions the agency has taken or plans on taking to address these recommendations. We also received technical comments from HHS, which have been incorporated into the final report as appropriate. In its written comments, HHS noted that the department and its federal partners comply with relevant laws and use processes, controls, and standards to secure consumer data maintained within Healthcare.gov and its supporting systems. 
Further, it described the process it uses to mitigate information security risks associated with the data hub, manage security incidents, and oversee the security and privacy of data transmitted by the state-based marketplaces. We are sending copies of this report to the Department of Health and Human Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512-4499. We can also be reached by e-mail at [email protected] and [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to (1) describe the extent to which security and privacy incidents were reported for Healthcare.gov or key supporting systems; (2) assess the effectiveness of the controls implemented by the Centers for Medicare & Medicaid Services (CMS) to protect the Federal Data Services Hub (data hub) and the information it transmits; and (3) assess the effectiveness of CMS’s oversight of key program elements and controls implemented by state-based marketplaces and the effectiveness of those elements at selected state-based marketplaces to protect the information they contain. To address our first objective, we reviewed and analyzed data on information security and privacy incidents reported by CMS that occurred between October 6, 2013, and March 8, 2015, affecting Healthcare.gov and its supporting systems. Specifically, we reviewed a list of reported incidents and the information associated with each incident, such as the incident reports and actions taken to mitigate the incidents. We also reviewed the reported impact of each incident.
In order to ensure the reliability of the data, we reviewed related documentation, interviewed knowledgeable agency officials, and performed manual data testing for obvious errors. We then analyzed the information to identify statistics on the reported incidents. Lastly, we interviewed knowledgeable officials and reviewed CMS policies and procedures for incident handling. To address our second objective, we reviewed relevant information security laws and National Institute of Standards and Technology (NIST) standards and guidance to identify federal security and privacy control requirements. Further, we analyzed the overall network control environment, identified interconnectivity and control points, and reviewed controls for the network and servers supporting the data hub. Specifically, we reviewed controls over the data hub and its supporting software, the operating systems, network, and computing infrastructure provided by the supporting platform-as-a-service. In order to evaluate CMS’s controls over its information systems supporting Healthcare.gov, we used our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information; Office of Management and Budget (OMB) guidance; NIST standards and guidelines; and CMS policies, procedures, practices, and standards. 
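The "manual data testing for obvious errors" step described above can be sketched as a simple record check: verify that each reported incident has the expected fields and that its date falls within the October 6, 2013–March 8, 2015 review period. The field names and sample records are hypothetical; they stand in for whatever structure CMS's incident list actually used.

```python
from datetime import date

# Review period taken from the methodology described above.
REVIEW_START, REVIEW_END = date(2013, 10, 6), date(2015, 3, 8)
REQUIRED = ("incident_id", "reported", "impact")  # assumed field names

def obvious_errors(record):
    """Return a list of obvious data problems in one incident record."""
    errors = [f"missing {field}" for field in REQUIRED if field not in record]
    reported = record.get("reported")
    if reported is not None and not (REVIEW_START <= reported <= REVIEW_END):
        errors.append("date outside review period")
    return errors

# Hypothetical incident records for illustration.
records = [
    {"incident_id": "INC-001", "reported": date(2014, 1, 15), "impact": "low"},
    {"incident_id": "INC-002", "reported": date(2016, 5, 1), "impact": "low"},
]

flags = {r["incident_id"]: obvious_errors(r) for r in records}
```

A check of this shape would pass the first record and flag the second for falling outside the review window.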
Specifically, we reviewed network access paths to determine if boundaries had been adequately protected; analyzed system access controls to determine whether users had more permissions than necessary to perform their assigned functions; observed configurations for providing secure data transmissions across the network to determine whether sensitive data were being encrypted; reviewed software security settings to determine if modifications of sensitive or critical system resources had been monitored and logged; and inspected the operating system and application software on key servers and workstations to determine if critical patches had been installed and/or were up-to-date. We performed our work at CMS contractor facilities in Columbia, Maryland, and Chantilly, Virginia. To address our third objective, we selected three states, concentrating on states that received a high amount of federal grant funding through 2014 while ensuring a mix of population sizes (i.e., large, medium, and small) and contractors, so that we reviewed a variety of approaches to system development and operation. To assess the effectiveness of the three selected states’ key program elements and management controls, we compared their documented policies, procedures, and practices to the provisions and requirements contained in CMS security and privacy standards for state-based marketplaces. We also reviewed the results of testing of security controls; analyzed system and security documentation, including information exchange agreements; and interviewed state officials. To determine the effectiveness of the information security controls the three states implemented for information systems supporting their marketplaces, we reviewed risk assessments, security plans, system control assessments, contingency plans, and remedial action plans.
To evaluate the technical controls for the marketplaces, we analyzed the overall network control environment, identified control points, and reviewed controls for the supporting network and servers. We compared the aforementioned items to our Federal Information System Controls Audit Manual; NIST standards and guidelines; CMS security and privacy guidance for state-based marketplaces; and Center for Internet Security guidance. To determine the effectiveness of CMS oversight of the states’ program elements and controls, we reviewed CMS policies and procedures regarding oversight of the state-based marketplaces and compared them to Federal Information Security Modernization Act of 2014 requirements, OMB guidance on security controls testing, and GAO’s Standards for Internal Control in the Federal Government. We also obtained and reviewed oversight-related information that CMS provided to the three selected states. Lastly, we interviewed officials from the relevant CMS offices that had oversight responsibilities. We conducted this performance audit from December 2014 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, John de Ferrari, Edward Alexander Jr., Lon Chin, West Coile and Duc Ngo (assistant directors); Christopher Businsky; Mark Canter; Marisol Cruz; Lee McCracken; Monica Perez-Nelson; Justin Palk; Michael Stevens; and Brian Vasquez made key contributions to this report.
The Patient Protection and Affordable Care Act required the establishment of health insurance marketplaces in each state to allow consumers to compare, select, and purchase health insurance plans. States establishing their own marketplaces are responsible for securing the supporting information systems to protect sensitive personal information they contain. CMS is responsible for overseeing states' efforts, as well as securing federal systems to which marketplaces connect, including its data hub. GAO was asked to review security issues related to the data hub, and CMS oversight of state-based marketplaces. Its objectives were to (1) describe security and privacy incidents reported for Healthcare.gov and related systems, (2) assess the effectiveness of security controls for the data hub, and (3) assess CMS oversight of state-based marketplaces and the security of selected state-based marketplaces. GAO reviewed incident data, analyzed networks and controls, reviewed policies and procedures, and interviewed CMS and marketplace officials. This is a public version of a limited official use only report that GAO issued in March 2016. Sensitive information on technical issues has been omitted from this version. The Centers for Medicare & Medicaid Services (CMS) reported 316 security-related incidents, between October 2013 and March 2015, affecting Healthcare.gov—the web portal for the federal health insurance marketplace—and its supporting systems. According to GAO's review of CMS records for this period, the majority of these incidents involved such things as electronic probing of CMS systems by potential attackers, which did not lead to compromise of any systems, or the physical or electronic mailing of sensitive information to an incorrect recipient. None of the incidents included evidence that an outside attacker had successfully compromised sensitive data, such as personally identifiable information. 
Consistent with federal guidance, CMS has taken steps to protect the security and privacy of data processed and maintained by the systems and connections supporting Healthcare.gov, including the Federal Data Services Hub (data hub). The data hub is a portal for exchanging information between the federal marketplace and CMS's external partners. To protect these systems, CMS assigned responsibilities to appropriate officials and documented information security policies and procedures. However, GAO identified weaknesses in technical controls protecting the data flowing through the data hub. These included insufficiently restricted administrator privileges for data hub systems, inconsistent application of security patches, and insecure configuration of an administrative network. GAO also identified additional weaknesses in technical controls that could place sensitive information at risk of unauthorized disclosure, modification, or loss. In a separate report, with limited distribution, GAO recommended 27 actions to mitigate the identified weaknesses. In addition, while CMS has taken steps to oversee the security and privacy of data processed and maintained by state-based marketplaces, improvements are needed. For example, CMS assigned roles and responsibilities to various oversight entities, met regularly with state officials, and developed a reporting tool to monitor performance. However, it has not defined specific oversight procedures, such as the timing for when each activity should occur, or what follow-up corrective actions should be performed if deficiencies are identified. Further, CMS does not require sufficiently frequent monitoring of the effectiveness of security controls for state-based marketplaces, only requiring testing once every 3 years. GAO identified significant weaknesses in the controls at three selected state-based marketplaces. These included insufficient encryption and inadequately configured firewalls, among others. 
In September 2015, GAO reported these results to the three states, which generally agreed and have plans in place to address the weaknesses. Without well-defined oversight procedures and more frequent monitoring of security controls, CMS has less assurance that state-based marketplaces are adequately protected against risks to the sensitive data they collect, process, and maintain. GAO is recommending that CMS define procedures for overseeing the security of state-based marketplaces and require continuous monitoring of state marketplace security controls. HHS concurred with GAO's recommendations.
You are an expert at summarizing long articles. Proceed to summarize the following text: Amtrak was created by the Rail Passenger Service Act of 1970 to operate and revitalize intercity passenger rail service. Prior to its creation, intercity passenger rail service was provided by private railroads, which had continually lost money, especially after World War II. The Congress gave Amtrak specific goals, including providing modern, efficient intercity passenger service; helping to alleviate the overcrowding of airports, airways, and highways; and giving Americans an alternative to automobiles and airplanes to meet their transportation needs. Through fiscal year 1997, the federal government has invested over $19 billion in Amtrak. (Appendix I shows federal appropriations for Amtrak since fiscal year 1988.) In response to continually growing losses and a widening gap between operating deficits and federal operating subsidies, Amtrak developed its Strategic Business Plan. This plan (which has been revised several times) was designed to increase revenues and control cost growth and, at the same time, eliminate Amtrak’s need for federal operating subsidies by 2002. Amtrak also restructured its organization into strategic business units: the Northeast Corridor Unit, which is responsible for operations on the East Coast between Virginia and Vermont; Amtrak West, for operations on the West Coast; and the Intercity Unit, for all other service, including most long-distance, cross-country trains. Amtrak is still in a financial crisis despite the fact that its financial performance (as measured by net losses) has improved over the last 2 years. At the end of fiscal year 1994, Amtrak’s net loss was about $1.1 billion (in 1996 dollars). This loss was $873 million if the one-time charge of $255 million, taken in fiscal year 1994 for accounting changes, restructuring costs, and other items, is excluded. By the end of fiscal year 1996, this loss had declined to about $764 million. 
However, the relative gap between total revenues and expenses has not significantly closed, and passenger revenues (adjusted for inflation)—which Amtrak has been relying on to help close the gap—have generally declined over the past several years (see apps. II and III). More importantly, the gap between operating deficits and federal operating subsidies has again begun to grow. Amtrak continues to be heavily dependent on federal operating subsidies to make ends meet. Although operating deficits have declined, they have not gone down at the same rate as federal operating subsidies (see app. IV). At the end of fiscal year 1994, the gap between Amtrak’s operating deficit and federal operating subsidies was $75 million. At the end of fiscal year 1996, the gap had increased to $82 million. Over this same time, federal operating subsidies went from $502.2 million to $405 million. Amtrak’s continuing financial crisis can be seen in other measures as well. In February 1995, we reported that Amtrak’s working capital—the difference between current assets and current liabilities—declined between fiscal years 1987 and 1994. Although Amtrak’s working capital position improved in fiscal year 1995, it declined again in fiscal year 1996 to a $195 million deficit (see app. V). This reflects an increase in accounts payable and short-term debt and capital lease obligations, among other items. As we noted in our 1995 report, a continued decline in working capital jeopardizes Amtrak’s ability to pay immediate expenses. Amtrak’s debt levels have also increased significantly (see app. VI). Between fiscal years 1993 and 1996, Amtrak’s debt and capital lease obligations increased about $460 million—from about $527 million to about $987 million (in 1996 dollars). According to Amtrak, this increase was to finance the delivery of new locomotives and Superliner and Viewliner cars—a total of 28 locomotives and 245 cars delivered between fiscal years 1994 and 1996. 
These debt levels do not include an additional $1 billion expected to be incurred to finance 18 high-speed trainsets due to begin arriving in fiscal year 1999 and related maintenance facilities for the Northeast Corridor (at about $800 million) and the acquisition of 98 new locomotives (at about $250 million). It is important to note that Amtrak’s increased debt levels could limit the use of federal operating support to cover future operating deficits. As Amtrak’s debt levels have increased, there has also been a significant increase in the interest expenses that Amtrak has incurred on this debt (see app. VII). In fact, over the last 4 years, interest expenses have about tripled—from about $20.6 million in fiscal year 1993 to about $60.2 million in fiscal year 1996. This increase has absorbed more of the federal operating subsidies each year because Amtrak pays interest from federal operating assistance and principal from federal capital grants. Between fiscal years 1993 and 1996, the percentage of federal operating subsidies accounted for by interest expense has increased from about 6 to about 21 percent. As Amtrak assumes more debt to acquire equipment, the interest payments are likely to continue to consume an increasing portion of federal operating subsidies. The implementation of the strategic business plans appears to have helped Amtrak’s financial performance—as evidenced by the reduction in net losses between fiscal years 1994 and 1996 (from about $873 million to about $764 million). As we reported in July 1996, about $170 million in cost reductions came in fiscal year 1995 by reducing some routes and services, cutting management positions, and raising fares. Amtrak projected that these actions would reduce future net losses by about $315 million annually once they were in place. The net loss was reduced in fiscal year 1996 as total revenues increased more than total expenses did. 
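The arithmetic behind two of the figures quoted above checks out directly: interest expense "about tripled" between fiscal years 1993 and 1996, and debt and capital lease obligations rose about $460 million. The values below are the report's own (in 1996 dollars).

```python
# Interest expense and debt levels, in millions of 1996 dollars,
# as stated in the report.
interest_fy1993, interest_fy1996 = 20.6, 60.2
debt_fy1993, debt_fy1996 = 527, 987

# Interest grew by roughly a factor of three over the period.
growth = interest_fy1996 / interest_fy1993   # ~2.9, i.e., "about tripled"

# Debt and capital lease obligations increased by about $460 million.
debt_increase = debt_fy1996 - debt_fy1993
```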
In contrast, Amtrak estimates that its net loss in fiscal year 1996 would have been about $1.1 billion if no actions had been taken to address its financial crisis in 1994. Although the strategic business plans have helped reduce the net losses, targets for these losses have often been missed. To illustrate, Amtrak’s plans for fiscal years 1995 and 1996 included actions to reduce the net losses by $195 million—from about $834 million in 1994 (in current year dollars) to $639 million in fiscal year 1996. This reduction was to be accomplished, in part, by increasing revenues $191 million while holding expenses at about the 1994 level. However, actual net losses for this period totaled about $1.572 billion, or about $127 million more than the $1.445 billion Amtrak had planned. This difference was primarily due to the severe winter weather in fiscal year 1996—a contingency that Amtrak had not planned for and one that added about $29 million to expenses—and the unsuccessful implementation of various elements of the fiscal year 1996 business plan. For example, many of the productivity improvements (such as reducing the size of train crews) that Amtrak had planned in fiscal year 1996 were not achieved. As a result, cost savings fell short of Amtrak’s $108 million target by about $60 million. As we reported in July 1996, Amtrak has made little progress in negotiating new productivity improvements with its labor unions. For fiscal year 1997, as a result of higher than anticipated losses and an expected accounting adjustment, Amtrak planned for a net loss of $726 million. However, after the first quarter of operations, revenues were below target, and although expenses were lower than expected, the operating deficit was almost $4 million more than planned for that quarter. Furthermore, fiscal year 1997 financial results will be affected by the postponement of route and service adjustments planned for November 1996. 
Amtrak estimates that postponing these adjustments will bring a net revenue reduction of $6.9 million and a net cost increase of $29.2 million. Part of this increased cost will be offset by an additional federal operating grant of $22.5 million made to keep these routes operating. In part, as a result of these increased costs, Amtrak revised its planned fiscal year 1997 net loss upward to $762 million from the originally projected $726 million. Even that might not be achieved. As a result of additional unanticipated expenses and revenue shortfalls, Amtrak projects its actual fiscal year 1997 year-end net loss could be about $786 million. Amtrak’s projected fiscal year 1997 financial results may also affect its cash flow and the need to borrow money to make ends meet. For example, in January 1997, Amtrak projected a cash flow deficit of about $96 million at the end of fiscal year 1997—about $30 million more than planned. This deficit may require Amtrak to begin borrowing as early as March 1997 to pay its bills. Moreover, the cash flow deficit may be even larger than projected if Amtrak does not receive anticipated revenues from the sale of property ($16 million) and cost savings from lower electric power prices in the Northeast Corridor ($20.5 million). Amtrak’s fiscal year 1998 projected year-end cash balance is also bleak. On the basis of current projections, Amtrak estimates that it may have to borrow up to $148 million next year. Amtrak currently has short-term lines of credit of $150 million. Amtrak’s need for capital funds remains high. We reported in June 1996 that Amtrak will need billions of dollars to address its capital needs, such as bringing the Northeast Corridor up to a state of good repair. This situation largely continues today.
In May 1996, the Federal Railroad Administration (FRA) and Amtrak estimated that about $2 billion would be needed over the next 3 to 5 years to recapitalize the south end of the corridor and preserve its ability to operate in the near term at existing service levels. This renovation would include making improvements in the North and East river tunnels serving New York City and restoring the system that provides electric power to the corridor. This system, with equipment designed to last 40 to 50 years, is now between 60 and 80 years old, and, according to FRA and Amtrak, has gotten to the point at which it no longer allows Amtrak and others to provide reliable high-speed and commuter service. FRA and Amtrak believe that this capital investment of about $2 billion would help reverse the trend of adding time to published schedules because of poor on-time performance. Over the next 20 years, FRA and Amtrak estimate, up to $6.7 billion may be needed to recapitalize the corridor and make improvements targeted to respond to high-priority growth opportunities. A significant capital investment will also be required for other projects as well. For example, additional capital assistance will be required to introduce high-speed rail service between New York and Boston. In 1992, the Amtrak Authorization and Development Act directed that a plan be developed for regularly scheduled passenger rail service between New York and Boston in 3 hours or less. Currently, such trips take, on average, about 4-1/2 hours. Significant rehabilitation of the existing infrastructure as well as electrification of the line north of New Haven, Connecticut, will be required to accomplish this goal. According to Amtrak, since fiscal year 1991 the federal government has invested about $900 million in the high-speed rail program, and an additional $1.4 billion will be required to complete the project. 
A significant capital investment will also be required to acquire new equipment and overhaul existing equipment. Amtrak plans to spend about $1.7 billion over the next 6 years for these purposes. We reported in July 1996 and February 1995 on Amtrak’s need for capital investments and some of the problems being experienced as a result. We noted the additional costs of maintaining an aging fleet, the backlogs and funding shortages that were plaguing Amtrak’s equipment overhaul program, and the need for substantial capital improvements and modernization at maintenance and overhaul facilities. We also commented on the shrinking availability of federal funds to meet new capital investment needs. Our ongoing work, the results of which we expect to report later this year, is looking at these issues. The preliminary results of our work indicate that Amtrak has made some progress in addressing capital needs, but the going has been slow, and in some cases Amtrak may be facing significant future costs. For example, we reported in February 1995 that about 31 percent of Amtrak’s passenger car fleet was beyond its useful life—estimated at between 25 and 30 years—and that 23 percent of the fleet was made up of Heritage cars (cars that Amtrak obtained in 1971 from other railroads) that averaged over 40 years old. Since our report, the average age of the passenger car fleet has declined from 22.4 years old (in fiscal year 1994) to 20.7 years old (at the end of fiscal year 1996), and the number of Heritage cars has declined from 437 to 246. This drop is significant because Heritage cars, as a result of their age, were subject to frequent failures, and their downtime for repair was about 3 times longer than for other types of cars. However, these trends may be masking substantial future costs to maintain the fleet. 
In October 1996, about 53 percent of the cars in Amtrak’s active fleet of 1,600 passenger cars averaged 20 years old or more and were at or approaching the end of their useful life (see app. VIII). It is safe to assume that as this equipment continues to age, it will be subject to more frequent failures and require more expensive repairs. Our ongoing work also shows that the portion of Amtrak’s federal capital grant available to replace assets has continued to shrink. In February 1995, we reported that an increasing portion of the capital grant was being devoted to debt service, overhauls of existing equipment, and legally mandated uses, such as equipment modifications and environmental cleanup. In fiscal year 1994, only about $54 million of Amtrak’s federal capital grant of $195 million was available to purchase new equipment and meet other capital investment needs. Since our report, although the portion of the capital grant available to meet general capital investment needs increased in fiscal years 1995 and 1996, it shrunk in fiscal year 1997 (see app. IX). In fiscal year 1997, only $12 million of the capital grant of $223 million is expected to be available for general capital needs. The rest will be devoted to debt service ($75 million), overhauls of existing equipment ($110 million), or legally mandated work ($26 million). It is likely that as Amtrak assumes increased debt (including capital lease obligations) to acquire equipment and as the number of cars in Amtrak’s fleet that exceed their useful life increases, even less of Amtrak’s future capital grants will be available to meet capital investment needs. Amtrak’s ability to reach operating self-sufficiency by 2002 will be difficult given the environment within which it operates. Amtrak is relying heavily on capital investment to support its goal of eliminating federal operating subsidies. 
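The fiscal year 1997 capital-grant split quoted above can be verified with simple subtraction: of the $223 million grant, the committed amounts for debt service, overhauls, and legally mandated work leave only $12 million for general capital needs. The figures are taken directly from the report.

```python
# FY1997 federal capital grant and its committed uses, in millions of
# dollars, as stated in the report.
grant = 223
committed = {
    "debt service": 75,
    "equipment overhauls": 110,
    "legally mandated work": 26,
}

# Remainder available for general capital investment needs.
available = grant - sum(committed.values())   # 223 - 211 = 12
```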
For example, Amtrak’s draft fiscal year 1997-2002 Strategic Capital Plan indicates that about 830 million dollars’ worth of actions needed to close gaps in the operating budget through 2002 is directly linked to capital investments. To support these actions, Amtrak anticipates significantly increased federal capital assistance—about $750 million to $800 million per year. In comparison, in fiscal year 1997, Amtrak received federal capital funding of $478 million. Amtrak would like this increased assistance to be provided from a dedicated funding source. Given today’s budget environment, it may be difficult to obtain this degree of increased federal funding. In addition, providing funds from a dedicated source—such as the federal Highway Trust Fund—may not give Amtrak as much money as it expects. Historically, spending for programs financed by this Trust Fund, such as the federal-aid highway program, has generally been constrained by limiting the total amount of funds that can be obligated in a given year. Amtrak is also subject to the competitive and economic environment within which it operates. We reported in February 1995 that competitive pressures had limited Amtrak’s ability to increase revenues by raising fares. Fares were constrained, in part, by lower fares on airlines and intercity buses. From fiscal year 1994 to fiscal year 1996, Amtrak’s yield (revenue per passenger mile) increased about 24 percent, from 15.4 cents per passenger mile to about 19.1 cents. In comparison, between 1994 and 1995, airline yields declined slightly, intercity bus yields increased 18 percent, and the real price of unleaded regular gasoline increased a little less than 1 percent. However, it appears that Amtrak’s ability to increase revenues through fare increases has come at the expense of ridership, the number of passenger miles, and the passenger miles per seat-mile (load factor). Between fiscal years 1994 and 1996, all three declined. 
Such trade-offs in the future could limit further increases in Amtrak’s yield and ultimately revenue growth. Finally, Amtrak will continue to find it difficult to take those actions necessary to further reduce costs. These include making the route and service adjustments necessary to save money and to collectively bargain cost-saving productivity improvements with its employees. During fiscal year 1995, Amtrak was successful in reducing and eliminating some routes and services. For example, on seven routes Amtrak reduced the frequency of service from daily to 3 or 4 times per week, and on nine other routes various segments were eliminated. Amtrak estimates that such actions saved about $54 million. Amtrak was less successful in making route and service adjustments planned for fiscal year 1997 and estimates that its failure to take these actions will increase its projected fiscal year 1997 loss by $13.5 million. Amtrak has also been unsuccessful in negotiating productivity improvements with labor unions. Such improvements were expected to save about $26 million in fiscal year 1995 and $19.0 million in fiscal year 1996. According to an Amtrak official, over the last 2 years Amtrak has not pursued negotiations for productivity improvements. Amtrak’s financial future has been staked on the ability to eliminate federal operating support by 2002 by increasing revenues, controlling costs, and providing customers with high-quality service. Although the business plans have helped reduce net losses, Amtrak continues to face significant challenges in accomplishing this goal, and it is likely Amtrak will continue to require federal financial support—both operating and capital—well into the future. Madam Chairwoman, this concludes my testimony. I would be happy to respond to any questions that you or Members of the Subcommittee may have. The appropriations for fiscal year 1993 include $20 million in supplemental operating funds and $25 million for capital requirements. 
The appropriations for fiscal year 1997 include $22.5 million in supplemental operating funds and $60 million for the Northeast Corridor Improvement Program. For fiscal year 1997, an additional $80 million was appropriated to Amtrak for high-speed rail. Amounts are in current year dollars. In 1996 dollars, working capital declined from $149 million in fiscal year 1987 to a deficit of $195 million in fiscal year 1996.

[Figure: Composition of Amtrak’s passenger car fleet, by car type and average age: Viewliners (0.9 years), Superliner II (1.5 years), Horizon (7.1 years), Amfleet II (14.7 years), Superliner I (16.7 years), Amfleet I (20.9 years), Turboliner (21.0 years), Capitoliner (29.8 years), Baggage/Autocarrier (39.7 years), and Heritage Passenger (43.0 years). The age of the baggage and autocarrier cars is a weighted average.]

Amounts for fiscal year 1997 are estimated.
GAO discussed preliminary information from its ongoing work looking at Amtrak's progress in achieving operating self-sufficiency, focusing on: (1) Amtrak's financial condition and progress toward self-sufficiency; (2) Amtrak's need for, and use of, capital funds; and (3) some of the factors that will play a role in Amtrak's future viability. GAO noted that: (1) Amtrak's financial condition is still very precarious and heavily dependent on federal operating and capital funds; (2) in response to its deteriorating financial condition, in 1995 and 1996 Amtrak developed strategic business plans designed to increase revenues and reduce cost growth; (3) however, GAO has found that, in the past 2 years, passenger revenues, adjusted for inflation, have generally declined, and in fiscal year (FY) 1996, the gap between operating deficits and federal operating subsidies began to grow again to levels exceeding that of FY 1994, when the continuation of Amtrak's nationwide passenger rail service was severely threatened; (4) at the end of FY 1996, the gap between the operating deficit and federal operating subsidies was $82 million; (5) capital investment continues to play a critical role in supporting Amtrak's business plans and ultimately in maintaining Amtrak's viability; (6) such investment will not only help Amtrak retain revenues by improving the quality of its service but will be important in facilitating the revenue growth predicted in the business plans; (7) in 1995 and 1996, GAO reported that Amtrak faced significant capital investment needs to, among other things, bring its equipment and facilities systemwide and its tracks in the Northeast Corridor into a state of good repair and to introduce high-speed rail service between Washington and Boston; (8) Amtrak will need billions of dollars in capital investment for these and other projects; (9) it will be difficult for Amtrak to achieve operating self-sufficiency by 2002 given the environment within which it operates; 
(10) Amtrak is relying heavily on capital investment to support its business plans, which envision a significant increase in capital funding support--possibly from a dedicated funding source, such as the Highway Trust Fund; (11) while such a source may offer the potential for steady, reliable funding, the current budget environment may limit the amount of funds actually made available to Amtrak; (12) Amtrak is also relying greatly on revenue growth and cost containment to achieve its goal of eliminating federal operating support; and (13) the economic and competitive environment within which Amtrak operates may limit revenue growth, and Amtrak will continue to find it difficult to take those actions necessary, such as route and service adjustments, to reduce costs.
DOD’s supply chain is a global network that provides materiel, services, and equipment to the joint force. In February 2015, we reported that DOD had been experiencing weaknesses in the management of its supply chain, particularly in the following areas: inventory management, materiel distribution, and asset visibility. Regarding asset visibility, DOD has had weaknesses in maintaining visibility of supplies, such as problems with inadequate radio-frequency identification information to track all cargo movements. Additionally, in February 2015, we reported on progress DOD had made in addressing weaknesses in its asset visibility, including developing its 2014 Strategy. DOD has focused on improving asset visibility since the 1990s, and its efforts have evolved over time, as shown in figure 1. The 2015 Strategy states that the department introduced automatic identification technology capabilities to improve its ability to track assets. Since we added asset visibility to the high risk list in 2005, we have reported that DOD has made a great deal of progress in improving asset visibility. The 2014 Strategy notes that for more than 25 years, the department has been using technologies, starting with linear bar codes and progressing to a variety of more advanced technologies, with the goal of improving asset visibility. Specifically, the Strategies state that, based on lessons learned from years of war in Iraq and Afghanistan, the department introduced technology capabilities to improve its ability to track assets as they progress from posts, camps, and stations. Additionally, the 2015 Strategy states that DOD has made significant progress toward improving asset visibility, but opportunities for greater DOD-wide integration still exist. 
DOD has issued two strategies to guide its efforts in improving asset visibility: 2014 Strategy: In January 2014, the department issued its Strategy for Improving DOD Asset Visibility. The 2014 Strategy creates a framework whereby the components work collaboratively to identify improvement opportunities and capability gaps and to leverage technology capabilities, such as radio frequency identification. These capabilities aid in providing timely, accurate, and actionable information about the location, quantity, and status of assets. The 2014 Strategy identified 22 initiatives developed by the components that were intended to improve asset visibility. OSD officials stated that an initiative is conducted in accordance with component-level policy and procedures and can either be for a single component or for potential improvement throughout DOD. According to OSD officials, DOD components develop asset visibility initiatives, and these initiatives may be identified by the Asset Visibility Working Group or by components for inclusion in the Strategies. 2015 Strategy: In October 2015, DOD issued its update to the 2014 Strategy. The 2015 Strategy outlined an additional 8 initiatives developed by the components to improve asset visibility. According to OSD officials, they plan to issue an update to the 2015 Strategy, but the release date for this update has not been determined. These officials stated that the update to the 2015 Strategy will outline about 10 new initiatives. As we reported in January 2015, DOD has taken steps to monitor the asset visibility initiatives. Specifically, DOD has established a structure for overseeing and coordinating efforts to improve asset visibility. This structure includes the Asset Visibility Working Group, which according to the Strategies is responsible for monitoring the execution of the initiatives. 
Additionally, the components are designated as the offices of primary responsibility to ensure the successful execution of their initiatives, including developing cost estimates and collecting performance data. Working Group members include representatives from OSD and the components—Joint Staff, the Defense Logistics Agency, U.S. Transportation Command, and each of the military services. The components submit quarterly status reports to the Working Group about their initiatives—including progress made on implementation milestones, return on investment, and resources and funding. Additionally, as documented in the minutes from its May 2016 Asset Visibility Working Group meeting, DOD uses an electronic repository that includes information about the initiatives. The 2015 Strategy describes a process in which the Asset Visibility Working Group, among other things, reviews and concurs that an initiative has met its performance objectives. The Asset Visibility Working Group files an after-action report, which is added to the status report, for completed initiatives; this after-action report is to include performance measures used to assess the success of the initiative, challenges associated with implementing the initiative, and any lessons learned from the initiative. For example, an after-action report for the U.S. Transportation Command (U.S. TRANSCOM) active radio frequency identification (RFID) migration initiative stated that U.S. TRANSCOM had successfully tracked the use of old and new active RFID tags on military assets and updated an active RFID infrastructure to accommodate the new tags. DOD components have identified performance measures for the 8 initiatives we reviewed, but the measures do not generally include the key attributes of successful performance measures (i.e., the measures were not generally clear, quantifiable, objective, and reliable). 
We also found that after-action reports for some initiatives did not include information on the performance measures, which prevents DOD from effectively evaluating the success of the initiatives in achieving the goals and objectives described in the Strategies. DOD components have identified at least one performance measure for each of the 8 initiatives we examined. These initiatives are described in table 1. (For more details on each of the 8 initiatives, see appendix II.) DOD’s Strategies direct that expected outcomes or key performance indicators (which we refer to as performance measures) be identified for assessing the implementation of each initiative. The 2015 Strategy notes that these performance measures enable groups, such as the Asset Visibility Working Group and the Supply Chain Executive Steering Committee—senior-level officials responsible for overseeing asset visibility improvement efforts—to monitor progress toward the implementation of an initiative and to monitor the extent to which the initiative has improved asset visibility in support of the Strategy’s goals and objectives. For example, one of the performance measures for a U.S. TRANSCOM initiative on the migration to a new active radio frequency identification (RFID) tag is to track the use of old and new active RFID tags on military assets. Additionally, one of the performance measures for the Defense Logistics Agency’s (DLA) initiative on passive RFID technology for clothing and textiles is to track the time it takes to issue new uniforms to military personnel. The 2015 Strategy also notes that the performance measures are reviewed before an initiative is closed by the Asset Visibility Working Group. Our prior work on performance measurement has identified several important attributes that performance measures should include if they are to be effective in monitoring progress and determining how well programs are achieving their goals. 
(See table 2 for a list of selected key attributes.) Additionally, Standards for Internal Control in the Federal Government emphasizes using performance measures to assess performance over time. We have previously reported that by tracking and developing a performance baseline for all performance measures, agencies can better evaluate whether they are making progress and their goals are being achieved. Based on an analysis of the 8 initiatives we reviewed, we found that these performance measures did not generally include the key attributes of successful performance measures. Moreover, DOD’s Strategies lack sufficient direction on how components are to develop measures for these initiatives that would ensure that the performance measures developed include the key attributes for successful measures. This hinders DOD’s ability to ensure that effective measures are developed which will allow it to monitor the performance of the individual initiatives and whether the initiatives are likely to achieve the goals and objectives of the Strategies. We found that some of the performance measures for the 8 initiatives we reviewed included the key attributes of successful performance measures, such as linkage to goals and objectives in the Strategies. However, the measures for most of the initiatives did not have many of the key attributes of successful performance measures. As shown in table 3, for three initiatives there were no clearly identified performance measures; for five there were no measurable targets to allow for easier comparison with the initiatives’ actual performance; for five the measures were not objective; for five the measures were not reliable; for six there were no baseline and trend data associated with the measures; and for three the performance measures were not linked to the goals and objectives of the Strategies. 
A detailed discussion of our assessment of the performance measures for each key attribute follows:

1. Clarity: Measures for 5 of the 8 initiatives partially included the key attribute of “clarity.” For example, a performance measure for a Defense Logistics Agency initiative was to reduce the time required to issue uniforms by improving cycle times and reducing customer wait time. We identified “to reduce the time required to issue uniforms” as the name of the measure. However, the definition we identified for this measure, which is to improve cycle times and reduce customer wait time, did not include the methodology for computing the measure. Therefore, for the clarity attribute, we could not determine if the definition of this measure was consistent with the methodology used to calculate it. We reported in September 2015 that if the name and definition of the performance measure are not consistent with the methodology used to calculate it, data may be confusing and misleading to the component. For 3 of the 8 initiatives the performance measures were not clearly stated. For example, a performance measure for an Army initiative was to expand current capabilities by accessing data through a defense casualty system and integrate reporting and tracking into one application. We found that there was an overall description of the initiative, but it did not include a name or definition for the measure or a methodology for calculating it.

2. Measurable Target: Measures for 3 of the 8 initiatives fully included the key attribute of measurable targets. For example, a performance measure for a Joint Staff initiative is to have 100 percent visibility of condition codes for non-munitions inventory. Measures for 5 of the 8 initiatives did not identify a measurable target. 
For example, a performance measure for a Marine Corps initiative is to increase non-nodal visibility and the delivery status of materiel in transit within an area of responsibility, but the component did not provide a quantifiable goal or other measure that permits expected performance to be compared with actual results so that actual progress can be assessed.

3. Objectivity: Measures for 3 of the 8 initiatives partially included the key attribute of objectivity. For example, the performance measures for a Navy initiative indicated what is to be observed (timeliness, accuracy, and completeness), but the measures did not specify what population and time frames were to be observed. Measures for 5 of the 8 initiatives did not include the key attribute of objectivity. For example, the performance measures for an Army initiative did not indicate what is to be observed, in which population, and in what time frame.

4. Reliability: Measures for 3 of the 8 initiatives partially included the key attribute of reliability. For example, some of the performance measures for a Navy initiative included data quality control processes to verify or validate information such as automated or manual reviews and the frequency of reviews. However, the Navy did not specify how often it would perform these reviews. Measures for 5 of 8 initiatives did not include the key attribute of reliability. For example, the performance measures for an Army initiative did not include a name for the measures, definitions for these measures, or methodologies for calculating them. Therefore, we could not determine whether the measures would produce the same results under similar conditions.

5. Baseline and Trend Data: Measures for 2 of 8 initiatives partially included the key attribute of baseline and trend data. 
For example, a Joint Staff initiative included a baseline (e.g., improve the visibility of condition codes of non-munitions assets in the Global Combat Support System – Joint (GCSS-J) from 48 percent to 100 percent), but it did not include trend data. Measures for 6 of 8 initiatives did not include the key attribute of baseline and trend data. For example, the performance measures for a U.S. TRANSCOM initiative for implementing transportation tracking numbers did not include baseline and trend data to identify, monitor, and report changes in performance.

6. Linkage: Measures for 5 of 8 initiatives fully included the key attribute of linkage. For example, the performance measures for the Joint Staff initiative, intended to maximize the visibility of the condition codes of non-munitions assets in GCSS-J to support joint logistics planning, are linked to the 2015 Strategy’s goals of:
- improving visibility into customer materiel requirements and available resources;
- enhancing visibility of assets in transit, in storage, in process, and in theater; and
- enabling an integrated accessible authoritative data set.
Measures for 3 of the 8 initiatives did not include the key attribute of linkage because they were not aligned with agency-wide goals and mission and were not clearly communicated throughout the organization. These initiatives were identified in the 2014 Strategy and the descriptions of the initiatives did not specify which of the goals and objectives they were intended to support. We reported in January 2015 that the 2014 Strategy did not direct that the performance measures developed for the initiatives link to the goals or objectives in the 2014 Strategy, and we found that it was not clear whether the measures linked to the Strategy’s goals and objectives. Therefore, we recommended that DOD ensure that the linkage between the performance measures for the individual initiatives and the goals and objectives outlined in the 2014 Strategy be clear. 
DOD concurred with our recommendation and in its 2015 Strategy linked each initiative to the goals and objectives. The deficiencies that we identified in the performance measures can be linked to the fact that the Strategies have not included complete direction on the key attributes of successful performance measures. The 2014 Strategy provided direction on the types of expected outcomes and key performance indicators. For example, an expected outcome is to increase supply chain performance and the key performance indicator is to improve customer wait time. However, when OSD updated the 2014 Strategy it did not include in the 2015 Strategy an example of the types of expected outcomes and key performance indicators for components to use when developing performance measures. The lack of direction on successful performance measures may have resulted in measures that lacked key attributes, such as clarity, measurable target, objectivity, reliability, baseline and trend data, and linkage, as we previously discussed. While OSD officials stated that they believed the performance measures for the selected initiatives were sufficient to report on the status of the initiatives, our review of these measures determined that they could not be used to effectively assess the performance of the initiatives to improve asset visibility. Without sufficient direction in subsequent updates to the Strategy on developing successful performance measures, DOD has limited assurance that the components are developing measures that can be used to determine how the department is progressing toward achieving its goals and objectives related to improving asset visibility. As described in the 2015 Strategy, the Asset Visibility Working Group and the component review the performance of the initiatives during implementation. 
As we reported in January 2015, the components report quarterly to the Asset Visibility Working Group on the status of their initiatives—including progress made on implementation milestones, return on investment, and resources and funding. We found that DOD components had included performance measures in their quarterly status reports for the 8 initiatives we reviewed. However, DOD components have not always included performance measures to assess the success of their initiatives in after-action reports, which are added to the status report for completed initiatives. To close an initiative, the components responsible for the initiative request closure and the Asset Visibility Working Group files an after-action report, which serves as a closure document and permanent record of an initiative’s accomplishments. According to the 2015 Strategy, the after-action report should include information on the objectives met, problems or gaps resolved, challenges associated with implementing the initiative, any lessons learned from the initiative, and measures of success obtained. The Asset Visibility Working Group approves the closure of initiatives when the components have completed or canceled the initiatives and updated the status report section called the after-action report. Once an initiative is closed, according to DOD officials, the Working Group no longer monitors the initiative, but the components may continue to monitor it. According to these DOD officials, DOD components may update information provided to the Asset Visibility Working Group or the Working Group may request additional information after the initiative is closed, especially when implementation affects multiple components. We found that the after-action reports did not always include all of the information necessary. According to our review of after-action reports, as of October 2016, the Asset Visibility Working Group had closed 5 of the 8 asset visibility initiatives that we examined. 
Our review of the after-action reports for the 5 closed initiatives found the following:
- Two reports included information on whether the performance measures—also referred to as measures of success—for the initiative had been achieved.
- Three reports did not follow the format identified in the 2015 Strategy, and we could not determine whether the intent and outcomes based on performance measures for the initiative had been achieved.
We also reviewed after-action reports for the remaining 15 initiatives that were closed and found the following:
- Seven reports included information on whether the performance measures for the initiative had been achieved.
- Five reports did not include information on performance measures, because these measures were not a factor in measuring the success of the initiative.
- One report was not completed by the component.
- Two reports did not follow the format identified in the 2015 Strategy, and we could not determine whether the intent and outcomes based on performance measures for the initiative had been achieved.
Based on our analysis, it appears that while the Asset Visibility Working Group closed 20 initiatives, it generally did not have information related to performance measures to assess the progress of these initiatives when evaluating and closing them. Specifically, the after-action reports for 11 of 20 initiatives did not include performance measures that showed whether the initiative had met its intended outcomes in support of the department’s Strategies. Officials from the Asset Visibility Working Group stated that they generally relied on the opinion of the component’s subject matter experts, who are familiar with each initiative’s day-to-day performance, to assess the progress of these initiatives. 
While including the input of the component’s subject matter experts in the decision to close an initiative is important, without incorporating information on performance measures from after-action reports into the information considered by the Asset Visibility Working Group, DOD does not have assurance that closed initiatives have been fully assessed or that they have achieved the goals and objectives of the Strategies. DOD has fully met three of our criteria for removal from the High Risk List by improving leadership commitment, capacity, and its corrective action plan, and it has partially met the criteria to monitor the implementation of the initiatives and demonstrate progress in improving asset visibility. Table 4 includes a description of the criteria and our assessment of DOD’s progress in addressing each of them.

DOD Continues to Fully Meet Our High-Risk Criterion for Leadership Commitment

Our high-risk criterion for leadership commitment calls for leadership oversight and involvement. DOD has taken steps to address asset visibility challenges, and we found—as we had in our February 2015 high-risk report—that DOD has fully met this criterion. Senior leaders at the department have continued to demonstrate commitment to addressing the department’s asset visibility challenges, as evidenced by the issuance of DOD’s 2014 and 2015 Strategies. The Office of the Deputy Assistant Secretary of Defense for Supply Chain Integration provides department-wide oversight for development, coordination, approval, and implementation of the Strategies and reviews the implementation of the initiatives. 
Also, senior leadership commitment is evident in its involvement in asset visibility improvement efforts, including groups such as the Supply Chain Executive Steering Committee—a group of senior-level officials responsible for overseeing asset visibility improvement efforts—and the Asset Visibility Working Group—a group of officials that includes representatives from the components and other government agencies, as needed. The Asset Visibility Working Group identifies opportunities for improvement and monitors the implementation of initiatives. Sustained leadership commitment will be critical moving forward, as the department continues to implement its Strategies to improve asset visibility and associated asset visibility initiatives.

DOD Has Fully Met Our High-Risk Criterion for Capacity

Our high-risk criterion for capacity calls for agencies to demonstrate that they have the people and other resources needed to resolve risks in the high-risk area. In our October 2014 management letter to a senior OSD official and our January 2015 and February 2015 reports, we noted that resources and investments should be discussed in a comprehensive strategic plan, to include the costs to execute the plan and the sources and types of resources and investments—including skills, human capital, technology, information, and other resources—required to meet established goals and objectives. DOD has demonstrated that it has the capacity—personnel and resources—to improve asset visibility. For example, as we previously noted, the department had established the Asset Visibility Working Group that is responsible for identifying opportunities for improvement and monitoring the implementation of initiatives. The Working Group includes representatives from OSD and the components—Joint Staff, the Defense Logistics Agency, U.S. Transportation Command, and each of the military services. 
Furthermore, DOD’s 2015 Strategy called for the components to consider items such as manpower, materiel, and sustainment costs when documenting cost estimates for the initiatives in the Strategy, as we recommended in our January 2015 and February 2015 reports. For example, DOD identified and broke down estimated costs of $10 million for implementing an initiative to track Air Force aircraft and other assets from fiscal years 2015 through 2018 by specifying that $1.2 million was for manpower, $7.4 million for sustainment, and $1.4 million for one-time costs associated with the consolidation of a server for the initiative. Additionally, DOD broke down estimated costs of $465,000 for implementing an initiative to track Marine Corps assets from fiscal years 2013 through 2015 by specifying $400,000 for manpower and $65,000 for materials. However, in December 2015 we found that the 2015 Strategy included three initiatives that did not include cost estimates. To address this issue, in December 2016, a DOD official provided an abstract from the draft update to the 2015 Strategy that provides additional direction on how to explain and document cases where the funding for the initiatives is embedded within overall program funding. The draft update notes that there may be instances where asset visibility improvements are embedded within a larger program, making it impossible or cost prohibitive to isolate the cost associated with specific asset visibility improvements. In these cases, the document outlining the initiatives will indicate that cost information is not available and why. However, if at some point during implementation some or all costs are identified, information about the initiative will be updated. According to OSD officials, DOD plans to issue the update to the 2015 Strategy, but a release date has not been determined. 
DOD Has Fully Met Our High-Risk Criterion for a Corrective Action Plan

Our high-risk criterion for a corrective action plan calls for agencies to define the root causes of problems and related solutions and to include steps necessary to implement the solutions. The Fiscal Year 2014 National Defense Authorization Act (NDAA) required DOD to submit to Congress a comprehensive strategy and implementation plans for improving asset tracking and in-transit visibility. The Fiscal Year 2014 NDAA, among other things, called for DOD to include in its strategy and plans elements such as goals and objectives for implementing the strategy. The Fiscal Year 2014 NDAA also included a provision that we assess the extent to which DOD’s strategy and accompanying implementation plans include the statutory elements. In January 2014, DOD issued its Strategy for Improving DOD Asset Visibility and accompanying implementation plans that outline initiatives intended to improve asset visibility. DOD updated its 2014 Strategy and plans in October 2015. The 2014 and 2015 Strategies define the root causes of problems associated with asset visibility and related solutions (i.e., the initiatives). In our October 2014 management letter to a senior OSD official and our January and February 2015 reports, we found that while the 2014 Strategy and accompanying plans serve as a corrective action plan, there was not a clear link between the initiatives and the Strategy’s goals and objectives. We recommended that DOD clearly specify the linkage between the goals and objectives in the Strategy and the initiatives intended to implement the Strategy. DOD implemented our recommendation in its 2015 Strategy, which includes matrixes that link each of DOD’s ongoing initiatives intended to implement the Strategy to the Strategy’s overarching goals and objectives. DOD also added 8 initiatives to its 2015 Strategy and linked each of them to the Strategy’s overarching goals and objectives. 
DOD Has Taken Steps to Monitor the Status of Initiatives, but Its Performance Measures Could Not Always Be Used to Track Progress Our high-risk criterion on monitoring calls for agencies to institute a program to monitor and independently validate the effectiveness and sustainability of corrective measures, for example, through performance measures. DOD has taken steps to monitor the status of asset visibility initiatives, but we found that it has only partially met our high-risk criterion for monitoring. In our February 2015 High-Risk update, we referred to a 2013 report in which we had found that DOD lacked a formal, central mechanism to monitor the status of improvements or fully track the resources allocated to them. We also reported that while DOD’s draft 2014 Strategy included overarching goals and objectives that addressed the overall results desired from implementation of the Strategy, it only partially included performance measures, which are necessary to enable monitoring of progress. Since February 2015, DOD has taken some steps to improve its monitoring of its improvement efforts. As noted in the 2015 Strategy, DOD has described and implemented a process that tasks the Asset Visibility Working Group to review the performance of the components’ initiatives during implementation on a quarterly basis, among other things. The Working Group uses status reports from the DOD components that include information on resources, funding, and progress made toward implementation milestones. DOD also identified performance measures for its asset visibility initiatives. However, as previously discussed, the measures for the 8 initiatives we reviewed were not generally clear, quantifiable (i.e., lacked measurable targets and baseline and trend data), objective, and reliable. 
Measures that are clear, quantifiable, objective, and reliable can help managers better monitor progress, including determining how well they are achieving their goals and identifying areas for improvement, if needed. In December 2016, a DOD official provided an abstract from the draft update to the 2015 Strategy that noted that detailed metrics data will be collected and reviewed at the level appropriate for the initiative. High-level summary metrics information will be provided to the Working Group in updates to the plan outlining the initiatives. The extent to which this planned change will affect the development of clear, quantifiable, objective, and reliable performance measures remains to be determined. Additionally, as discussed previously, while the Asset Visibility Working Group has closed 20 initiatives, it generally did not have information related to performance measures to assess the progress of these initiatives. Specifically, after-action reports from 11 of 20 initiatives—which are added to the status reports for completed initiatives—did not include performance measures that showed whether the initiative had met its intended outcomes in support of the department’s Strategies. Without improved performance measures and information to support that progress has been made, DOD may not be able to monitor asset visibility initiatives. DOD Has Demonstrated Some Progress but Cannot Demonstrate that Its Initiatives Have Resulted in Measurable Outcomes and Improvements for Asset Visibility Our high-risk criterion for demonstrated progress calls for agencies to demonstrate progress in implementing corrective measures and resolving the high-risk area. DOD has made progress by developing and implementing its Strategies for improving asset visibility. 
In our October 2014 management letter to a senior OSD official and our January and February 2015 reports, we noted that in order to demonstrate progress in having implemented corrective measures, DOD should continue the implementation of the initiatives identified in the Strategy, refining them over time as appropriate. DOD reports that it has closed or will no longer monitor the status of 20 of the 27 initiatives and continues to monitor the remaining 7 initiatives. Additionally, in October 2016, DOD officials stated that they plan to add about 10 new initiatives in the update to the 2015 Strategy. For example, the U.S. Transportation Command’s new initiative, Military Service Air Manifesting Capability, is expected to promote timely, accurate, and complete in-transit visibility and effective knowledge sharing to enhance understanding of the operational environment. OSD officials have not yet determined a date for the release of the update to the 2015 Strategy. As discussed previously, we found that DOD cannot use the performance measures associated with the initiatives to demonstrate progress, because the measures are not generally clear, quantifiable (i.e., lack measurable targets and baseline and trend data), objective, and reliable. Additionally, we found that DOD has not taken steps to consistently incorporate information on an initiative’s performance measures in closure reports, such as after-action reports, in order to demonstrate the extent to which progress has been made toward achieving the intended outcomes of the individual initiatives and the overall goals and objectives of the Strategies. Without clear, quantifiable, objective, and reliable performance measures and information to support that progress has been made, DOD may not be able to demonstrate that implementation of these initiatives has resulted in measurable outcomes and progress toward achieving the goals and objectives in the Strategies. 
Also, DOD will be limited in its ability to demonstrate sustained progress in implementing corrective actions and resolving the high-risk area. DOD has taken some positive steps to address weaknesses in asset visibility. Long-standing management weaknesses related to DOD’s asset visibility functions hinder the department’s ability to provide spare parts, food, fuel, and other critical supplies in support of U.S. military forces. We previously reported on several actions that we believe DOD should take in order to mitigate or resolve long-standing weaknesses in asset visibility and meet the criteria for removing asset visibility from the High Risk List. We believe that DOD has taken the actions necessary to meet the capacity and action plan criteria by providing additional direction to the components on formulating cost estimates for the asset visibility initiatives. Additionally, DOD linked the 2015 Strategy’s goals and objectives with the specific initiatives intended to implement the Strategy. However, DOD’s efforts to monitor initiatives show that the performance measures DOD components currently use to assess these initiatives lack some of the key attributes of successful performance measures that we have identified. To the extent that these measures lack the key attributes of successful performance measures, they limit DOD’s ability to effectively monitor the implementation of the initiatives and assess the effect of the initiatives on the overall objectives and goals of the Strategies. Developing clear, quantifiable, objective, and reliable performance measures can help DOD better assess department-wide progress against the Strategies’ goals and clarify what additional steps need to be taken to enable decision makers to exercise effective oversight. 
An important step in determining what effect, if any, the asset visibility initiatives are having on the achievement of the Strategies’ goals and objectives will be to develop sound performance measures and incorporate information about these measures into the after-action reports when evaluating and closing initiatives. Until DOD components demonstrate that implementation of the initiatives will result in measurable outcomes and progress toward achieving the goals and objectives of the Strategies, DOD may be limited in its ability to demonstrate progress in implementing corrective actions and resolving the high-risk area. Once these actions are taken, DOD will be better positioned to demonstrate the sustainable progress needed in its approach to meeting the criteria for removing asset visibility from our High Risk List. We are making two recommendations to help improve DOD’s asset visibility. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Logistics and Materiel Readiness, in collaboration with the Director, Defense Logistics Agency; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; the Commander of the United States Transportation Command; and the Chairman of the Joint Chiefs of Staff, to: use the key attributes of successful performance measures—including clarity, measurable target, objectivity, reliability, baseline and trend data, and linkage—in refining the performance measures in subsequent updates to the Strategy to improve DOD’s efforts to monitor asset visibility initiatives; and incorporate into after-action reports information relating to performance measures for the asset visibility initiatives when evaluating and closing these initiatives to ensure that implemented initiatives will achieve the goals and objectives in the Strategies. In its written comments on a draft of this report, DOD partially concurred with our two recommendations. 
DOD’s comments are summarized below and reprinted in appendix IV. DOD partially concurred with our first recommendation that it use the key attributes of successful performance measures—including clarity, measurable target, objectivity, reliability, baseline and trend data, and linkage—in refining the performance measures in subsequent updates to the Strategy to improve DOD’s efforts to monitor asset visibility initiatives. DOD stated that it recognizes the need for performance measures to ensure the success of an asset visibility improvement effort but noted that the level of complexity and granularity of the metrics we suggest may not be suitable for all initiatives. DOD also stated that the purpose of the Strategy is to create a framework whereby the components can work collaboratively to coordinate and integrate department-wide efforts to improve asset visibility, not to provide complete direction on how to define, implement, and oversee these initiatives. Additionally, DOD stated that the next edition of the Strategy will encourage the adoption of our six key attributes for asset visibility initiatives to the extent appropriate, but will not mandate their use. As discussed in our report, use of the key attributes in measuring the performance of asset visibility initiatives would help DOD to better assess department-wide progress against the goals in its Strategy and clarify what additional steps need to be taken to enable decision makers to exercise effective oversight. Encouraging adoption of the key attributes, as DOD plans to do, is a positive step, but we continue to believe that DOD needs to use these key attributes to refine its performance measures to monitor the initiatives in the future. 
DOD partially concurred with our second recommendation that it incorporate into after-action reports information relating to performance measures for the asset visibility initiatives when evaluating and closing these initiatives to ensure that implemented initiatives will achieve the goals and objectives in the Strategies. DOD stated that it is important to capture and review performance data prior to a component closing an asset visibility initiative, but that the Strategy after-action report is not intended to be used to evaluate the success of an asset visibility initiative or to determine if an initiative has met its intended objectives. According to DOD, documentation and information to support the evaluation of initiatives is defined by and executed in accordance with component-level policy and procedures. DOD agreed to update its Strategy to clarify the purpose and use of the after-action reports and to ensure that the Strategy specifies roles and responsibilities for evaluating and closing initiatives. DOD’s response, however, did not state whether and how these updates to the Strategy would result in more consistent incorporation of information relating to performance measures when closing initiatives in the future. As we noted previously in this report, according to the 2015 Strategy, the after-action report for closed initiatives should include information on the objectives met, problems or gaps resolved, and measures of success obtained. We believe our recommendation is consistent with this guidance. Without incorporating this information, DOD does not have assurance that closed initiatives have been fully assessed and have resulted in achieving the goals and objectives of the Strategies. Therefore, we continue to believe that full implementation of our recommendation is needed. 
We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force, and the Commandant of the Marine Corps; the Director of Defense Logistics Agency; the Chairman of the Joint Chiefs of Staff; the Commander of the United States Transportation Command; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix V. To determine the extent to which DOD identified performance measures that allow it to monitor the progress of selected asset visibility initiatives identified in DOD’s 2014 and 2015 Strategy for Improving DOD Asset Visibility (Strategies), we reviewed documents such as the 2014 Strategy and its subsequent update in October 2015 (2015 Strategy); minutes from the Asset Visibility Working Group meetings; and documents showing the status of the implementation, including charts that track the development and closure of the asset visibility initiatives. Thirty initiatives have been included in the 2014 and 2015 Strategies, but 3 of these were halted for a variety of reasons. From the remaining 27 initiatives, we selected a non-generalizable sample of 8 initiatives. We selected at least one from each of the components to review and assess, including analyzing the performance measures associated with each initiative. In our selection of 8 initiatives to review, we also considered the stage of implementation of the initiative, to ensure that our review encompassed initiatives at different stages, from some that were just beginning to some that had already been completed. 
Specifically, we made selections based on the status of the initiatives as of December 2015 to include the earliest completion dates by component. In order to cover a range of initiatives—from some just beginning to some already completed—we selected for review 3 initiatives from the 2014 Strategy that had been closed, 2 ongoing initiatives that had been included in both Strategies, and 3 new initiatives that were included for the first time in the 2015 Strategy. The results from this sample cannot be generalized to the other 19 initiatives. We did not assess the initiatives to determine if they (1) met milestones, (2) lacked resources, or (3) had performance issues. Instead we assessed the initiatives to determine what progress DOD had made toward meeting the criteria for removing an area from our High Risk list. We surveyed program managers and other cognizant officials (hereafter referred to as component officials) responsible for the respective asset visibility initiatives we selected. We included questions in our survey related to the development and closure of the initiatives and took several steps to ensure the validity and reliability of the survey instrument. We also reviewed the Strategies to identify performance measures necessary to monitor the progress of the 8 initiatives we had selected. Two analysts independently assessed whether (1) DOD had followed the guidance set forth in the Strategies and (2) the measures for the initiatives included selected key attributes of successful performance measures (for example, are the measures clear, quantifiable—i.e., have measurable targets and baseline and trend data—objective, and reliable); any initial disagreements in assessments were resolved through discussion. 
We assessed these measures against 6 of 10 selected key attributes for successful performance measures—clarity, measurable target, objectivity, reliability, baseline and trend data, and linkage—identified in our prior work that we identified as relevant to the sample of initiatives we were examining. The remaining 4 attributes—government-wide priorities, core program activities, limited overlap, and balance—are used to assess agency-wide performance and are not applicable to our analysis, because we did not assess agency-wide initiatives. Because we had selected a subset of the component-level initiatives to review, these attributes did not apply. If all of the performance measures for an initiative met the definition of the relevant key attribute, we rated the initiative as having “fully included” the attribute. On the other hand, if none of the measures met the definition of the relevant key attribute, we rated the initiative as having “not included” the attribute. If some, but not all, of the measures met the definition of the relevant key attribute, then we rated the initiative as having “partially included” the attribute. We also selected sites to observe demonstrations of initiatives that were intended to show how they have achieved progress in improving asset visibility. We selected these demonstrations based on the location of the initiative, the responsible component, and the scope of the initiative. Additionally, we reviewed the after-action reports for all of the initiatives that had been closed—20 of 27 initiatives, including 5 of the 8 initiatives we reviewed in detail—by the Asset Visibility Working Group, as of October 31, 2016. We performed a content analysis in which we reviewed each of these after-action reports to determine whether it was completed for the initiative, documented whether measures were obtained, and identified challenges and lessons learned. 
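The three-tier rating rule described above ("fully included" if all of an initiative's measures meet the attribute's definition, "not included" if none do, "partially included" otherwise) can be sketched as a small function. The function name and the boolean-list encoding of assessments are illustrative assumptions, not part of GAO's actual methodology:

```python
def rate_attribute(measures_meet_attribute):
    """Rate how well an initiative's performance measures include one key
    attribute, given one boolean per measure indicating whether that
    measure meets the attribute's definition."""
    if not measures_meet_attribute:
        raise ValueError("an initiative must have at least one performance measure")
    if all(measures_meet_attribute):
        return "fully included"
    if not any(measures_meet_attribute):
        return "not included"
    return "partially included"

# For example, an initiative with three measures of which only two are
# clear would be rated "partially included" for the clarity attribute.
print(rate_attribute([True, True, False]))
```

The same all/none/some pattern also describes the report's "fully met," "not met," and "partially met" ratings for the high-risk criteria.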
One analyst conducted this analysis, coding the information and entering it into a spreadsheet; a second analyst checked the first analyst’s analysis for accuracy. Any initial disagreements in the coding were discussed and reconciled by the analysts. The analysts then tallied the responses to determine the extent to which the information was identified in the after-action reports. We also interviewed component officials and officials at the Office of the Deputy Assistant Secretary of Defense for Supply Chain Integration (hereafter referred to as OSD) to clarify survey responses and to discuss plans to develop the initiatives, including any efforts to monitor progress and demonstrate results. To determine whether DOD had addressed the five criteria—leadership commitment, capacity, corrective action plan, monitoring, and demonstrated progress—that would have to be met for us to remove asset visibility from our High Risk List, we reviewed documents such as DOD’s 2014 and 2015 Strategies and charts that track the implementation and closure of asset visibility initiatives. We included questions in our survey to collect additional information from officials on their efforts to address the high-risk criteria. For example, we asked how the component monitors the implementation of the initiative and whether there has been any demonstrated progress in addressing the opportunity, deficiency, or gap in asset visibility capability that the initiative was designed to address. One analyst evaluated DOD’s actions to improve asset visibility against each of our five criteria for removing an area from the High Risk list. A different analyst checked the first analyst’s analysis for accuracy. Any initial disagreements were discussed and reconciled by the analysts. 
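The coding-and-tally step of the content analysis described above can be illustrated with a minimal sketch. The field names and sample records below are invented for illustration and do not reflect GAO's actual coding scheme or data:

```python
from collections import Counter

# Hypothetical coded after-action reports: one record per report, with a
# boolean for whether the report documented the measures obtained.
coded_reports = [
    {"report": "initiative_a", "measures_documented": True},
    {"report": "initiative_b", "measures_documented": False},
    {"report": "initiative_c", "measures_documented": True},
]

# Tally the coded responses to determine how often the information
# appeared across the after-action reports.
tally = Counter(r["measures_documented"] for r in coded_reports)
print(f"{tally[True]} of {len(coded_reports)} reports documented measures")
```

In practice a second analyst would re-code the same records independently and reconcile any disagreements before tallying, as the report describes.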
We assessed DOD’s effort to meet each of the high-risk criteria as “not met” (i.e., none of the aspects of the criterion were addressed), “partially met” (i.e., some, but not all, aspects of the criterion were addressed), or “fully met” (i.e., all parts of the criterion were fully addressed). We shared with DOD officials our preliminary assessment of asset visibility relative to each of the criteria. To help ensure that our evaluation of improvements made relative to the high-risk criteria was consistent with our prior evaluations of Supply Chain Management and other issue areas, we reviewed our prior High Risk reports to gain insight into what actions agencies had taken to address the issues identified in these past reports. Additionally, we interviewed component officials and OSD officials to clarify their survey responses and to discuss plans to continue to make progress in improving asset visibility. We met with officials from the following DOD components during our review:
Office of the Secretary of Defense
Department of the Army
United States Marine Corps
Department of the Air Force
We surveyed component officials responsible for the asset visibility initiatives we reviewed. We included questions in our survey related to our high-risk criteria. As part of the survey development, we conducted an expert review and pre-tested the draft survey. We submitted the questionnaire for review by an independent GAO survey specialist and an asset visibility subject matter expert from OSD. The expert review phase was intended to ensure that content necessary to understand the questions was included and that technical information included in the survey was correct. To minimize errors that might occur from respondents interpreting our questions differently than we intended, we pre-tested our questionnaire with component officials and other cognizant officials for 4 of the initiatives. 
During the pre-tests, conducted by telephone, we asked the DOD officials to read the instructions and each question aloud and to tell us how they interpreted the question. We then discussed the instructions and questions with them to identify any problems and potential solutions by determining whether (1) the instructions and questions were clear and unambiguous, (2) the terms we used were accurate, (3) the questionnaire was unbiased, and (4) the questionnaire did not place an undue burden on the officials completing it. We noted any potential problems and modified the questionnaire based on feedback from the expert reviewers and the pre-tests, as appropriate. We sent an email to each selected program office beginning on June 16, 2016, notifying them of the topics of our survey and when we expected to send the survey. We then sent the self-administered questionnaire and a cover email to the asset visibility program officials on June 20, 2016, and asked them to fill in the questionnaire and email it back to us by July 6, 2016. We received 8 completed questionnaires, for an overall response rate of 100 percent. We also collected data—such as the number of RFID tags and inventory amounts for clothing and textiles—from a sample of initiatives. The practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, how the responses are processed and analyzed, or the types of people who do not respond can influence the accuracy of the survey results. We took steps, as described above, in the development of the survey, the data collection, and the data analysis to minimize these non-sampling errors and help ensure the accuracy of the answers that we obtained. 
Data were electronically extracted from the questionnaires into a comma-delimited file that was then imported into a statistical program for analysis. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error, and we addressed such issues as necessary. Our survey specialist conducted quantitative data analyses using statistical software, and our staff conducted a review of open-ended responses with subject matter expertise. A data analyst conducted an independent check of the statistical computer programs for accuracy. We conducted this performance audit from February 2016 to March 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides an overview of the non-generalizable sample of initiatives that we reviewed. These initiatives are intended to improve asset visibility as part of the Department of Defense’s (DOD) 2014 Strategy for Improving DOD Asset Visibility (2014 Strategy) and its subsequent update in October 2015 (2015 Strategy). The process by which we selected these initiatives for this review is described in appendix I. The initiatives are shown in table 5. In 1990, we began a program to report on government operations that we identified as “high risk,” and we added the Department of Defense’s (DOD) supply chain management area to our High Risk List. Our high-risk program has served to identify and help resolve serious weaknesses in areas that involve substantial resources and provide critical services to the public. 
Our experience with the high-risk series over the past two decades has shown that the key elements needed to make progress in high-risk areas are congressional action, high-level administrative initiatives, and agencies’ efforts grounded in the five criteria we established for removing an area from the high-risk list. These five criteria form a road map for efforts to improve and ultimately address high-risk issues. Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removing an area from the list. These criteria call for agencies to show the following:
1. Leadership Commitment—a strong commitment and top leadership support.
2. Capacity—the capacity (i.e., the people and other resources) to resolve the risk(s).
3. Corrective Action Plan—a plan that defines the root causes and solutions and provides for substantially completing corrective measures, including steps necessary to implement the solutions we recommended.
4. Monitoring—a program instituted to monitor and independently validate the effectiveness and sustainability of corrective measures.
5. Demonstrated Progress—the ability to demonstrate progress in implementing corrective measures and resolving the high-risk area.
We have reported on various aspects of DOD’s supply chain, including asset visibility, and noted that DOD has taken several actions to improve asset visibility. We also noted a number of recommendations, actions, and outcomes needed to improve asset visibility, as shown in table 6. Specifically, in an October 2014 management letter to a senior Office of the Secretary of Defense (OSD) official, we reported on 7 actions and outcomes across the 5 criteria that we believed DOD should take to address long-standing weaknesses in asset visibility. 
Most recently, in our January 2015 report and February 2015 High Risk update, we reported on progress that DOD has made in addressing weaknesses in its asset visibility, including developing its 2014 Strategy for Improving DOD Asset Visibility, and we made a number of recommendations.

In addition to the contact named above, Carleen C. Bennett, Assistant Director; Mary Jo LaCasse; Joanne Landesman; Amie Lesser; Felicia Lopez; Mike Silver; John E. Trubey; Angela Watson; and John Yee made key contributions to this report.

High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.
High-Risk Series: Key Actions to Make Progress Addressing High-Risk Issues. GAO-16-480R. Washington, D.C.: April 25, 2016.
Defense Logistics: DOD Has Addressed Most Reporting Requirements and Continues to Refine its Asset Visibility Strategy. GAO-16-88. Washington, D.C.: December 22, 2015.
High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015.
Defense Logistics: DOD Has a Strategy and Has Taken Steps to Improve its Asset Visibility, But Further Actions are Needed. GAO-15-148. Washington, D.C.: January 27, 2015.
Defense Logistics: A Completed Comprehensive Strategy is Needed to Guide DOD’s In-Transit Visibility Efforts. GAO-13-201. Washington, D.C.: February 28, 2013.
High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013.
Defense Logistics: Improvements Needed to Enhance DOD’s Management Approach and Implementation of Item Unique Identification Technology. GAO-12-482. Washington, D.C.: May 3, 2012.
Defense Logistics: DOD Needs to Take Additional Actions to Address Challenges in Supply Chain Management. GAO-11-569. Washington, D.C.: July 28, 2011.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.
DOD’s High-Risk Areas: Observations on DOD’s Progress and Challenges in Strategic Planning for Supply Chain Management. GAO-10-929T. 
Washington, D.C.: July 27, 2010.
GAO designated DOD's supply chain management as a high-risk area in 1990 and in February 2011 reported that limitations in asset visibility make it difficult to obtain timely and accurate information on assets that are present in a theater of operations. DOD defines asset visibility as the ability to provide timely and accurate information on the location, quantity, condition, movement, and status of items in its inventory. In 2015, GAO found that DOD had demonstrated leadership commitment and made considerable progress in addressing weaknesses in its supply chain management. This report addresses the extent to which DOD has (1) identified performance measures that allow it to monitor the progress of selected asset visibility initiatives identified in its Strategies; and (2) addressed the five criteria—leadership commitment, capacity, corrective action plan, monitoring, and demonstrated progress—for removing asset visibility from the High Risk List. GAO reviewed documents associated with selected initiatives, surveyed DOD officials, and observed demonstrations. The Department of Defense (DOD) has identified performance measures for the eight selected asset visibility initiatives GAO reviewed, but these performance measures generally cannot be used to monitor progress. Specifically, GAO found that the measures for the eight initiatives reviewed did not generally include key attributes of successful performance measures. For example, for six initiatives there were no baseline and trend data associated with the measures. While DOD's 2014 and 2015 Strategy for Improving DOD Asset Visibility (Strategies) called for performance measures to be identified for the initiatives, the Strategies lacked complete direction on how to develop performance measures that would allow DOD to assess the progress of the initiatives toward their intended outcomes. 
GAO also found that after-action reports for the initiatives did not always include key information needed to determine the success of the initiatives in achieving the goals described in the Strategies. Without improved performance measures and information to support that progress has been made, DOD may not be able to monitor and show progress in improving asset visibility. DOD has made progress and meets the criteria related to capacity and its corrective action plan but needs to take additional actions to monitor implementation and demonstrate progress to meet GAO's two remaining criteria for removal from the High Risk List, as shown in the figure. For the capacity criterion, in its draft update to the 2015 Strategy, DOD provides guidance on how to document cases where the funding for the initiatives is embedded within the overall program funding. Also, for the action plan criterion, DOD included matrixes in its 2015 Strategy to link ongoing initiatives to the Strategy's goals and objectives. DOD has also taken steps to monitor the status of initiatives. However, the performance measures for the selected initiatives that GAO reviewed generally cannot be used to track progress and are not consistently incorporated into reports to demonstrate results. Until these criteria are met, DOD will have limited ability to demonstrate sustained progress in improving asset visibility. GAO recommends that DOD use key attributes of successful performance measures in refining measures in updates to the Strategy and incorporate information related to performance measures into after-action reports for the asset visibility initiatives. DOD partially concurred with both recommendations. The actions DOD proposed are positive steps, but GAO believes the recommendations should be fully implemented, as discussed in the report.
NIH is a Public Health Service (PHS) agency within HHS. It consists of a director’s office and 14 staff offices that oversee the operations of 24 separate units. These units include 17 institutes, each focused on specific health or medical issues, such as cancer or aging; six research centers; and the National Library of Medicine. Each unit separately awards funds for the research it sponsors. NIH’s Office of Extramural Research is responsible for agencywide activities concerning oversight of Phase III clinical trials, such as developing policy on the review, funding, and management of extramural grants. NIH’s extramural research units (generally referred to in this report as “institutes”) used various methods to fund the 470 Phase III clinical trials they sponsored in fiscal year 1994. As figure 1 shows, the largest number of trials (180) were funded through cooperative agreements. Regardless of the method used to fund the trials, the institutions that are awarded the funds are referred to as “grantee institutions” or “grantees.” Most Phase III clinical trials involving multiple sites are funded through contracts and cooperative agreements. Trials funded through contracts are typically planned, initiated, and controlled by the sponsoring NIH institute. The institute details the trial’s objectives, protocols, and controls. Under cooperative agreements, however, the grantees and principal investigators have more flexibility in planning, managing, and conducting the trial. Although the sponsoring institute is expected to make substantial contributions to the trial, such as providing technical assistance, coordinating the trial’s activities, and helping to manage the trial, operational control of the trial rests with the grantee. The institutes and research centers at NIH along with the grantee institutions directly oversee and monitor Phase III clinical trials.
These entities are to ensure that controls are in place to prevent or detect the misuse of federal funds and the falsification of data in conducting extramural clinical research. According to NIH, these institutes and grantee institutions know the nature and objectives of the trials and are therefore in the best position to develop monitoring procedures to ensure safety and data integrity. At the sites we visited, controls that safeguard against fiscal misconduct are consistently applied among institutes and trials. Some controls that safeguard against scientific misconduct, however, are not always consistently used for various reasons, including the type of trial and the sponsoring institute’s management philosophy. Although each institute independently oversees the clinical trials it sponsors, the controls established to prevent and detect fiscal misconduct were consistent among the institutes in our review. The control procedures must conform with federal requirements and policies on the expenditure of federal funds. Grantee institutions are responsible for ensuring that their research scientists and other employees comply with all applicable federal rules, regulations, and policies on the use of federal funds. Independent auditors review grantee compliance annually in a required financial audit. Most grantee institutions receive federal funds from several federal agencies. The grantees must adhere to a uniform series of regulations laid out by the Office of Management and Budget (OMB). Chief among these policies are cost principles that grantees must adhere to as specified in OMB Circulars A-21, A-87, or A-122. These principles provide guidance on what expenses a grantee may incur and charge against an NIH grant award. Grantees must also follow a uniform set of administrative requirements in OMB Circulars A-102 or A-110, detailing how grant funds should be managed. 
Foremost among these requirements are standards for such areas as fiscal reporting, accounting records, internal controls, and cash management. Other administrative requirements cover procurement and property standards. Independent auditors annually audit grantees’ fiscal management of federal funds as required by OMB Circulars A-128 and A-133. It was such an audit that detected the embezzlement of more than $700,000 of NIH grant funds in the early 1990s. This case of fiscal misconduct by a manager of grants accounting occurred at the New York Medical College—the grantee. Because the grantee institution is responsible for ensuring that federal grant funds are properly used, the college was required to fully refund these funds to NIH. At the five grantee institutions we visited, we reviewed the annual financial audits. The audits included a review of internal controls established by the grantees to safeguard federal funds. In each case, the audits disclosed that grantees had complied with federal guidelines and no material weaknesses were detected in the internal controls. When grantee institutions fail to establish and maintain adequate internal controls and proper accounting procedures to safeguard federal funds, NIH can impose requirements that the grantee must comply with to continue receiving and managing grant awards. In 1995, NIH designated the University of Minnesota, a grantee, an “exceptional organization” because of its failed internal controls and poor accounting procedures. This designation gave NIH greater oversight of its funds than would be feasible under the administrative procedures normally associated with its grant programs. NIH increased the conditions and restrictions attached to the University’s grant award. It also required the University to develop and successfully implement a corrective action program to address the deficiencies before NIH would consider removing the exceptional organization designation.
Each institute assigns grants management officers to clinical trials to oversee the use of federal funds awarded to grantee institutions. One method used by the grants management officers to reduce the agency’s risk is to limit the amount of funds readily available to the grantees. For instance, because a cooperative agreement usually covers more than 1 year, the initial award specifies how much funding will be provided each year for the life of the agreement. However, funding is provided on a year-to-year basis only. The grantee must apply each year for a continuation award for additional funding even though the total grant amount is committed. Institutes release funds on the basis of satisfactory performance as detailed in the annual progress reports that principal investigators must submit. If a grantee’s progress is not satisfactory, a grants management officer may reserve all or some of the funding until grantee progress improves. Institutes awarding funds for clinical trial research issue award notices that include a section indicating whether any of the funding is restricted and what must be done to lift the restriction. If a grantee’s funds are restricted, the grants management officer might release the funds but restrict their use until the grantee has completed certain tasks. For example, in one trial we reviewed, the officer restricted administrative funds until the sites developed a contractual agreement indicating how they would work together. In addition to annual progress reports, the grantee must include a summary of annual expenditures in its Financial Status Report to the grants management officer. This allows the officer to compare the reported overall expenditure totals with the original budget and progress reports. If the officer finds any significant differences, the grantee is expected to explain them. 
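The expenditure review described above—comparing a grantee's reported totals with the original budget and asking the grantee to explain significant differences—can be sketched as follows. This is an illustrative sketch only: the function name, budget categories, and the 10 percent variance threshold are assumptions, since the text does not specify what a grants management officer treats as "significant."

```python
def flag_expenditure_variances(budget, reported, threshold=0.10):
    """Compare a grantee's reported expenditures against the approved
    budget, category by category, and return the differences a grants
    management officer would expect the grantee to explain.

    The 10 percent threshold is an illustrative assumption, not a
    documented NIH rule.
    """
    flags = {}
    for category, budgeted in budget.items():
        spent = reported.get(category, 0.0)
        # Flag any category whose reported spending deviates from the
        # budgeted amount by more than the chosen threshold.
        if budgeted and abs(spent - budgeted) / budgeted > threshold:
            flags[category] = spent - budgeted
    return flags
```

For example, a grantee that budgeted $20,000 for supplies but reported $26,000 would be flagged for a $6,000 overrun, while a 2 percent variance in personnel costs would pass unremarked.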
For Phase III clinical trials funded through contracts or cooperative agreements, according to NIH, when grantees do not spend funds as budgeted, grants management officers must approve all requests to rebudget funds as well as requests to carry over funds from one year to the next. Institutes’ oversight monitoring of clinical trials has some consistent safeguards against scientific misconduct and protections for the safety of trial participants. Each institute usually requires specific monitoring methods. For example, an NIH program officer is assigned by the sponsoring institute to monitor each Phase III trial. Program officers are research scientists with expertise in the area being studied. Each institute trains and develops its own program officers in monitoring and managing clinical trials. Therefore, program officers’ training can vary by institute. Also, their responsibilities often vary by the institute’s management philosophy, the type of trial, and the funding method—contracts or cooperative agreements. Typically, program officers, at a minimum, rely on basic oversight controls in monitoring clinical trials, including annual progress and budget reports and trial participants’ recruitment and retention statistics. Oversight boards also monitor trials. For example, most clinical trials that pose a potential hazard to human trial participants must be monitored by a Data and Safety Monitoring Board or an equivalent. This board, composed of scientists not connected with the trial, monitors a trial’s clinical data and progress. The board also focuses on reported adverse events—adverse changes in the health status of a human research subject in a clinical trial. In addition to a Data and Safety Monitoring Board, each grantee institution must establish an Institutional Review Board to approve and monitor all research involving human subjects. An important function of this board is to review and approve informed consent forms, making sure they have been signed.
All prospective research subjects must sign consent forms that explain the objectives, risks, and benefits of the proposed research before they can participate in a trial. In our review of clinical records at the five sites we visited, we did not find any cases in which a consent form had not been signed by a trial participant. However, according to a report on the NCI-sponsored breast cancer trial, only about 71 percent of trial participants gave written informed consent before surgery; consent forms were missing or not available or data were insufficient for 7 percent of the participants. Clinical trials have controls that safeguard against scientific misconduct, including direct data verification to ensure data integrity. Because institutes and grantees, however, have flexibility in deciding how these controls are used, the application of the controls often varies by institute and type of trial. One control designed to safeguard trials against scientific misconduct is the use of clinical monitors to review trial data. These monitors visit clinical trial sites to verify that the established protocols are being followed and that the data being reported match the data in the clinical records. In one trial sponsored by the National Eye Institute, clinical monitors found that clinical test results were being entered on data collection sheets and not in the patients’ medical records. Clinical research policy states that medical records are the acceptable source documents for clinical test results so monitors required that the site also enter reported data in the patients’ medical records. Because clinical monitors add both expense and time to a trial, institutes tend to use them only in the large and more complex trials. For example, clinical monitors are being used in NIA’s largest and most expensive ongoing Phase III clinical trial—alternative therapies for Alzheimer’s disease. This trial is being conducted at 35 research sites and costs $16.9 million in NIH funds. 
The NIA program officer for the Alzheimer’s trial estimated that using clinical monitors in this trial delayed data entry by 6 months. This delay is acceptable, however, because of the increased quality assurance that clinical monitors bring to the trial, according to the program officer. Another internal control procedure to protect data integrity is the use of data coordinating centers. Most Phase III clinical trials that have multiple research sites use data coordinating centers to process patient clinical data generated during a trial. These centers inspect the data for inconsistencies among the sites, irregularities, and fraud. In one NIA-sponsored trial, Continence Program for Women, the data coordinating center detected data inconsistencies between two clinical sites and alerted both the institute and the Data and Safety Monitoring Board. The inconsistency was caused by a different classification of patients by the two sites. However, the center’s detection of the data problem allowed the problem to be corrected. In an NHLBI-sponsored trial, the data coordinating center questioned test results from one laboratory. Further investigation by NIH’s Office of Research Integrity revealed that a lab technician had not conducted the tests as required and had reported false test results. NHLBI took corrective action to minimize the damage to the trial. The institute also recovered funds paid to the laboratory, and the technician was sanctioned. NHLBI officials believe that the independence of data coordinating centers is an essential part of internal controls. It is a way for the institute to create a direct link to a key data verification point and to help ensure prompt notification of potential scientific misconduct or other data irregularities. For this reason, NHLBI directly funds data coordinating centers and requires that the heads of the centers report directly to the institute’s program officer and the Data and Safety Monitoring Board. 
This approach places data coordinating centers beyond the direct control of a trial’s principal investigator. Other institutes that have not provided for data coordinating centers’ independence in trials have experienced problems with researchers’ influence over the centers. For example, for the three NIA trials included in our review, data coordinating centers were funded through a subcontract with research centers. This arrangement allowed a lead researcher, in a dispute with the center, to withhold the center’s operating funds. The institute’s program officer had to intervene to resolve the situation. During our review, NIA’s policy was to independently fund data coordinating centers for most of its multisite clinical trials. In the breast cancer trial, NCI permitted the trial’s principal investigator to oversee the operations of the data coordinating center. When the center detected suspect data, the principal investigator was notified. The investigator took about 3 months to establish that fraud had occurred and another 5 months before notifying NCI. The investigator’s failure to promptly notify NCI as required delayed corrective action and jeopardized the integrity of the trial. NCI had to spend time and resources to revalidate the trial’s initial results. NIH conducts limited centralized monitoring of Phase III clinical trials. No agencywide registry or database exists to track progress and performance of all clinical trials and provide NIH’s management with comprehensive reports for oversight and decisionmaking purposes. Although periodic meetings occur to discuss progress of ongoing trials, no data are systematically collected nor used to provide centralized oversight. Furthermore, NIH has not adopted its internal committee’s recommendations to develop agencywide guidance on quality assurance measures and data monitoring procedures for institutes to use in managing clinical trials. 
According to NIH, some of its institutes have selectively adopted some of the committee’s recommendations, but the agency believes adopting these policies agencywide is inappropriate because this erroneously assumes that all clinical trials should be monitored in the same manner. Nonetheless, NIH and HHS have implemented some agencywide measures in the past designed to discourage misconduct in federally funded research. Even though NIH’s Office of Extramural Research is responsible for centralized activities concerning oversight of extramural research, such as developing policy on the review, funding, and management of clinical trials, it has limited knowledge of and data on the Phase III clinical trials NIH funds and the performance of individual institutes and grantees. The office does review institutes’ initial requests for Phase III clinical trial research. Once a request is reviewed and ultimately approved, however, the awarding of the grants and most of a trial’s oversight and management are left to individual institutes. The Office of Extramural Research might learn of a trial’s progress from meetings of the Extramural Program Management Committee, whose members are staff from each institute. The committee meets regularly to discuss, among other issues, those related to Phase III clinical trials and to exchange ideas. However, unless an institute’s representative mentions a problem with a trial or raises concern about fiscal or scientific misconduct, the committee or the Office of Extramural Research would not likely know about it. NIH has not yet implemented a centralized database or a central trial registry to improve its oversight of the clinical trials it funds. An automated database of all clinical trials could track progress and performance and generate reports that would increase management’s knowledge about the trials and improve its ability to oversee them. 
Because no active central trial registry exists, NIH would have to survey each institute just to determine the total number of Phase III clinical trials it funds. The NIH Revitalization Act of 1993 required NIH to develop a registry of clinical trials involving women’s health issues. NIH, however, decided not to limit this registry to trials involving women’s health but to include other trials. NIH’s Office of Extramural Research is developing the Streamlined Non-Competing Award Process (SNAP) database as a pilot experiment. According to NIH, this database will allow it to interact with the grantee institutions and monitor research progress. NIH expects that when SNAP is expanded to include all clinical trials, NIH staff will be able to monitor trial progress in areas such as recruitment. Because the NIH institutes and grantees have more flexibility in deciding how to manage clinical trials funded through cooperative agreements, the scientific controls used in such trials vary. Aware of this variability, NIH’s Office of Extramural Research established the NIH Working Committee on Clinical Trial Monitoring in June 1994 to determine how its institutes manage clinical trials. The committee members were representatives from the institutes and research centers and were selected for their expertise in various aspects of clinical research. The committee’s task was to specifically review how the institutes manage the Phase III clinical trials they sponsor. On the basis of its review, the committee decided in 1995 that attempting to develop standards to dictate how these trials are managed is inadvisable given the unique characteristics of each Phase III clinical trial and the diverse nature of the institutes. The committee did recommend, however, that NIH consider formulating guiding principles for all the institutes to follow in managing the trials. 
The principles the committee recommended covered such areas as quality assurance and site monitoring, patient confidentiality, level of NIH staff involvement, and data access. Specifically, for example, in the area of quality assurance and site monitoring, the committee recommended that the institutes, at a minimum, conduct regular on-site monitoring of all clinical centers and monitor key outcome data. It also recommended that trials involving multiple clinical centers, large study populations, or potentially harmful interventions have the substantial involvement of and close oversight by the sponsoring institute. NIH has decided not to adopt any of the committee’s recommendations agencywide. According to NIH, adopting agencywide policies such as those the committee recommended is inappropriate because it assumes that all clinical trials should be monitored in the same way. At one of the data coordinating centers we visited, officials expressed frustration because standards and procedures for data collection differ by institute as well as among program officers at the same institute. They believe that minimal data collection standards and procedures should be established for all trials. Prompted by legislation and on their own initiative, HHS and NIH have taken steps to discourage misconduct in federally sponsored research, including clinical trials. These efforts have focused mainly on establishing proper scientific conduct and conflict-of-interest reporting requirements for grantee institutions. In response to the Health Research Extension Act of 1985, HHS required each grantee institution to develop a formal process delineating the steps to be taken to resolve allegations of scientific misconduct. In addition, institutions are required to diligently try to protect the positions and reputations of whistleblowers. ORI monitors compliance with this requirement. 
As required by the NIH Revitalization Act of 1993, HHS recently took action to ensure that the design, conduct, or reporting of PHS-funded research is not affected by researchers’ outside financial interests. This applies also to all NIH-sponsored research. Specifically, HHS issued a regulation effective October 1, 1995, requiring that each grantee institution develop a conflict-of-interest policy applicable to all staff benefitting from PHS funding. To comply with this regulation, researchers must file annual financial disclosure forms that allow the institution to determine if a conflict of interest exists. All applications for PHS funding must contain a certification by the institution that no conflict of interest exists. Each of the five grantee institutions we visited had developed and implemented conflict-of-interest policies. Because of the newness of the policies, however, officials said it would take time to see how these policies operated in practice and how effective the policies would be. A large percentage of NIH-sponsored Phase III clinical trials are funded through cooperative agreements so both the institutes and grantees are involved in managing the trials and developing procedures for conducting them, according to NIH. The trials have controls designed to safeguard against fiscal and scientific misconduct that the institutes, grantee institutions, and research sites can apply in overseeing the trials. However, no practical level of oversight and controls can completely eliminate the potential for misconduct. Most oversight of these trials is decentralized and performed independently by each of the different institutes that sponsor clinical research and by the grantee institutions. Because of the large number of diverse Phase III clinical trials NIH funds and the independent nature of its institutes, NIH charged a working committee with determining how such trials are managed. 
The committee recommended that NIH develop some agencywide guidance for all institutes to follow in managing these trials. The guidance was recommended for areas such as quality assurance, site monitoring, and the level of NIH staff involvement. Although some institutes have implemented some of the principles, NIH believes adopting them agencywide is inappropriate. In the past, NIH has done little centralized oversight and monitoring of the trials it funds and the institutes that sponsor them, except for tracking women’s and minorities’ participation in clinical trials. NIH is, however, developing a database that it expects will allow for monitoring elements of clinical trials’ progress and performance. In commenting on a draft of this report, NIH agreed in general with our conclusions and noted that the report provides a balanced discussion of the relevant issues. (See app. II.) NIH also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of HHS, the Director of NIH, and other interested parties. We also will make copies available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-7119. Other major contributors to this report are listed in appendix III. To determine how NIH provides oversight to protect Phase III clinical trials from fiscal and scientific misconduct, we conducted audit work at NIH; National Institute on Aging (NIA); and National Heart, Lung, and Blood Institute (NHLBI). We selected NIA because it is among the institutes that provide the smallest amount of funding for Phase III clinical trials and NHLBI because it is among the institutes that provide the largest amount of funding. In fiscal year 1995, NIA sponsored 7 clinical trials costing about $9 million, and NHLBI sponsored 42 trials costing about $73 million. 
Selecting these institutes for review provided some perspective on whether oversight might be influenced by the size of an institute’s clinical trial portfolio. Also, these two institutes offered a variety of trials from which to select for review. We limited the scope of our review to Phase III clinical trials funded through cooperative agreements. Under cooperative agreements, grantee institutions have more flexibility in planning, conducting, and managing the trials than under contracts, the other major funding method for Phase III clinical trials. NIH institutes that sponsor the trials are expected to provide assistance to and oversight of the trials. Our review included a nonstatistical sample of four multisite clinical trials that varied in nature, size, complexity, and number of sites (see table I.1). We visited five of the clinical research sites that participated in the trials and two data coordinating centers that processed and monitored the clinical data. The clinical sites we visited were either state or private institutions located in Virginia, Connecticut, and Massachusetts. The data coordinating centers we visited differed in how they were funded. To determine the oversight roles played by NIH, the institutes, and the institutions receiving research funds, we conducted interviews, reviewed NIH rules and regulations, examined NIH studies and reports, and reviewed grant documents on the chosen Phase III clinical trials. We interviewed agency officials from NIH, NIA, and NHLBI. Within NIH, we interviewed officials from the Office of Extramural Research and the Office of Research on Women’s Health. At NIA and NHLBI, we interviewed senior officials, grants management personnel, and program management officers.
We also met with staff from HHS’ Office of Research Integrity to discuss their role in investigating allegations of misconduct and the Office of the Inspector General, which was investigating allegations of scientific misconduct. We also met with the principal research investigators, key research personnel, grants and fiscal management officials, and internal audit staff at the research sites to get their views on oversight responsibilities and controls that protect trials against misconduct. We reviewed grantee institutions’ policies and procedures for preventing, detecting, and resolving scientific misconduct, conflicts of interest, and fiscal mismanagement. Also, we examined research documentation, clinical records, correspondence, and external audits of the institutions. To determine what controls exist at the central data processing point to help ensure clinical data integrity, we visited two data coordinating centers. One of the centers was funded independently of the clinical sites; the other’s funding was included in the research center’s grant award. At the coordinating centers, we observed their operation, reviewed their policies and procedures, and interviewed key personnel about the centers’ data collection and analysis role and responsibilities. We examined reports generated by the centers and observed the procedures they use to ensure consistency of each clinical site’s data collection and recording methodology. We also established how research data are analyzed to detect data problems and reviewed the follow-up procedures the centers use when potential problem data are discovered. Our work was performed between September 1995 and May 1996 in accordance with generally accepted government auditing standards. James O. McClyde, Assistant Director, (202) 512-7152; Frank F. Putallaz, Evaluator-in-Charge, (617) 565-7527; Thomas S. Taydus, Senior Evaluator.
Pursuant to a congressional request, GAO reviewed the National Institutes of Health's (NIH) oversight of the clinical trials it sponsors, focusing on NIH internal controls to prevent misuse of federal funds and safeguard the integrity of clinical trial data. GAO found that: (1) individual NIH institutes and grantee institutions oversee and monitor NIH-funded Phase III clinical trials; (2) internal controls to guard against fiscal misconduct in extramural research must comply with regulations and policies on federal funds expenditures and be consistently applied; (3) independent auditors review grantees' compliance with these internal control regulations during annual audits; (4) NIH imposes sanctions on offending grantees when scientific misconduct occurs; (5) internal controls to guard against scientific misconduct and ensure participants' safety are generally consistent, but they vary slightly among trials due to differences in the sponsoring institute's management philosophy and past experience, the trial's size, nature, and complexity, and the way the trial is funded; (6) NIH institutes and grantees use clinical monitors and data coordinating centers to ensure data integrity, but these controls are not consistently applied; (7) one institute provides direct funding for certain data verification functions which ensures prompt reporting of data concerns and potential misconduct to NIH, but this is not an agencywide policy for multisite trials; (8) central NIH oversight and monitoring of clinical trials is limited; and (9) NIH has not adopted on an agencywide basis, any of its study committee's recommendations to improve clinical trial oversight because it believes agencywide monitoring policies are inappropriate.
For fiscal year 2015, VA estimated it received $59.2 billion in appropriations, including collections, to fund health care services for veterans, manage and administer VA’s health care system, and operate and maintain the VA health care system’s capital infrastructure. VA estimated that in fiscal year 2015 it provided health care services—including inpatient services, outpatient services, and prescription drugs—to 6.7 million eligible patients. For calendar year 2015, the Medicare Trustees estimated that CMS paid MA plans about $155 billion to provide coverage for 16.4 million Medicare beneficiaries. Beneficiaries of MA can enroll in one of several different plan types, including health maintenance organizations (HMO), private fee-for-service (PFFS) plans, preferred provider organizations (PPO), and regional PPOs. Medicare pays MA plans a capitated PMPM amount. This amount is based in part on a plan’s bid, which is its projection of the revenue it requires to provide a beneficiary with services that are covered under Medicare FFS, and a benchmark, which CMS generally calculates from average per capita Medicare FFS spending in the plan’s service area and other factors. If a plan’s bid is higher than the benchmark, Medicare pays the plan the amount of the benchmark, and the plan must charge beneficiaries a premium to collect the amount by which the bid exceeds the benchmark. If the plan’s bid is lower than the benchmark, Medicare pays the plan the amount of the bid and makes an additional payment to the plan called a rebate. Plans may use this rebate to fund benefits not covered under Medicare FFS. CMS uses risk scores to adjust PMPM payments to MA plans to account for beneficiaries’ health status and other factors, a process known as risk adjustment.
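The bid-versus-benchmark comparison and risk adjustment described above can be sketched as follows. This is a simplified illustration: actual CMS rules pay only a portion of the bid-benchmark difference as a rebate (the share depends on factors such as plan quality ratings), whereas here the full difference is returned for clarity, and the risk-score multiplication omits the other adjustment factors the text mentions.

```python
def ma_base_payment(bid, benchmark):
    """Return (Medicare payment to plan, rebate, beneficiary premium).

    Simplified sketch: in practice only a percentage of the
    bid-benchmark difference is paid as a rebate, not the full amount.
    """
    if bid > benchmark:
        # Medicare pays the benchmark; the plan charges beneficiaries a
        # premium for the amount by which the bid exceeds the benchmark.
        return benchmark, 0.0, bid - benchmark
    # Medicare pays the bid plus a rebate the plan may use to fund
    # benefits not covered under Medicare FFS.
    return bid, benchmark - bid, 0.0


def risk_adjusted_pmpm(base_rate, risk_score):
    """Adjust a per-member-per-month rate for a beneficiary's expected cost.

    A risk score of 1.0 represents average expected health spending;
    higher scores raise the payment, lower scores reduce it.
    """
    return base_rate * risk_score
```

For example, a plan bidding $700 against a $750 benchmark would be paid its $700 bid plus a $50 rebate (under this simplified sketch), and an enrollee with a risk score of 1.2 would generate a PMPM payment 20 percent above the base rate.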
For beneficiaries enrolled in MA, risk scores are generally determined on the basis of diagnosis codes submitted for each beneficiary, among other factors, and are adjusted annually to account for changes in diagnoses from the previous calendar year. In addition, risk scores for beneficiaries who experience long-term stays of more than 90 days are calculated differently to account for the differences in expected health expenditures. While risk scores are based on diagnoses from the previous year, changes to the risk score to account for long-term hospital stays of more than 90 days are reflected in the calendar year when the stay occurred. The Patient Protection and Affordable Care Act (PPACA) changed how benchmarks are calculated so that they will be more closely aligned with Medicare FFS spending. Specifically, the benchmark changes, which are to be phased in from 2012 through 2017, will result in benchmarks tied to a percentage of per capita Medicare FFS spending in each county. In general, for those counties in the highest Medicare FFS spending quartile, benchmarks will be equal to 95 percent of county per capita Medicare FFS spending, and for those counties in the lowest Medicare FFS spending quartile, benchmarks will be equal to 115 percent of per capita Medicare FFS spending. Prior to 2012, benchmarks in all counties were at least as high as per capita Medicare FFS spending, but were often much higher. For example, while counties generally had benchmarks that were derived from per capita county Medicare FFS spending, the benchmarks were generally increased annually by a minimum update equal to the national growth rate percentage in Medicare FFS spending. In cases where the growth rate used to update the benchmark was greater than the rate at which per capita Medicare FFS spending grew within a county, it would result in a benchmark that was higher than the average per capita county Medicare FFS spending rate. 
In addition, some urban and rural counties had benchmarks that were “floor” rates, which were set above per capita county Medicare FFS spending rates to encourage insurers to offer plans in the areas. According to a CMS study reported in the 2010 MA Advance Notice, approximately 96 percent of counties had benchmarks that were set based on a minimum update or were floor rates. Especially in counties with a relatively high proportion of veterans, average per capita Medicare FFS spending may be low if many veterans receive health care services from VA instead of Medicare providers. Because benchmarks are calculated based in part on Medicare FFS spending, MA payments may be lower in such counties and may not reflect Medicare’s expected cost of caring for nonveterans. CMS is required to estimate, on a per capita basis, the amount of additional Medicare FFS payments that would have been made in a county if Medicare-eligible veterans had not received services from VA. If needed, CMS is also required to make a corresponding MA payment adjustment. To address these requirements, CMS reported the results of its study analyzing the cost impact of removing veterans eligible to receive services from VA on 2009 Medicare FFS county rates in the 2010 MA Advance Notice. CMS reported that, on average, removing veterans from the calculation of counties’ per capita Medicare FFS spending rate had minimal impact on per capita spending and that the differences in expenditures between all Medicare beneficiaries and nonveterans were more attributable to normal, random variation than to distinctly different spending for the two populations. Based on CMS’s study results, the agency concluded that no adjustment for VA spending on Medicare- covered services was necessary to 2010 through 2016 MA payments. In 2016, CMS updated its 2009 study using more recent data and determined that an adjustment would be necessary for 2017. 
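The PPACA benchmark tie-in described earlier (95 percent of per capita Medicare FFS spending for counties in the highest spending quartile, 115 percent for the lowest) can be expressed as a small function. Only the two quartiles the report names are implemented; the intermediate quartiles use percentages not given here.

```python
def ppaca_benchmark(county_ffs_per_capita: float, spending_quartile: int) -> float:
    """Fully phased-in PPACA benchmark for the two quartiles named in
    the report (4 = highest FFS-spending quartile, 1 = lowest)."""
    if spending_quartile == 4:
        return 0.95 * county_ffs_per_capita   # high-spending counties
    if spending_quartile == 1:
        return 1.15 * county_ffs_per_capita   # low-spending counties
    raise NotImplementedError(
        "percentages for the middle quartiles are not given in the report")
```

For example, a highest-quartile county with $10,000 in per capita FFS spending would have a $9,500 benchmark once the rules are fully phased in, which illustrates why depressed county FFS spending (such as where VA absorbs much of veterans' care) translates directly into lower MA payments.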
VA provided about $2.4 billion in Medicare-covered inpatient and outpatient services to the 833,684 MA-enrolled veterans in fiscal year 2010. In total, VA provided approximately 61,000 inpatient services and 8.2 million outpatient services to veterans enrolled in MA plans. During that same time period, CMS paid MA plans $8.3 billion to provide all Medicare-covered services to veterans enrolled in an MA plan. VA’s provision of services to MA-enrolled veterans resulted in overall payments to MA plans that were likely lower than they otherwise would have been if veterans had obtained all of their Medicare-covered services through Medicare FFS providers and MA plans. Specifically, because VA provides services to MA-enrolled veterans, the three components that determine payments to MA plans—benchmarks, bids, and risk scores— are likely lower than they otherwise would be, which results in lower overall payments to MA plans. Benchmarks—Because benchmarks are generally calculated in part from per capita county Medicare FFS spending rates, any VA spending on Medicare-covered services for veterans enrolled in Medicare FFS would be excluded from the benchmark calculation. As a result, the benchmark would be lower and, in turn, payments to MA plans would also be lower. This would be particularly true following the implementation of the PPACA revisions to the benchmark calculation—to be phased in from 2012 through 2017—as the PPACA revisions further strengthened the link between the benchmark and average per capita county Medicare FFS spending rates. Bids—MA payments also may be lower to the extent that MA plans set bids based on historical experience. MA plan bids may reflect the fact that in previous years enrolled veterans received some Medicare- covered services from VA instead of the MA plan. If so, MA plan bids would be lower and, in turn, MA payments would also be lower. 
Risk scores—VA’s provision of Medicare-covered services may result in lower risk scores because, like benchmarks, they are calibrated based on Medicare FFS spending for beneficiaries with specific diagnoses identified by Medicare. As a result, any VA spending on Medicare-covered services for veterans enrolled in Medicare FFS that is related to these diagnoses would be excluded when the model is calibrated. In addition, MA plans would generally not have access to diagnoses made by VA. Therefore, when VA identifies and treats a diagnosis not identified by the veteran’s MA plan, it would not be incorporated into the veteran’s risk score. Because PMPM payments to MA plans are risk-adjusted, a lower risk score would result in lower payments to MA plans. Although VA spending on Medicare-covered services likely results in lower CMS payments to MA plans, the extent to which these payments reflect the expected utilization of services by the MA population remains uncertain. Specifically, payment amounts may still be too high or could even be too low, depending on the utilization of VA services by veterans enrolled in MA plans and veterans enrolled in Medicare FFS. As noted earlier, both benchmarks and risk scores are generally calibrated based on veterans and nonveterans enrolled in Medicare FFS. However, veterans enrolled in MA plans may differ in the proportion of services they receive from VA compared to veterans enrolled in Medicare FFS, which would affect the appropriateness of payments to MA plans. For example, payments to MA plans may be too high if veterans enrolled in MA receive a greater proportion of their services from VA relative to veterans enrolled in Medicare FFS. Under this scenario, the benchmark would reflect the higher use of Medicare services by Medicare FFS beneficiaries who are receiving fewer of their services from VA than are veterans enrolled in MA. As a result, the benchmark may be too high and, in turn, payments to MA plans may be too high. 
This effect of a higher benchmark may be at least partially offset by a risk score that is too high. In contrast, payments to MA plans may be too low if veterans enrolled in MA receive a lesser proportion of their services from VA relative to veterans enrolled in Medicare FFS. Under this scenario, the benchmarks may be too low and may result in MA plans being underpaid, although the effect may be partially offset by risk scores that are too low. To assess whether there are service utilization differences between the MA and Medicare FFS veteran populations that result in payments to MA plans that are too high or too low, data on the services veterans receive from Medicare FFS, MA, and VA would be needed. Data on veterans’ use of services through Medicare FFS and VA health care are available from CMS and VA, respectively. However, CMS does not currently have validated data that could be used to determine veterans’ use of services through MA. CMS began collecting data from MA plans on diagnoses and services provided to beneficiaries starting in January 2012. We reported in July 2014 that CMS had taken some, but not all, appropriate actions to ensure that these data—known as MA encounter data—are complete and accurate. At that time, we recommended that CMS complete all the steps necessary to validate the data, including performing statistical analyses, reviewing medical records, and providing MA organizations with summary reports on CMS’s findings. CMS agreed with the recommendation, but as of August 2015, had not completed all steps needed to validate the encounter data. CMS determined that no adjustment to 2010 through 2016 MA payments was needed to account for the provision of Medicare-covered services by VA, but used a methodology that had certain shortcomings that could have affected MA payments. 
CMS is required to estimate, on a per capita basis, the amount of additional payments that would have been made in a county if Medicare-eligible veterans had not received services from VA and, if needed, to make a corresponding adjustment to MA payments. If CMS determined that an MA payment adjustment was necessary, it would make the adjustment by using a modified version of per capita county Medicare FFS spending rates that are adjusted to account for the effect of VA spending on Medicare-covered services. Per capita county Medicare FFS spending rates serve as the basis of the benchmarks used in determining MA payment rates. To determine whether an adjustment was needed, CMS obtained data from VA showing veterans who are enrolled in VA health care and Medicare FFS (that is, enrollment data). CMS then estimated the effect of VA spending on Medicare FFS spending by calculating average per capita county Medicare FFS spending for nonveterans and comparing it to the average per capita county Medicare FFS spending for all Medicare FFS beneficiaries, after adjusting for beneficiaries’ risk. However, CMS’s methodology did not account for two factors that could have important effects on the results: (1) services provided by and diagnoses made by VA but not identified by Medicare and (2) changes to the benchmark calculation under PPACA. First, because CMS used only Medicare FFS utilization and diagnosis data in its study, the agency’s methodology did not account for services provided by and diagnoses made by VA—which could result in inaccurate estimates of how VA spending on services for Medicare FFS-enrolled veterans affects per capita county Medicare FFS spending. Only VA’s utilization and diagnosis data can account for services provided by and diagnoses made by VA. Without this information, CMS’s estimate of how VA spending affects per capita county Medicare FFS spending rates may be inaccurate. 
Specifically, estimates of per capita county Medicare FFS spending for all beneficiaries, including veterans, may be too low because services provided by VA would not be accounted for in Medicare FFS spending. Excluding those services could have the effect of deflating veterans’ risk-adjusted Medicare FFS spending and therefore total per capita county Medicare FFS spending. Conversely, estimates of per capita county Medicare FFS spending for all beneficiaries, including veterans, may be too high because excluding diagnoses identified only by VA could result in Medicare risk scores that are too low, which would have the effect of inflating veterans’ risk-adjusted Medicare FFS spending and therefore total per capita county Medicare FFS spending. Thus, depending on the number and mix of services provided by and the diagnoses made by VA, risk-adjusted Medicare FFS spending for veterans may either be higher or lower than it would be if CMS accounted for VA-provided services and diagnoses. Second, because CMS’s study was done in 2009, it did not account for changes to the benchmark calculation that occurred under PPACA and that are to be phased in from 2012 through 2017. CMS noted in 2009 that only 45 of the 3,127 counties nationwide would have had per capita county Medicare FFS spending rate increases after accounting for VA spending. According to CMS, the number of affected counties was as low as it was in part because many counties had payment rate minimums, which often resulted in benchmarks that were higher than per capita county Medicare FFS spending. However, as noted earlier in this report, PPACA revised the benchmark calculation to more closely align benchmarks with average per capita county Medicare FFS spending rates. As these revised benchmark calculations are implemented, counties will no longer have benchmarks set based on minimum updates or floor rates. 
Because CMS did not update its 2009 study when determining whether an adjustment was necessary through 2016, the agency lacked accurate information on the number of additional counties in which VA spending on Medicare-covered services would have made a difference in per capita county Medicare FFS spending rates. When CMS updated its 2009 study to determine whether an MA payment adjustment was needed for 2017, it used the same methodology, albeit with more recent data. Doing so allowed CMS to account for the revised benchmark calculations implemented under PPACA. However, CMS cannot address the other limitation we identified without additional data. Specifically, CMS cannot account for services provided by and diagnoses made by VA. Officials said that they did not intend to incorporate VA utilization and diagnosis data into their analysis because they did not currently have such data and that incorporating these data would introduce additional uncertainty into the analysis. For example, CMS officials noted that there would be challenges associated with estimating how much Medicare would have spent if the covered services had been obtained from Medicare providers instead of VA. We agree that CMS would face challenges incorporating VA data into its analysis, but if an adjustment is needed and not made or if the adjustment made is too low, the PMPM payment may be too high for veterans and too low for nonveterans. Depending on the mix of veterans and nonveterans enrolled by individual MA plans, this could result in some plans being paid too much and others too little. Both CMS and VA officials told us that the agencies have a data use agreement in place that allows them to share some data, but this does not include data on services VA provides to Medicare beneficiaries. According to VA, as of December 2015, CMS has not requested its utilization and diagnosis data.
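The comparison at the core of CMS's study, risk-adjusted per capita FFS spending for nonveterans versus all FFS beneficiaries in a county, can be sketched as below. Dividing each beneficiary's spending by their risk score is a simplified stand-in for CMS's actual risk-adjustment method, and the data are hypothetical.

```python
def risk_adjusted_per_capita(spending, risk_scores):
    """Average spending after standardizing each beneficiary by their
    risk score (a simplified stand-in for CMS's method)."""
    adjusted = [s / r for s, r in zip(spending, risk_scores)]
    return sum(adjusted) / len(adjusted)

# Toy county: two nonveterans and one veteran whose VA-provided care
# never appears in FFS claims, deflating the all-beneficiary average.
nonveterans = risk_adjusted_per_capita([9000.0, 11000.0], [0.9, 1.1])
all_ffs = risk_adjusted_per_capita([9000.0, 11000.0, 4000.0], [0.9, 1.1, 1.0])
gap = nonveterans - all_ffs   # the difference CMS examined
```

The report's point is that this calculation is blind to VA-provided services and VA-identified diagnoses, so both the spending values and the risk scores for veterans may be understated in ways that only VA's own data could correct.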
Federal standards for internal control call for management to have the operational data it needs to meet agency goals to effectively and efficiently use resources and to help ensure compliance with laws and regulations. In this case, without VA data on diagnoses and utilization, CMS may be increasing the risk that it is not effectively meeting the requirement to adjust payments to MA plans, as appropriate, to account for VA spending on services for Medicare beneficiaries. If CMS revises its study methodology and determines that an adjustment to the benchmark to account for VA spending is needed, it may need to make additional MA payment adjustments to ensure that payments are equitable for individual MA plans. A benchmark adjustment would increase payments for nonveterans and would address the possibility that payments to MA plans with a high proportion of nonveterans would be too low. However, if CMS makes a benchmark adjustment, it would also increase MA payments for veterans. While the resulting higher payment to MA plans for nonveterans may be appropriate, higher payments for veterans may not be because veterans may be receiving some services from VA. In that case, payments to MA plans that enroll veterans would be too high, with the degree of overpayment increasing as the proportion of veterans enrolled by plans increases. To ensure that payments to MA plans are equitable regardless of differences in the demographic characteristics of the plans’ enrollees, CMS is authorized to adjust payments to MA plans based on such risk factors that it determines to be appropriate. Therefore, if CMS determines that an adjustment to the benchmark to account for VA spending is needed and the adjustment results in payments to MA plans that are too high for veterans, additional adjustments to payments to MA plans could be necessary. 
Given that veterans enrolled in an MA plan and the VA health care system can receive Medicare-covered services from either source, it is important to consider how the provision of services by VA affects payments to MA plans. In fiscal year 2010, VA provided $2.4 billion worth of inpatient and outpatient services to MA-enrolled veterans, which likely resulted in lower overall payments to MA plans. However, the appropriateness of these lower payments is uncertain, given potential differences in the proportion of services veterans enrolled in MA plans and Medicare FFS receive from VA. An estimate of the differences between the two populations of veterans would enable CMS to determine if additional actions are needed to improve the accuracy of PMPM payments. To this end, we recommended in July 2014 that CMS should validate the MA encounter data, which would be needed to determine if there are differences in utilization of services between veterans in MA and Medicare FFS. In addition, it is important to ensure that VA spending on Medicare- covered services does not result in inequitable payments to individual MA plans for veterans and nonveterans. While CMS is required to adjust MA payments to account for VA spending on Medicare-covered services, as appropriate, the agency determined that no adjustment to the benchmark, which is based in part on per capita county Medicare FFS spending, was necessary for years 2010 through 2016. CMS updated the study it used to make this determination in 2016 and determined that an adjustment was necessary for 2017. However, both CMS’s 2009 study and its 2016 study were limited because the agency did not have VA utilization and diagnoses data. Adjusting the study’s methodology to incorporate these data could change the study’s findings and result in CMS making a larger adjustment to the benchmark in future years. Such a benchmark adjustment could improve the accuracy of payments for nonveterans. 
However, a benchmark adjustment could also result in or exacerbate payments to MA plans that are too high for veterans, so additional MA payment adjustments could become necessary. We recommend that the Secretary of Health and Human Services direct the Administrator of CMS to take the following two actions: Assess the feasibility of updating the agency’s study on the effect of VA-provided Medicare-covered services on per capita county Medicare FFS spending rates by obtaining VA utilization and diagnosis data for veterans enrolled in Medicare FFS under its existing data use agreement or by other means as necessary. If CMS makes an adjustment to the benchmark to account for VA spending on Medicare-covered services, the agency should assess whether an additional adjustment to MA payments is needed to ensure that payments to MA plans are equitable for veterans and nonveterans. We provided a draft of this product to VA and the Department of Health and Human Services (HHS). HHS provided written comments on the draft, which are reprinted in appendix II. Both VA and HHS provided technical comments, which we incorporated as appropriate. In its comments, HHS concurred with one of our two recommendations. HHS agreed with our recommendation that if CMS makes an adjustment to the benchmark to account for VA spending on Medicare-covered services, it should assess whether an additional adjustment to MA payments is needed to ensure that payments to MA plans are equitable for veterans and nonveterans. HHS acknowledged that CMS is required to estimate, on an annual basis, the amount of additional Medicare FFS payments that would have been made in a county if Medicare-eligible veterans had not received services from VA and, if necessary, to make a corresponding MA payment adjustment. In the 2017 MA Advance Notice, CMS provided the results of its updated analysis, which used the same methodology as its 2010 analysis, but with more recent data. 
Based on its findings, CMS plans to make an adjustment to 2017 MA payment rates to account for VA spending on Medicare-covered services. In its comments, HHS stated that CMS will assess whether an additional adjustment to MA plan payments is needed to ensure that payments to MA plans are equitable for veterans and nonveterans. We encourage CMS to complete its assessment prior to finalizing its 2017 payments to ensure that payments to MA plans will be equitable when the adjustment to account for VA spending on Medicare-covered services is made. HHS did not concur with our recommendation that CMS should assess the feasibility of updating the agency’s study on the effect of VA-provided Medicare-covered services on per capita county Medicare FFS spending rates by obtaining VA utilization and diagnosis data for veterans enrolled in Medicare FFS. HHS stated that CMS uses Medicare FFS spending rates when setting the benchmark, which excludes services provided by VA facilities. In addition, HHS stated that incorporating VA utilization and diagnosis data into CMS’s analysis may not materially improve the analysis and the resulting adjustment. HHS indicated that it will continue to review the need for incorporating additional data or for methodology changes in the future. As we note in the report, only VA’s utilization and diagnosis data can account for services provided by and diagnoses made by VA. Depending on the number and mix of services provided by and the diagnoses made by VA, risk-adjusted Medicare FFS spending for veterans may either be higher or lower than it would be if CMS accounted for VA-provided services and diagnoses. Therefore, relying exclusively on Medicare FFS spending to estimate the effect of VA spending on Medicare FFS-enrolled veterans could result in an inaccurate estimate of how VA spending on services for Medicare FFS-enrolled veterans affects per capita county Medicare FFS spending. 
While there may be challenges associated with incorporating VA utilization and diagnosis data into CMS’s analysis, we maintain that CMS should work to do so given the implications that not incorporating the data may have on the accuracy of payment to MA plans. We continue to believe that an important first step would be for CMS to assess the feasibility of incorporating VA utilization and diagnosis data in a way that can overcome the challenges identified by CMS and potentially lead to a more accurate adjustment. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This appendix describes the scope and methodology used to (1) estimate the amount that the Department of Veterans Affairs (VA) spends to provide Medicare-covered services to veterans enrolled in Medicare Advantage (MA) plans and how VA spending on these services affects Centers for Medicare & Medicaid Services (CMS) payments to MA plans; and (2) evaluate the extent to which CMS has the data it needs to determine an appropriate adjustment, if any, to MA payments to account for VA’s provision of Medicare-covered services to MA-enrolled veterans. 
To estimate the amount that VA spends to provide Medicare-covered services to veterans enrolled in MA plans, we first identified veterans with at least 1 month of overlapping enrollment in an MA plan and in VA health care in fiscal year 2010. VA provided us with an enrollment file that included veterans enrolled in VA health care for at least 1 month in fiscal year 2010 and whom VA had identified as having at least 1 month of Medicare private plan enrollment. To determine months of MA enrollment in fiscal year 2010, we matched the VA enrollment file to Medicare’s calendar year 2009 and 2010 Denominator Files based on whether beneficiaries had the same Social Security number and either the same date of birth, sex, or both. We excluded those beneficiaries who did not have at least 1 month of overlapping MA and VA health care enrollment. In addition, we excluded veterans in the VA enrollment file that did not have a VA enrollment start date, were listed as having died prior to fiscal year 2010, or were not enrolled in one of the four most common MA plan types. After all exclusions, we identified 833,684 veterans with at least 1 month of overlapping enrollment in an MA plan and VA health care in fiscal year 2010. We identified all inpatient and outpatient services provided by VA to those veterans in our population during fiscal year 2010. VA can provide inpatient and outpatient services directly at one of its medical facilities or it can contract for care, known as VA care in the community; we received inpatient and outpatient utilization files for both types of VA-provided care. We excluded prescription drug services from our analysis, as payments to MA plans for coverage of Part D services are determined differently than are payments for other Medicare-covered services. We also excluded services that were received during a month when the veteran was not enrolled in both VA health care and an MA plan. 
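The file match described above can be sketched as follows. Field names are hypothetical; the rule is the one stated in the methodology: records pair when they share a Social Security number and have the same date of birth, sex, or both.

```python
def match_enrollment(va_records, medicare_records):
    """Pair VA enrollment records with Medicare Denominator File
    records on SSN plus matching date of birth and/or sex.
    (Field names here are illustrative, not the files' actual layout.)"""
    medicare_by_ssn = {}
    for m in medicare_records:
        medicare_by_ssn.setdefault(m["ssn"], []).append(m)
    matches = []
    for v in va_records:
        for m in medicare_by_ssn.get(v["ssn"], []):
            # Same SSN, plus same date of birth, sex, or both.
            if v["dob"] == m["dob"] or v["sex"] == m["sex"]:
                matches.append((v, m))
    return matches
```

Indexing the Medicare file by SSN first avoids comparing every record pair, a reasonable approach given the millions of records involved.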
We considered an inpatient stay, which can last multiple days, to be during a month when the veteran was enrolled in both VA health care and an MA plan if 1 or more days of the stay occurred during a month in which the veteran was enrolled in VA health care and an MA plan. In some instances, hospital stays had an admittance date prior to fiscal year 2010 or a discharge date after it, and in those cases, we included only the portion of the stay that occurred during fiscal year 2010. We excluded those inpatient and outpatient services that were provided by VA but were not covered by Medicare. For inpatient services directly provided by VA, we used the category of care assigned to each service by VA to exclude service categories not covered by Medicare, such as intermediate and domiciliary care. In addition, we excluded services provided by VA that went beyond Medicare benefit limits. Because MA plans may have different benefit limits than Medicare fee-for-service (FFS), we analyzed the benefits offered by a sample of 45 MA plans for 2014 for services covered by Medicare FFS that have benefit limits. We identified the most common benefit limits for those services and used those as our benefit limits for VA services. In cases where some or all MA plans had service categories with lifetime reserve days (e.g., inpatient days beyond the 90 days Medicare covers per benefit period, up to an additional 60 days per lifetime), we made the assumption that beneficiaries had 25 percent of their lifetime reserve days remaining. For inpatient services provided through VA care in the community, we excluded hospice services; services with cancelled payments; and services with a classification of dental, contract halfway house, pharmacy, reimbursement, or travel. 
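The stay-clipping rule described above, counting only the portion of a hospital stay that falls within fiscal year 2010, can be written as a short helper (the federal fiscal year runs October 1, 2009 through September 30, 2010):

```python
from datetime import date

FY2010_START = date(2009, 10, 1)
FY2010_END = date(2010, 9, 30)

def fy2010_portion(admit: date, discharge: date):
    """Return the part of an inpatient stay falling in fiscal year
    2010, or None if the stay lies entirely outside it."""
    start = max(admit, FY2010_START)
    end = min(discharge, FY2010_END)
    return (start, end) if start <= end else None
```

For example, a stay admitted September 15, 2009 and discharged October 5, 2009 contributes only its October 1 through October 5 portion to the fiscal year 2010 estimate.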
For outpatient services directly provided by VA, we excluded services that were not included in the Medicare physician fee schedule; ambulance fee schedule; clinical lab fee schedule; durable medical equipment, prosthetics/orthotics, and supplies fee schedule; anesthesiology fee schedule; or ambulatory surgical center fee schedule. We also excluded services that had a Medicare physician fee schedule status code indicating they were a deleted code, a noncovered service, had restricted coverage, or were excluded from the physician fee schedule by regulation. For outpatient services provided through VA care in the community, we made the same exclusions as for outpatient services provided by VA and also excluded hospice care services and services with cancelled payments. We calculated total VA spending and CMS payments to MA plans for beneficiaries for months in which they were enrolled in both VA health care and an MA plan in fiscal year 2010 and evaluated how, if at all, VA spending on these services affects CMS payments to MA plans. To calculate VA’s estimated spending, we assigned all Medicare-covered services directly provided by VA a cost, using VA’s average cost data; and for services provided through VA care in the community, we used the amount that VA disbursed to the service provider. We calculated total MA spending for veterans enrolled in MA and VA using actual CMS payments to MA plans for our population in fiscal year 2010. To evaluate how VA spending on Medicare-covered services affects CMS payments to MA plans, we reviewed CMS documentation and interviewed CMS officials. To evaluate the extent to which CMS has the data it needs to determine an appropriate adjustment, we reviewed CMS documentation and interviewed CMS officials. As part of this effort, we also evaluated CMS’s methodology for a study it used as the basis of its decision to not adjust county per capita Medicare FFS spending rates for VA spending on Medicare-covered services. 
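The cost-assignment rule above reduces to a two-branch lookup: VA's average cost data for directly provided services, and the disbursed amount for VA care in the community. The dictionary keys below are hypothetical stand-ins for the fields in VA's files.

```python
def service_cost(service):
    """Assign a cost per the methodology above: VA's average cost for
    directly provided care, the disbursed amount for VA care in the
    community. (Keys are illustrative.)"""
    if service["source"] == "va_direct":
        return service["va_average_cost"]
    return service["disbursed_amount"]

# Summing per-service costs yields the total VA spending estimate.
total_va_spending = sum(map(service_cost, [
    {"source": "va_direct", "va_average_cost": 120.0},
    {"source": "community", "disbursed_amount": 95.0},
]))
```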
Our evaluation was based on a review of CMS documentation and an interview with CMS officials. To assess the reliability of the data we used in our analyses, we reviewed related documentation, interviewed knowledgeable officials from CMS and VA, and performed appropriate electronic data checks. This assessment allowed us to determine that the data were reliable for our objectives. We conducted this performance audit from July 2013 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Gregory Giusto, Assistant Director; Christine Brudevold; Christine Davis; Jacquelyn N. Hamilton; Dan Lee; Elizabeth T. Morrison; Christina C. Serna; and Luis Serna made key contributions to this report.
Veterans enrolled in Medicare can also enroll in the VA health care system and may receive Medicare-covered services from either their Medicare source of coverage or VA. Payments to MA plans are based in part on Medicare FFS spending and may be lower than they otherwise would be if veterans enrolled in Medicare FFS receive some of their services from VA. Because this could result in payments that are too low for some MA plans, CMS is required to adjust payments to MA plans to account for VA spending, as appropriate. CMS determined an adjustment was needed for 2017, but not for 2010 through 2016. GAO was asked to examine how VA's provision of Medicare-covered services to Medicare beneficiaries affects payments to MA plans. GAO (1) estimated VA spending on Medicare-covered services and how VA spending affects payments to MA plans and (2) evaluated whether CMS has the data it needs to adjust payments to MA plans, as appropriate. GAO used CMS and VA data to develop an estimate of VA spending on Medicare-covered services. GAO reviewed CMS documentation and interviewed CMS and VA officials. In fiscal year 2010, the Department of Veterans Affairs (VA) health care system provided $2.4 billion in inpatient and outpatient services to the 833,684 veterans enrolled in Medicare Advantage (MA), a private plan alternative to Medicare fee-for-service (FFS). While the Centers for Medicare & Medicaid Services (CMS), an agency within the Department of Health and Human Services (HHS), generally pays Medicare FFS providers separately for each service provided, MA plans receive a monthly payment from CMS to provide all services covered under Medicare FFS. These monthly payments are based in part on a bidding target, known as a benchmark, and risk scores, which are used to adjust the payment amount to account for beneficiary demographic characteristics and health conditions. Both the benchmark and risk scores are calibrated based on Medicare FFS spending. 
Therefore, VA's provision of Medicare-covered services to veterans enrolled in Medicare FFS likely resulted in lower Medicare FFS spending and, in turn, lower overall payments to MA plans. However, the extent to which these payments reflect the expected utilization of services by the MA population remains uncertain. Specifically, payment amounts may still be too high or too low, depending on the utilization of VA services by veterans enrolled in MA plans and veterans enrolled in Medicare FFS. If, for example, veterans enrolled in MA receive a greater proportion of their services from VA relative to veterans enrolled in Medicare FFS, then the benchmark may be too high. Conversely, payments may be too low if MA-enrolled veterans tend to receive fewer Medicare-covered services from VA relative to veterans enrolled in Medicare FFS. Assessing these possible differences would require data on the services veterans receive from their MA plans. CMS began collecting these data in 2012 but, as of August 2015, had yet to take all the steps necessary to validate the accuracy of the data, as GAO has previously recommended. CMS also lacks data on VA diagnoses and utilization that could improve its methodology for determining whether an adjustment to the benchmark is needed to account for VA's provision of Medicare-covered services to veterans enrolled in Medicare FFS. Federal standards for internal control call for management to have the operational data it needs to meet agency goals, to use resources effectively and efficiently, and to help ensure compliance with laws and regulations. While CMS determined that no adjustment was necessary for 2010 through 2016 based on a 2009 study it performed, CMS's methodology did not account for services provided by and diagnoses made by VA, which can only be identified using VA's data. CMS officials updated the agency's study in 2016 using the same methodology, but with more recent data.
CMS officials told GAO that they did not plan to incorporate VA utilization and diagnoses data into their analysis because (1) they do not currently have such data and (2) incorporating these data would introduce additional uncertainty into the analysis. However, if an adjustment is needed but not made or if an adjustment is too low due to limitations with CMS's methodology, it could result in some plans being paid too much and others too little. If CMS does revise its methodology and determines that an adjustment to the benchmark is necessary, it may need to make additional adjustments to MA plan payments, as discussed in this report. CMS should (1) assess the feasibility of revising its methodology for determining if an adjustment to the benchmark is needed by obtaining diagnoses and utilization data from VA and (2) make any additional adjustments to MA plan payments as appropriate. HHS disagreed with the first recommendation, but agreed with the second. GAO maintains that VA data may improve CMS's analysis.
Despite their commitment to halve global hunger by 2015, efforts of host governments and donors, including the United States, to accelerate progress toward that goal have been insufficient, especially in sub-Saharan Africa. First, host governments have provided limited agricultural spending, with only eight meeting their 2003 pledge to direct 10 percent of government spending to agriculture. Second, multilateral and donor aid to African agriculture generally declined from the 1980s to around 2005. Third, U.S. efforts to reduce hunger, especially in sub-Saharan Africa, have been constrained by resource and scope limitations. Although African countries pledged in 2003 to direct 10 percent of government spending to agriculture, only 8 out of 38 governments had met this pledge as of 2007, according to the most current available data from the International Food Policy Research Institute. These data represent an increase of four additional countries that met the pledge between 2005 and 2007 (see fig. 1). The primary vehicle for addressing agricultural development in sub-Saharan Africa is the New Partnership for Africa’s Development (NEPAD) and its Comprehensive Africa Agriculture Development Program (CAADP). The African Union (AU) established NEPAD in July 2001 as a strategic policy framework for the revitalization and development of Africa. In 2003, AU members endorsed the implementation of CAADP, a framework aimed at guiding agricultural development efforts in African countries, and agreed to allocate 10 percent of government spending to agriculture by 2008.
Subsequently, member states established a regionally supported, country-driven CAADP roundtable process, which defines the programs and policies that require increased investment and support by host governments; multilateral organizations, including international financial institutions; bilateral donors; and private foundations. According to USAID officials, the CAADP roundtable process is designed to increase productivity and market access for large numbers of smallholders and promote broad-based economic growth. At the country level, host governments are expected to lead the development of a strategy for the agricultural sector, the coordination of donor assistance, and the implementation of projects and programs, as appropriate. As of October 2009, according to a senior USAID official, nine countries had signed CAADP compacts, and five more countries were scheduled for a CAADP roundtable process, which defines programs that are to be financed by host governments and donors. Until recent years, donors had reduced the priority given to agriculture. As a result, the share of official development assistance (ODA) from both multilateral and bilateral donors to agriculture for Africa significantly declined, from about 15 percent in the 1980s to about 4 percent in 2006 (see fig. 2). The decline in donor support to agriculture in Africa over this period is due in part to competing priorities for funding and a lack of results from unsuccessful interventions. According to the 2008 World Development Report, many of the large-scale integrated rural development interventions promoted heavily by the World Bank suffered from mismanagement and weak governance and did not produce the claimed benefits. In the 1990s, donors started to prioritize social sectors, such as health and education, over agriculture. 
In recognition of the growing global food security problem, in July 2009, the United States and other leaders assembled at the G8 Summit in L’Aquila, Italy, agreed to a $20 billion, 3-year commitment to reverse the declining trend in ODA funding for agriculture. U.S. assistance to address food insecurity has been constrained in funding and limited in scope, especially in sub-Saharan Africa. In recent years, the levels of USAID funding for development in sub-Saharan Africa have not changed significantly compared with the substantial increase in U.S. funding for emergencies. Funding for the emergency portion of Title II of Public Law 480—the largest U.S. food aid program—has increased significantly in recent years, while the funding level for nonemergencies has stagnated. In fact, the nonemergency portion accounted for 40 percent of Title II funding in 2002, but has declined, accounting for only 15 percent in 2008. While emergency food aid has been crucial in helping alleviate the growing number of food crises, it does not address the underlying factors that contributed to the recurrence and severity of these crises. Despite repeated attempts from 2003 to 2005, the former Administrator of USAID was unsuccessful in significantly increasing long-term agricultural development funding in the face of increased emergency needs and other priorities. Specifically, USAID and several other officials noted that budget restrictions and other priorities, such as health and education, have limited the U.S. government’s ability to fund long-term agricultural development programs. Also, the United States, consistent with other multilateral and bilateral donors, has steadily reduced its ODA to agriculture for Africa since the late 1980s, from about $500 million in 1988 to less than $100 million in 2006. Launched in 2002, the Presidential Initiative to End Hunger in Africa (IEHA)—which represented the U.S.
strategy to help fulfill the MDG goal of halving hunger by 2015—was constrained in funding and limited in scope. In 2005, USAID, the primary agency that implemented IEHA, committed to providing an estimated $200 million per year for 5 years through the initiative, using existing funds from Title II of Public Law 480 food for development and assorted USAID Development Assistance (DA) and other accounts. IEHA was intended to build an African-led partnership to cut hunger and poverty by investing in efforts to promote agricultural growth that is market-oriented and focused on small-scale farmers. IEHA was implemented in three regional missions in sub-Saharan Africa, as well as in eight bilateral missions: Kenya, Tanzania, and Uganda in East Africa; Malawi, Mozambique, and Zambia in southern Africa; and Ghana and Mali in West Africa. However, USAID officials acknowledged that IEHA lacks a political mandate to align the U.S. government food aid, emergency, and development agendas to address the root causes of food insecurity. Although it purported to be a governmentwide strategy, IEHA was limited to only some of USAID’s agricultural development activities and did not integrate with other agencies in terms of plans, programs, resources, and activities to address food insecurity in sub-Saharan Africa. For example, at the time of our review, because only eight USAID missions had fully committed to IEHA, and the rest of the missions had not attributed funding to the initiative, USAID had been unable to leverage all of the agricultural development funding it provides to end hunger in sub-Saharan Africa. This lack of a comprehensive strategy likely led to missed opportunities to leverage expertise and minimize overlap and duplication. 
For example, both the Millennium Challenge Corporation (MCC) and USDA are making efforts to address agriculture and food insecurity in sub-Saharan Africa, but IEHA’s decision-making process at the time of our review had not taken these efforts into consideration. In addition, IEHA had not leveraged the full extent of the U.S. assistance across all agencies to address food insecurity in sub-Saharan Africa. For example, one of the United States’ top priorities for development assistance is the treatment, prevention, and care of HIV/AIDS through the President’s Emergency Plan for AIDS Relief (PEPFAR), which is receiving billions of dollars every year. The new administration has committed to improving international food assistance by pledging U.S. leadership in developing a new global approach to hunger, and the Secretary of State has emphasized the importance of a comprehensive approach to sustainable systems of agriculture in rural areas worldwide. The U.S. share of the G8 commitment of $20 billion, or $3.35 billion, includes $1.36 billion for agriculture and related programming in fiscal year 2010 to establish food security, representing more than double the fiscal year 2009 budget request level. In our May 2008 report, we recommended that the Administrator of USAID (1) work in collaboration with the Secretaries of State, Agriculture, and the Treasury to develop an integrated governmentwide strategy that defines each agency’s actions and resource commitments to achieve food security, particularly in sub-Saharan Africa, including improving collaboration with host governments and other donors and developing improved measures to monitor and evaluate progress toward the implementation of this strategy and (2) report on progress toward the implementation of the first recommendation as part of the annual U.S. International Food Assistance Report submitted to Congress.
USAID concurred with the first recommendation but expressed concerns about the vehicle of the annual reporting. The Departments of Agriculture, State, and Treasury generally concurred with the findings. Consistent with our first recommendation, U.S. agencies have launched a global hunger and food security initiative and, as part of that initiative, are working to develop a governmentwide strategy to address global food insecurity. In April 2009, the new administration created the Interagency Policy Committee (IPC). In late September 2009, State issued a consultation document—a work in progress—that delineates a proposed comprehensive approach to food security based on country- and community-led planning and collaboration with U.S. partners. According to a senior State official, the consultation document was a product of an interagency working group. Although the document outlines broad objectives and principles, it is still a work in progress and should not be considered the integrated governmentwide strategy that we called for in our 2008 recommendation. A comprehensive strategy would define the actions with specific time frames and resource commitments that each agency undertakes to achieve food security, particularly in sub-Saharan Africa, including improving collaboration with host governments and other donors and developing improved measures to monitor and evaluate progress toward implementing the strategy. In prior products, we have identified six characteristics of an effective national strategy that may provide additional guidance to shape policies, programs, priorities, resource allocations, and standards to achieve the identified results. The consultation document outlines three key objectives: (1) to increase sustainable market-led growth across the entire food production and market chain; (2) to reduce undernutrition; and (3) to increase the impact of humanitarian food assistance. 
State has also identified five principles for advancing global food security strategy, as follows: comprehensively address the underlying causes of hunger and undernutrition, invest in country-led plans, strengthen strategic coordination, leverage the benefits of multilateral mechanisms to expand impacts, and deliver on sustained and accountable commitments. Regarding our second recommendation for annual reporting to Congress on an integrated governmentwide food security strategy, USAID suggested that, rather than the International Food Assistance Report (IFAR), a more appropriate report, such as the annual progress report on IEHA (which is not congressionally required), be used to report progress on the implementation of our first recommendation. USAID officials stated that they plan to update Congress on progress toward implementation of such a strategy as part of the agency’s 2008 IEHA report, which is forthcoming in 2009. A summary of the 2008 IEHA report, released in September 2009, identified three food security pillars—(1) immediate humanitarian response, (2) urgent measures to address causes of the food crisis, and (3) related international policies and opportunities—used to respond to the 2007 and 2008 global food crisis. However, as we concluded in our 2008 report, IEHA neither comprehensively addresses the underlying causes of food insecurity nor leverages the full extent of U.S. assistance across all agencies to fulfill the MDG goal of halving hunger by 2015, especially in sub-Saharan Africa. Finally, in response to a request from Congresswoman Rosa DeLauro, Chair of the House Committee on Appropriations, Subcommittee on Agriculture, Rural Development, Food and Drug Administration, and Related Agencies, we are currently conducting a review of U.S. efforts to address global food insecurity. Report issuance is planned for February 2010. At that time, we plan to report on (1) the nature and scope of U.S.
food security programs and activities and (2) the status of U.S. agencies’ ongoing efforts to develop and implement an integrated governmentwide strategy to address persistent food insecurity by using GAO criteria identified in prior products. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have. For questions about this testimony, please contact Thomas Melito at (202) 512-9601 or [email protected]. Individuals who made key contributions to this testimony include Phillip J. Thomas (Assistant Director), Sada Aksartova, Carol Bray, Ming Chen, Debbie Chung, Lynn Cothern, Martin De Alteriis, Mark Dowling, Brian Egger, Etana Finkler, Kendall Helm, Joy Labez, Ulyana Panchishin, Lisa Reijula, and Julia Ann Roberts. International Food Assistance: Key Issues for Congressional Oversight. GAO-09-977SP. Washington, D.C.: September 30, 2009. International Food Assistance: USAID Is Taking Actions to Improve Monitoring and Evaluation of Nonemergency Food Aid, but Weaknesses in Planning Could Impede Efforts. GAO-09-980. Washington, D.C.: September 28, 2009. International Food Assistance: Local and Regional Procurement Provides Opportunities to Enhance U.S. Food Aid, but Challenges May Constrain Its Implementation. GAO-09-757T. Washington, D.C.: June 4, 2009. International Food Assistance: Local and Regional Procurement Can Enhance the Efficiency of U.S. Food Aid, but Challenges May Constrain Its Implementation. GAO-09-570. Washington, D.C.: May 29, 2009. International Food Security: Insufficient Efforts by Host Governments and Donors Threaten Progress to Halve Hunger in Sub-Saharan Africa by 2015. GAO-08-680. Washington, D.C.: May 29, 2008. Foreign Assistance: Various Challenges Limit the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-905T. Washington, D.C.: May 24, 2007. Foreign Assistance: Various Challenges Impede the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-560. 
Washington, D.C.: April 13, 2007. Foreign Assistance: U.S. Agencies Face Challenges to Improving the Efficiency and Effectiveness of Food Aid. GAO-07-616T. Washington, D.C.: March 21, 2007. Darfur Crisis: Progress in Aid and Peace Monitoring Threatened by Ongoing Violence and Operational Challenges. GAO-07-9. Washington, D.C.: November 9, 2006. Maritime Security Fleet: Many Factors Determine Impact of Potential Limits of Food Aid Shipments. GAO-04-1065. Washington, D.C.: September 13, 2004. United Nations: Observations on the Oil for Food Program and Iraq’s Food Security. GAO-04-880T. Washington, D.C.: June 16, 2004. Foreign Assistance: Lack of Strategic Focus and Obstacles to Agricultural Recovery Threaten Afghanistan’s Stability. GAO-03-607. Washington, D.C.: June 30, 2003. Foreign Assistance: Sustained Efforts Needed to Help Southern Africa Recover from Food Crisis. GAO-03-644. Washington, D.C.: June 25, 2003. Food Aid: Experience of U.S. Programs Suggest Opportunities for Improvement. GAO-02-801T. Washington, D.C.: June 4, 2002. Foreign Assistance: Global Food for Education Initiative Faces Challenges for Successful Implementation. GAO-02-328. Washington, D.C.: February 28, 2002. Foreign Assistance: U.S. Food Aid Program to Russia Had Weak Internal Controls. GAO/NSIAD/AIMD-00-329. Washington, D.C.: September 29, 2000. Foreign Assistance: U.S. Bilateral Food Assistance to North Korea Had Mixed Results. GAO/NSIAD-00-175. Washington, D.C.: June 15, 2000. Foreign Assistance: Donation of U.S. Planting Seed to Russia in 1999 Had Weaknesses. GAO/NSIAD-00-91. Washington, D.C.: March 9, 2000. Foreign Assistance: North Korea Restricts Food Aid Monitoring. GAO/NSIAD-00-35. Washington, D.C.: October 8, 1999. Food Security: Factors That Could Affect Progress toward Meeting World Food Summit Goals. GAO/NSIAD-99-15. Washington, D.C.: March 22, 1999. Food Security: Preparations for the 1996 World Food Summit. GAO/NSIAD-97-44. Washington, D.C.: November 7, 1996. 
The number of undernourished people worldwide now exceeds 1 billion, according to the United Nations (UN) Food and Agriculture Organization (FAO). Sub-Saharan Africa has the highest prevalence of food insecurity, with 1 out of every 3 people undernourished. Global targets were set at the 1996 World Food Summit and reaffirmed in 2000 with the Millennium Development Goals (MDG) when the United States and more than 180 nations pledged to halve the number and proportion of undernourished people by 2015. In a May 2008 report, GAO recommended that the Administrator of the U.S. Agency for International Development (USAID), in collaboration with the Secretaries of Agriculture, State, and the Treasury, (1) develop an integrated governmentwide U.S. strategy that defines actions with specific time frames and resource commitments, enhances collaboration, and improves measures to monitor progress and (2) report annually to Congress on the implementation of the first recommendation. USAID concurred with the first recommendation but expressed concerns about the vehicle of the annual reporting. The Departments of Agriculture, State, and Treasury generally concurred with the findings. In this testimony, based on prior reports and ongoing work, GAO discusses (1) host government and donor efforts to halve hunger, especially in sub-Saharan Africa, by 2015, and (2) the status of U.S. agencies' implementation of GAO's 2008 recommendations. Efforts of host governments and donors, including the United States, to achieve the goal of halving hunger in sub-Saharan Africa by 2015 have been insufficient due to a variety of reasons. First, host governments' agricultural spending levels remain low--the most current data available show that, as of 2007, only 8 of 38 countries had fulfilled a 2003 pledge to direct 10 percent of government spending to agriculture. 
Second, donor aid for agriculture in sub-Saharan Africa was generally declining as a share of overall official development assistance (ODA) until 2005. Third, U.S. efforts to reduce hunger in sub-Saharan Africa were constrained in funding and limited in scope. These efforts were primarily focused on emergency food aid and did not fully integrate U.S. and other donors' assistance to the region. To reverse the declining trend in ODA funding for agriculture, in July 2009, the Group of 8 (G8) agreed to a $20 billion, 3-year commitment. The U.S. share of this commitment, or $3.35 billion in fiscal year 2010, represents more than double the fiscal year 2009 budget request for agriculture and related programming. Consistent with GAO's first recommendation, U.S. agencies are in the process of developing a governmentwide strategy to achieve global food security. In September 2009, State issued a consultation document that delineates a proposed comprehensive approach to food security. Although the document outlines broad objectives and principles, it is still a work in progress and should not be considered the integrated governmentwide strategy that GAO recommended. It does not define the actions, time frames, and resource commitments each agency will undertake to achieve food security, including improved collaboration with host governments and other donors and measures to monitor and evaluate progress in implementing the strategy. Regarding GAO's second recommendation, USAID officials plan to update Congress on progress toward the implementation of such a strategy as part of the agency's Initiative to End Hunger in Africa 2008 report, which is forthcoming in 2009.
In the past, we have suggested four broad principles or criteria for a budget process. A process should provide information about the long-term impact of decisions, both macro—linking fiscal policy to the long-term economic outlook—and micro—providing recognition of the long-term spending implications of government commitments; provide information and be structured to focus on important macro trade-offs—e.g., between investment and consumption; provide information necessary to make informed trade-offs between missions (or national needs) and between the different policy tools of government (such as tax provisions, grants, and credit programs); and be enforceable, provide for control and accountability, and be transparent, using clear, consistent definitions. The lack of adherence to the original BEA spending constraints in recent years and the expiration of BEA suggest that now may be an opportune time to think about the direction and purpose of our nation’s fiscal policy. The surpluses that many worked hard to achieve—with help from the economy—not only strengthened the economy for the longer term but also put us in a stronger position to respond to the events of September 11 and to the economic slowdown than would otherwise have been the case. Going forward, the nation’s commitment to surpluses will be tested: a return to surplus will require sustained discipline and difficult choices. It will be important for Congress and the president to take a hard look at competing claims on the federal fisc. A fundamental review of existing programs and operations can create much needed fiscal flexibility to address emerging needs by weeding out programs that have proven to be outdated, poorly targeted, or inefficient in their design and management. Last October, you and your Senate counterparts called for a return to budget surplus as a fiscal goal.
This remains an important fiscal goal, but achieving it will not be easy. Much as the near-term projections have changed in a year, it is important to remember that even last year the long-term picture did not look rosy. These long-term fiscal challenges argued for continuation of some fiscal restraint even in the face of a decade of projected surpluses. The events of September 11 reminded us of the benefits fiscal flexibility provides to our nation’s capacity to respond to urgent and newly emergent needs. However, as the comptroller general has pointed out, absent substantive changes in entitlement programs for the elderly, in the long term there will be virtually no room for any other federal spending priorities—persistent deficits and escalating debt will overwhelm the budget. While the near-term outlook has changed, the long-term pressures have not. These long-term budget challenges driven by demographic trends also serve to emphasize the importance of the first principle cited above—the need to bring a long-term perspective to bear on budget debates. There is a broad consensus among observers and analysts who focus on the budget both that BEA has constrained spending and that continuation of some restraint is necessary both in times when near-term deficits are accepted and when we achieve surpluses. These views have been articulated by commentators ranging from Federal Reserve Chairman Alan Greenspan to former CBO Director Robert Reischauer, the Concord Coalition, and President Bush. Discussions on the future of the budget process have primarily focused on revamping the current budget process rather than establishing a new one from scratch. Where the discussion focuses on specific control devices, the two most frequently discussed are: (1) extending the discretionary spending caps and (2) extending the PAYGO mechanism. The Budget Enforcement Act of 1990 (Title XIII of P.L.
101-508) was designed to constrain future budgetary actions by Congress and the president. It took a different tack on fiscal restraint than earlier efforts, which had focused on annual deficit targets in order to balance the budget. Rather than force agreement where there was none, BEA was designed to enforce a previously reached agreement on the amount of discretionary spending and the budget neutrality of revenue and mandatory spending legislation. The law was extended twice. While there is widespread agreement among observers and analysts of the budget that BEA served for much of the decade as an effective restraint on spending, there is also widespread agreement that BEA control mechanisms were stretched so far in the last few years that they no longer served as an effective restraint. In part, recurring budget surpluses undermined the acceptance of the spending caps and PAYGO enforcement. Figure 1 illustrates the growing lack of adherence to the original discretionary spending caps since the advent of surpluses in 1998. The figure shows the original budget authority caps as established in 1990 and as extended in 1993 and 1997, adjustments made to the caps, and the level of actually enacted appropriations for fiscal years 1991 through 2002. As we reported in our last three compliance reports, the amounts designated as emergency spending for fiscal years 1999 and 2000—$34.4 billion and $30.8 billion respectively—were significantly higher than in most past years. In addition to the larger than normal amounts, emergency appropriations in both 1999 and 2000 were used for a broader range of purposes than in most prior years. Emergency spending designations have not been the only route to spending above the discretionary spending caps. For fiscal year 2001 Congress took a different approach—one that also highlights the declining effectiveness of the BEA discretionary spending limits. The Foreign Operations Appropriations Act (P.L.
106-429) raised the 2001 budget authority cap by $95.9 billion, a level assumed to be sufficient to cover all enacted and anticipated appropriations. Also, in January 2001, CBO reported that advance appropriations, obligation and payment delays, and specific legislative direction for scorekeeping had been used to boost discretionary spending while allowing technical compliance with the limits. In 2002, Congress once again raised spending limits to cover enacted appropriations. The Department of Defense and Emergency Supplemental Appropriations Act for 2002 adjusted the budget authority caps upward by $134.5 billion. Nor has PAYGO enforcement been exempt from implementation challenges. The consolidated appropriations acts for both fiscal years 2000 and 2001 mandated that OMB change the PAYGO scorecard balance to zero. In fiscal year 2002, a similar instruction in the Department of Defense and Emergency Supplemental Appropriations Act eliminated $130.3 billion in costs from the PAYGO scorecard. Both OMB and CBO estimated that without the instructions to change the scorecard, sequestrations would have been required in both 2001 and 2002. BEA distinguished between spending controlled by the appropriations process—“discretionary spending”—and that which flowed directly from authorizing legislation—“direct spending,” sometimes called “mandatory.” Caps were placed on discretionary spending—and Congress’ compliance with the caps was relatively easy to measure because discretionary spending totals flow directly from legislative actions (i.e., appropriations laws). As I noted above, there has been broad consensus that, although the caps have been adjusted, they did serve to constrain appropriations. This consensus, combined with the belief that continuing some restraints is important, has led many to propose that some form of cap structure be continued as a way of limiting discretionary appropriations. 
However, the actions discussed above have also led many to note that caps can only work if they are realistic; while caps can work if they are tighter than some may like, they are unlikely to hold if they are seen as totally unreasonable or unrealistic. If they are set at levels viewed as reasonable (even if not desirable) by those who must comply with them, spending limits can be used to force choices. In the near term, limits on discretionary spending may be an important tool to prompt reexamination of existing programs as well as new proposals. Some have proposed changes in the structure of the caps by limiting them to caps on budget authority. Outlays are controlled by and flow from budget authority—although at different rates depending on the nature of the program. Some argue that the existence of both budget authority and outlay caps has encouraged provisions such as “delayed obligations” to be adopted not for programmatic reasons but as a way of juggling the two caps. The existence of two caps may also encourage moving budget authority from rapid spend out to slower spend out programs, thus pushing more outlays to the future and creating problems in complying with outlay caps in later years. Extending only the budget authority cap would eliminate the incentive for such actions and focus decisions on that which Congress is intended to control—budget authority, which itself controls outlays. This would be consistent with the original design of BEA. The obvious advantage to focusing decisions on budget authority rather than outlays is that Congress would not spend its time trying to control the timing of outlays. However, eliminating the outlay cap would raise several issues—chief among them being how to address the control of transportation programs for which no budget authority cap currently exists, and the use of advance appropriations to skirt budget authority caps. 
However, agreements about these issues could be reached—this is not a case where implementation difficulties need derail an idea. For example, the fiscal year 2002 budget proposed a revision to the scorekeeping rule on advance appropriations so that generally they would be scored in the year of enactment. Such a scoring rule change could eliminate the practice of using advance appropriations to skirt the caps. The 2002 Congressional Budget Resolution took another tack; it capped advance appropriations at the amount advanced in the previous year. This year the Administration proposed that total advance appropriations continue to be capped in 2003 and the president’s budget assumed that all advance appropriations would be frozen except for those that it said should be reduced or eliminated for programmatic reasons. There are other issues in the design of any new caps. For example, for how long should caps be established? What categories should be established within or in lieu of an overall cap? While the original BEA envisioned three categories (Defense, International Affairs, and Domestic), over time categories were combined and new categories were created. At one time or another caps for Nondefense, Violent Crime Reduction, Highways, Mass Transit and Conservation spending existed—many with different expiration dates. Should these caps be ceilings, or should they—as is the case for highways and conservation—provide for “guaranteed” levels of funding? The selection of categories—and the design of the applicable caps—is not trivial. Categories define the range of what is permissible. By design they limit tradeoffs and so constrain both Congress and the president. Because caps are defined in specific dollar amounts, it is important to address the question of when and for what reasons the caps should be adjusted. This is critical for making the caps realistic. For example, without some provision for emergencies, no caps can be successful. 
In the recent past it appears that there has been some connection between how realistic the caps are and how flexible the definition of emergency is. As discussed in both our 2000 and 2001 compliance reports, the amount and range of spending considered as “emergency” has grown in recent years. There have been a number of approaches suggested to balance the need to respond to emergencies and the desire to avoid making the “emergency” label an easy way to raise caps. The House Budget Resolution for fiscal year 2002 (H. Con. Res. 83) established a reserve fund of $5.6 billion for emergencies in place of the current practice of automatically increasing the appropriate levels in the budget resolution for designated emergencies. It also established two criteria for defining an emergency. These criteria require an emergency to be a situation (other than a threat to national security) that (1) requires new budget authority to prevent the imminent loss of life or property or in response to the loss of life or property and (2) is unanticipated, meaning that the situation is sudden, urgent, unforeseen, and temporary. In the past others have proposed providing for more emergency spending under any spending caps—either in the form of a reserve or in a greater appropriation for the Federal Emergency Management Agency (FEMA). If such an approach were to be taken, the amounts for either the reserve or the FEMA disaster relief account would need to be included when determining the level of the caps. Some have proposed using a 5- or 10-year rolling average of disaster/emergency spending as the appropriate reserve amount. Adjustments to the caps would be limited to spending over and above that reserve or appropriated level for extraordinary circumstances. 
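The rolling-average reserve idea described above is a simple calculation. The sketch below illustrates it with invented spending figures (the window lengths follow the 5- and 10-year proposals in the text; all dollar amounts are hypothetical):

```python
# Sketch of the proposed emergency reserve: size the reserve at the
# rolling average of recent disaster/emergency spending, and allow
# cap adjustments only for spending above that reserve.
# All dollar figures below are hypothetical, for illustration only.

def rolling_average_reserve(history, window):
    """Average of the most recent `window` years of emergency spending."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Hypothetical emergency spending, in billions, for the past 10 years.
emergency_spending = [4.1, 6.8, 3.2, 9.5, 5.0, 7.3, 4.4, 8.1, 6.2, 5.9]

reserve_5yr = rolling_average_reserve(emergency_spending, 5)
reserve_10yr = rolling_average_reserve(emergency_spending, 10)

# Only spending above the reserve would justify an adjustment to the caps.
this_years_emergencies = 11.0  # hypothetical
cap_adjustment = max(0.0, this_years_emergencies - reserve_5yr)

print(f"5-year reserve:  {reserve_5yr:.2f}B")
print(f"10-year reserve: {reserve_10yr:.2f}B")
print(f"Cap adjustment:  {cap_adjustment:.2f}B")
```

Under this design, routine emergencies are funded from the reserve already counted inside the caps, and only the excess over the reserve triggers a cap adjustment.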
Since the events of September 11—and the necessary responses to it—would undoubtedly qualify as such an “extraordinary circumstance,” consideration of new approaches for “emergency” spending should probably focus on what might be considered “more usual” emergencies. It has been suggested that with additional up-front appropriations or a reserve, emergency spending adjustments could be disallowed. No matter what the provision, only the commitment of Congress and the president can make any limit on cap adjustments for emergencies work. States have used this reserve concept for emergencies, and their experiences indicate that criteria for using emergency reserve funds may be useful in controlling emergency spending. Agreements over the use of the reserve would also need to be achieved at the federal level. This discussion of issues in extending the BEA caps is not exhaustive. Previously, we have reported on two other issues in particular—the scoring of operating leases and the expansion of user fees as offsets to discretionary spending. I would like to touch briefly on these. We have previously reported that existing scoring rules favor leasing when compared to the cost of various other methods of acquiring assets. Currently, for asset purchases, budget authority for the entire acquisition cost must be recorded in the budget up front, in the year that the asset acquisition is approved. In contrast, the scorekeeping rules for operating leases often require that only the current year’s lease costs be recognized and recorded in the budget. This makes the operating lease appear less costly from an annual budgetary perspective, and uses up less budget authority under the cap. Alternative scorekeeping rules could recognize that many operating leases are used for long-term needs and should be treated on the same basis as purchases.
This would entail scoring up front the present value of lease payments for long-term needs covering the same time period used to analyze ownership options. The caps could be adjusted appropriately to accommodate this change. Most recently this issue has arisen in authority provided to the Air Force to lease 100 Boeing aircraft to be used as tankers for up to 10 years when the underlying need for such aircraft is much longer—in fact, the need would likely encompass the aircraft’s entire useful life. Changing the scoring rule for leases would be in part an attempt to have the rules recognize the long-term need rather than the technical structuring of the lease. Many believe that one unfortunate side effect of the structure of BEA has been an incentive to create revenues that can be categorized as “user fees” and so offset discretionary spending—rather than be counted on the PAYGO scorecard. The 1967 President’s Commission on Budget Concepts recommended that receipts from activities which were essentially governmental in nature, including regulation and general taxation, be reported as receipts, and that receipts from business-type activities be “offset to the expenditures to which they relate.” However, these distinctions have been blurred in practice. Ambiguous classifications, combined with budget rules that make certain designs most advantageous, have led to a situation in which there is pressure to treat fees from the public as offsets to appropriations under BEA caps, regardless of whether the underlying federal activity is business or governmental in nature. Consideration should be given to whether it is possible to come up with and apply consistent standards—especially if the discretionary caps are to be redesigned. The Administration has stated that it plans to monitor and review the classification of user fees and other types of collections.
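The lease-scoring alternative discussed above—recording the present value of lease payments up front rather than one year's cost at a time—amounts to a standard annuity calculation. A sketch with hypothetical figures (the payment amount, term, and discount rate are invented for illustration):

```python
# Sketch of the lease-scoring comparison: under current rules only the
# current year's lease payment counts against the cap, while the
# proposed rule would score the present value of all payments up front.
# The payment, term, and discount rate below are hypothetical.

def present_value(payment, rate, years):
    """Present value of a level annual payment stream (paid at year end)."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

annual_lease_payment = 20.0   # hypothetical, in millions
lease_term_years = 10
discount_rate = 0.05          # hypothetical discount rate

scored_under_current_rules = annual_lease_payment  # year one only
scored_under_proposed_rule = present_value(
    annual_lease_payment, discount_rate, lease_term_years)

print(f"Current rules score:  {scored_under_current_rules:.1f}M in year one")
print(f"Proposed rule scores: {scored_under_proposed_rule:.1f}M up front")
```

The gap between the two figures is the budgetary advantage that current scorekeeping gives to leasing over purchase for the same long-term need.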
The PAYGO requirement prevented legislation that lowered revenue, created new mandatory programs, or otherwise increased direct spending from increasing the deficit unless offset by other legislative actions. As long as the unified budget was in deficit, the provisions of PAYGO—and its application—were clear. During our few years of surpluses, questions were raised about whether the prohibition on increasing the deficit also applied to reducing the surplus. Although Congress and the executive branch both concluded that PAYGO did apply in such a situation—and although the question is currently moot—it would be worth clarifying the point if PAYGO is extended. Last year the Administration proposed—albeit implicitly—special treatment for a tax cut. The 2002 budget stated that the president’s tax plan and Medicare reforms were fully financed by the surplus and that any other spending or tax legislation would need to be offset by reductions in spending or increases in receipts. Ultimately, the Department of Defense and Emergency Supplemental Appropriations Act for 2002 eliminated the need to offset any of the PAYGO legislation by resetting the 2001 and 2002 scorecard to zero. While this action was undertaken for a number of reasons, when surpluses return and Congress looks to create a PAYGO process for a time of surplus, it might wish to consider the kinds of debt targets we found in other nations. For example, it might wish to permit increased direct spending or lower revenues as long as debt held by the public is planned to be reduced by some set percentage or dollar amount. Such a provision might prevent PAYGO from becoming as unrealistic as overly tight caps on discretionary spending. However, the design of such a provision would be important—how would a debt reduction requirement be specified? How would it be measured?
What should be the relationship between the amount of debt reduction required and the amount of surplus reduction (i.e., tax cut or direct spending increase) permitted? What, if any, relationship should there be between this calculation and the discretionary caps? While PAYGO constrained the creation or legislative expansion of direct spending programs and tax cuts, it accepted the existing provisions of law as given. It was not designed to trigger—and it did not trigger—any examination of “the base.” Cost increases in existing mandatory programs are exempt from control under PAYGO and could be ignored. However, constraining legislative actions that increase the cost of entitlements and mandatories is not enough. GAO’s long-term budget simulations show that as more and more of the baby boom generation enters retirement, spending for Social Security, Medicare, and Medicaid will demand correspondingly larger shares of federal revenues. Assuming, for example, that last year’s tax reductions are made permanent and discretionary spending keeps pace with the economy, spending for net interest, Social Security, Medicare, and Medicaid consumes nearly three-quarters of federal revenues by 2030, leaving little room for other federal priorities, including defense and education. The budget process is the one place where we as a nation can conduct a healthy debate about competing claims and new priorities. However, such a debate will be needlessly constrained if only new proposals and activities are on the table. A fundamental review of existing programs and operations can create much-needed fiscal flexibility to address emerging needs by weeding out programs that have proven to be outdated, poorly targeted, or inefficient in their design and management. It is always easier to subject proposals for new activities or programs to greater scrutiny than that given to existing ones. 
It is easy to treat existing activities as “given” and force new proposals to compete only with each other. However, such an approach would move us further from, rather than nearer to, budgetary surpluses. Previously we suggested some sort of “lookback” procedure to prompt a reexamination of “the base” in entitlement programs. Under such a process Congress could specify spending targets for PAYGO programs for several years. The president could be required to report in his budget whether these targets either had been exceeded in the prior year or were likely to be exceeded in the current or budget years. He could then be required to recommend whether any or all of this overage should be recouped—and if so, to propose a way to do so. Congress could be required to act on the president’s proposal. While the current budget process contains a similar point of order against worsening the financial condition of the Social Security trust funds, it would be possible to link “tripwires” or “triggers” to measures related to overall budgetary flexibility or to specific program measures. For example, if Congress were concerned about declining budgetary flexibility, it could design a “tripwire” tied to the share of the budget devoted to mandatory spending or to the share devoted to a major program. Other variations of this type of “tripwire” approach have been suggested. The 1999 Breaux-Frist proposal (S. 1895) for structural and substantive changes to Medicare financing contained a new concept for measuring “programmatic insolvency” and required congressional approval of additional financing if that point was reached. Other specified actions could be coupled with reaching a “tripwire,” such as requiring Congress or the president to propose alternatives to address reforms. 
Or the congressional budget process could be used to require Congress to deal with unanticipated cost growth beyond a specified “tripwire” by establishing a point of order against a budget resolution with a spending path exceeding the specified amount. One example of a threshold might be the percentage of gross domestic product devoted to Medicare. The president would be brought into the process as it progressed because changes to deal with the cost growth would require enactment of a law. In previous reports we have argued that the nation’s economic future depends in large part upon today’s budget and investment decisions. In fact, in recent years there has been increased recognition of the long-term costs of Social Security and Medicare. While these are the largest and most important long-term commitments— and the ones that drive the long-term outlook—they are not the only ones in the budget. Even those programs too small to drive the long-term outlook affect future budgetary flexibility. For Congress, the president, and the public to make informed decisions about these other programs, it is important to understand their long-term cost implications. A longer time horizon is useful not only at the macro level but also at the micro-policy level. I am not suggesting that detailed budget estimates could be made for all programs with long-term cost implications. However, better information on the long-term costs of commitments like employee pension and health benefits and environmental cleanup could be made available. New concepts and metrics may be useful. We developed them before for credit programs and we need to be open to expanding them to cover some other exposures. I should note that the president’s fiscal year 2003 budget has taken a step in this direction by proposing that funding be included in agency budgets for the accruing costs of pensions and retiree health care benefits. 
The enactment of the Federal Credit Reform Act in 1990 represented a step toward improving both the recognition of long-term costs and the ability to compare different policy tools. With this law, Congress and the executive branch changed budgeting for loan and loan guarantee programs. Prior to Credit Reform, loan guarantees looked “free” in the budget. Direct loans looked like grant programs because the budget ignored loan repayments. The shift to accrual budgeting for subsidy costs permitted comparison of the costs of credit programs both to each other and to spending programs in the budget. Information should be more easily available to Congress and the president about the long-term cost implications both of existing programs and new proposals. In 1997 we reported that the current cash-based budget generally provides incomplete information on the costs of federal insurance programs. The ultimate costs to the federal government may not be apparent up front because of time lags between the extension of the insurance, the receipt of premiums, and the payment of claims. While there are significant estimation and implementation challenges, accrual-based budgeting has the potential to improve budgetary information and incentives for these programs by providing more accurate and timely recognition of the government’s costs and improving the information and incentives for managing insurance costs. This concept was proposed in the Comprehensive Budget Process and Reform Act of 1999 (H.R. 853), which would have shifted budgetary treatment of federal insurance programs from a cash basis to an accrual basis. There are other commitments for which the cash and obligation-based budget does not adequately represent the extent of the federal government’s commitment. These include employee pension programs, retiree health programs, and environmental clean-up costs. 
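The credit reform approach described above records a loan's subsidy cost up front rather than its full face value. In simplified terms, the subsidy is the amount disbursed minus the present value of expected repayments; the sketch below illustrates the idea with invented cash flows (this is not the official OMB subsidy methodology, only the underlying arithmetic):

```python
# Simplified sketch in the spirit of the Federal Credit Reform Act:
# the subsidy cost of a direct loan is the amount disbursed minus the
# present value of expected repayments, net of expected defaults.
# All figures are hypothetical, for illustration only.

def npv(cash_flows, rate):
    """Present value of year-end cash flows, starting in year 1."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

loan_disbursed = 100.0            # hypothetical, in millions
expected_repayments = [22.0] * 5  # level repayments over five years
default_rate = 0.10               # expected share of repayments lost
treasury_rate = 0.04              # hypothetical discount rate

expected_cash_in = [p * (1 - default_rate) for p in expected_repayments]
subsidy_cost = loan_disbursed - npv(expected_cash_in, treasury_rate)

# Under credit reform the budget records `subsidy_cost` up front, not the
# full 100.0, making the loan comparable to a grant of the same cost.
print(f"Subsidy cost recorded up front: {subsidy_cost:.2f}M")
```

Because the budget no longer ignores repayments, a direct loan stops looking like a grant, and a guarantee stops looking free; both are scored at their expected long-term cost.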
While there are various analytical and implementation challenges to including these costs in budget totals, more could be done to provide information on the long-term cost implications of these programs to Congress, the president, and the interested public. We are continuing to analyze this issue. To affect decision making, the fiscal goals sought through a budget process must be accepted as legitimate. For many years the goal of “zero deficit”—or the norm of budget balance—was accepted as the right goal for the budget process. In the absence of the zero deficit goal, policymakers need an overall framework upon which a process and any targets can be based. When the deficits turned to surpluses, there was discussion of goals framed in terms of debt reduction or surpluses to be saved. As difficult as selecting a fiscal goal in times of surplus is, selecting one today may seem even more difficult. You must balance the need to respond not only to those demands that existed last year—demands kept in abeyance during many years of fighting deficits—but also demands imposed on us by the events of September 11. At the same time—in part because of the demographic tidal wave looming over the horizon—the events of September 11 do not argue for abandonment of all controls. Whatever interim targets Congress and the president agree on, compliance with budget process rules, in both form and spirit, is more likely if end goals, interim targets, and enforcement boundaries are both accepted and realistic. Enforcement is more successful when it is tied to actions controlled by Congress and the president. Both the BEA spending caps and the PAYGO enforcement rules were designed to hold Congress and the president accountable for the costs of the laws enacted each session—not for costs that could be attributed to economic changes or other factors.
The events of September 11 imposed new demands on the federal budget, while pent-up demands from years of fighting deficits remain. In the past, GAO has suggested four broad principles for a budget process. That process should (1) provide information on the long-term impact of decisions, both macro—linking fiscal policy to the long-term economic outlook—and micro—providing recognition of the long-term spending implications of government commitments; (2) provide information and focus on important macro trade-offs—e.g., between investment and consumption; (3) provide information to make informed trade-offs between missions and between the different policy tools of government; and (4) be enforceable, provide for control and accountability, and be transparent, using clear, consistent definitions. New rules and goals will be necessary to ensure fiscal discipline and to focus on long-term implications of decisions. The federal government still needs a decision-making framework to evaluate choices between today's and future needs. Amending the current Budget Enforcement Act without setting realistic caps and addressing mandatory programs is unlikely to be successful because the original act used limited actions to achieve a balanced budget. A budget process appropriate for the early 21st century needs a broader framework for thinking about near- and long-term fiscal goals.
The Medicare Part D benefit is provided through private organizations that offer one or more drug plans with different levels of premiums, deductibles, and cost sharing. Plan sponsors must offer the standard Part D benefit established under MMA or an actuarially equivalent benefit. The standard benefit includes an annual deductible, coverage up to a level of spending, a coverage gap—the period when beneficiaries pay all of the costs of their drugs—and catastrophic coverage above a specified out-of-pocket limit. Sponsors may also offer enhanced benefit plans that provide a lower deductible and coverage in the coverage gap in exchange for higher premiums. Certain low-income beneficiaries are eligible for subsidies to defray most of their out-of-pocket costs. Part D sponsors offer drug coverage either through stand-alone prescription drug plans (PDPs) for those in traditional fee-for-service Medicare, or through Medicare Advantage prescription drug (MA-PD) plans for beneficiaries enrolled in Medicare’s managed care program. As of September 2007, CMS had contracts with 101 PDPs and 461 MA-PDs. The majority of Part D enrollees, about 71 percent, are in PDPs. PDP enrollment across contracts varies widely, ranging from fewer than 20 enrollees to more than 3.3 million enrollees, and is highly concentrated—the four largest contracts account for about 53 percent of total PDP enrollment in September 2007. For the drugs included on their formularies, Part D sponsors decide which drugs will have utilization management restrictions and which type of restriction they will apply. Utilization management restrictions may include prior authorization, quantity limits, and step therapy requirements. Sponsors may apply utilization management restrictions to prevent the overuse of expensive medications by requiring lower-tier drugs be tried first.
The restrictions may also serve to ensure that proper dosages are dispensed, to protect against adverse drug interactions, and to control the use of medications with potential for abuse. Each sponsor has discretion to decide under which circumstances it will apply utilization restrictions. Research conducted for The Kaiser Family Foundation has shown that sponsors’ use of formularies and utilization management restrictions varies significantly. The study reported that the 2007 formularies of the 10 largest PDPs differed in their coverage of a sample of commonly used drugs and their use of utilization management restrictions on those drugs. Four PDPs included on their formulary all of the 152 sampled drugs commonly used by Medicare beneficiaries. Among the remaining 6 PDPs, 1 covered between 90 and 100 percent, and 5 covered between 70 and 80 percent of the sampled drugs. The authors also found that the 10 PDPs placed prior authorization requirements on between 3 and 14 of the 152 sampled drugs. While 3 of the 10 PDPs did not have a step therapy requirement on any of the 152 drugs, 2 PDPs had the requirement on 8 of the drugs. The number of the 152 sampled drugs with quantity limits ranged from 3 to 62. Beneficiaries can use the coverage determination and appeals processes to challenge a utilization management restriction on a drug on the sponsor’s formulary or to request coverage for a Part D drug that is not on the sponsor’s formulary. Table 1 describes types of requests. Study sponsors have designed their coverage determination processes to allow for prompt decision making within CMS-required time frames. They obtain patient information needed to make their decisions using drug- specific coverage determination request forms and enter this information into a computer for analysis of whether coverage criteria have been met. When coverage requests cannot be approved by technical staff, they are decided by clinical staff. 
Sponsors apply drug-specific coverage criteria that incorporate the requirements established by MMA and CMS as well as factors that they have discretion to apply, such as evidence of trial and failure of lower-cost drugs. In the sample of coverage determination case files we reviewed at the seven study sponsors, coverage of the requested drug was approved in approximately two-thirds of the cases. The sponsors we studied developed coverage determination processes designed to produce decisions within the CMS-required time frames—72 hours for standard requests and 24 hours for expedited requests. To collect the patient information needed to make coverage determination decisions, study sponsors generally rely on drug-specific request forms. These forms typically ask a series of questions based on the sponsor’s established coverage criteria for a given drug. Prescribing physicians are asked to use these forms to submit clinical information about a beneficiary that generally includes the diagnosis associated with the requested drug, and may include the beneficiary’s other medical conditions and drug history. For instance, to process a coverage determination request for the osteoporosis drug Forteo, a sponsor may ask whether the beneficiary has a diagnosis of osteoporosis, has multiple risk factors for fractures, and has tried and failed other specific osteoporosis therapies. Some study sponsors had dozens of different forms for drugs in different classes, with a varying number of questions. For example, one sponsor asked 5 questions for the sleep medications Ambien and Lunesta and 23 questions for the injectable drug Pegasys, used to treat hepatitis. If a physician makes a coverage determination request over the phone, sponsor staff have on-line access to the drug-specific questions they need to ask.
With the information submitted by the prescribing physician, study sponsors used computer algorithms—a series of questions with yes/no answers—in order to make expeditious, consistent decisions. Technical staff, such as pharmacy technicians or call center representatives, enter the patient information into the computer system. The algorithms are used to assess the information to determine whether the beneficiary meets the sponsor’s coverage criteria for the specific drug in question. This process generates rapid, consistent decisions if sponsors receive sufficient information from prescribing physicians. When the technical staff cannot approve the drug, coverage determination requests are forwarded for a decision by clinical staff with more expertise, such as staff pharmacists. One sponsor reported that, on average, a standard coverage determination involving prior authorization takes about 40 minutes after the prescribing physician provides the needed information. However, the pressure to make a coverage determination within the CMS-mandated time frames increased the likelihood that sponsors may deny requests when complete information is not at hand or cannot be obtained quickly. Two study sponsors told us that if they were not successful in getting information they requested, they made decisions based on the information they had at the time. For example, if physicians are asked to provide a patient’s medical records as part of their request but do not provide that information quickly, the sponsor may deny the request in order to meet the required time frame. Among the coverage determination case files we reviewed at the study sponsors, the sponsor requested additional information from the physician in about 13 percent of the cases and about 30 percent of the denials were for lack of requested medical information.
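The yes/no algorithm described above can be sketched as a simple rule check. Actual sponsor criteria are proprietary and drug-specific; the criteria below are hypothetical (loosely modeled on the Forteo questions mentioned earlier), and the routing logic is an illustration, not any sponsor's actual system:

```python
# Sketch of a sponsor's coverage-determination algorithm: drug-specific
# yes/no criteria are applied to the information the prescriber submits.
# If every criterion is met, technical staff can approve automatically;
# otherwise the request is routed to clinical staff for review.
# The criteria below are hypothetical, for illustration only.

CRITERIA = {
    # drug name -> list of (question key, required answer)
    "forteo": [
        ("has_osteoporosis_diagnosis", True),
        ("multiple_fracture_risk_factors", True),
        ("tried_and_failed_other_therapies", True),
    ],
}

def coverage_determination(drug, answers):
    """Return 'approve' if all criteria are met, else route to a clinician."""
    criteria = CRITERIA.get(drug.lower())
    if criteria is None:
        return "route to clinical staff"  # no algorithm for this drug
    if all(answers.get(q) == required for q, required in criteria):
        return "approve"
    return "route to clinical staff"

request = {
    "has_osteoporosis_diagnosis": True,
    "multiple_fracture_risk_factors": True,
    "tried_and_failed_other_therapies": True,
}
print(coverage_determination("Forteo", request))  # prints: approve
```

Note that the algorithm can only approve; any unmet or missing answer falls through to clinical staff, which is consistent with the report's observation that incomplete information drives many denials when the clock runs out.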
One sponsor noted that there would probably be fewer denials at the coverage determination stage if sponsors had more time to acquire needed information. Sponsors apply a range of coverage criteria to evaluate requests for drugs with restrictions. Their criteria are used, in part, to determine whether a requested drug can be covered under Part D program rules set by MMA or CMS. Sponsors consider a number of factors in reviewing a request, including the following: Should the drug be covered under another part of the Medicare program? There are an estimated 6,000 unique drug products that potentially could be covered under either Part B or Part D of the Medicare program. Which part of the Medicare program is the appropriate payer depends on factors such as the patient’s diagnosis, when the beneficiary is taking the drug, or the setting in which the drug is being administered. For instance, immunosuppressive drugs suppress the body’s immune response and are used to treat autoimmune diseases—diseases in which the body attacks its own tissues—and to prevent rejection of a transplanted organ. Immunosuppressives are covered by Part B when the physician prescribes them after a Medicare-covered organ transplant and by Part D for all other outpatient uses. Is the requested drug in a Part D-excluded drug class? Although sponsors generally can not cover drugs in 1 of 10 statutorily excluded drug categories, beneficiaries or prescribing physicians may request a coverage determination for a drug that is in an excluded drug category. For such coverage determinations, the physician must show that the drug is prescribed for a purpose that is not excluded under the law or that it has been mistakenly classified by the sponsor as excluded. For instance, medications for coughs and colds are generally excluded from Part D. 
However, CMS has issued guidance to plan sponsors that cough and cold medications are eligible to meet the definition of a Part D drug in clinically relevant situations. For example, if a physician prescribes a cough suppressant to a beneficiary because the beneficiary has osteoporosis and may break a bone if the cough is not controlled, then the cough suppressant would be considered a Part D-covered drug. Is the requested drug medically necessary? Part D sponsors must approve coverage when the requested drug at the requested dosage is medically necessary. In order to show medical necessity, the prescribing physician must provide a statement that the requested drug is medically necessary because (1) all of the covered Part D drugs on the sponsor’s formulary for treatment of the same condition would not be as effective for the beneficiary, would have adverse effects for the beneficiary, or both; (2) the prescription drug alternatives on the formulary have been ineffective in the past, are likely to be ineffective, or are likely to cause an adverse reaction for the beneficiary; or (3) the number of doses available under a quantity limit for a requested drug has been ineffective or is likely to be ineffective. In addition, sponsors are required to approve a tiering exception if they agree with the prescribing physician’s statement that treatment of the beneficiary’s condition using the preferred alternative drug would not be as effective for the beneficiary as the requested drug, would have adverse effects for the beneficiary, or both. Is the requested drug being prescribed for a medically accepted indication? Under Medicare Part D, a drug is considered to be prescribed for a medically accepted indication if the drug is FDA-approved for that use. Any off-label use—one not approved by FDA—is considered medically accepted only if it is supported by a citation in one of the three designated drug reference guides. 
Beneficiary advocates have argued that the coverage restrictions on those off-label drug uses not listed in the designated drug reference guides cause beneficiaries to be denied coverage for needed drugs, some of which beneficiaries had been previously taking successfully. For instance, a beneficiary without cancer may have a condition which causes severe pain. After trying several medications, the beneficiary may have less pain with the use of Actiq, a medication approved only for breakthrough pain in cancer patients. Under Part D, the beneficiary would be denied coverage for the drug, even if the beneficiary’s physician stated that the medication was medically necessary, because the drug was not prescribed for a medically accepted indication, and this use is not listed in one of the three drug reference guides. Beyond ensuring compliance with MMA and CMS coverage rules, sponsors have discretion to develop their own drug-specific coverage criteria. Sponsors in our study also considered the following factors. Has the beneficiary tried and failed on a generic or preferred alternative drug? To reduce costs, sponsors may require beneficiaries to try and fail on generic or preferred alternative drugs before approving coverage for higher-cost drugs. Sponsors told us, and CMS has affirmed, that beneficiaries generally can switch to a therapeutically equivalent drug without disruption to their care. Therefore, although a beneficiary has been stable on a particular drug for a period of time, sponsors may require the beneficiary to switch to a generic or preferred alternative drug. Has the physician conducted specific tests to confirm the beneficiary’s diagnosis or condition? Study sponsors sometimes also ask for information from specified tests or studies that document a patient’s diagnosis or condition. 
For instance, one sponsor told us that it requires genotype tests for hepatitis drugs because the length of time a patient should be on the drug is determined by the genotype. Is the beneficiary already stable on the requested drug? Sponsors may consider whether the beneficiary is stable on the requested drug when deciding whether to approve or reapprove coverage. Does the beneficiary have other medical conditions or take other medications that may contraindicate the use of the requested drug? For instance, one sponsor’s criteria for the drug Actiq—used to treat breakthrough cancer pain—stipulated that the enrollee must not have severe asthma or chronic obstructive pulmonary disease, which are contraindications to Actiq. This same sponsor’s criteria for the antidepressant Emsam noted that the medication should not be approved if the enrollee is taking other types of antidepressants, such as monoamine oxidase inhibitors or tricyclic antidepressants. Duration of the approval period depends upon the drug requested and on plan policies. In general, sponsors told us they approve coverage of a requested drug for either the duration of the year or a 12-month period. Some sponsors also approve requests for as long as the beneficiary remains enrolled in the plan in cases where the drug treats an illness that can last for the duration of a person’s life (such as multiple sclerosis). All sponsors said that certain drugs, such as those with a specified length of treatment for safety reasons, may be approved for shorter time periods. For example, some injectable drugs are approved for 24 weeks. If coverage criteria are not met, study sponsors’ denial letters generally included the reason for the decision. For instance, denial notices may state that the requested drug was not covered because the preferred alternative drug must be tried first.
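The sequence of coverage questions described above can be sketched as a simple decision function. This is an illustrative outline only, not any sponsor's actual adjudication system; the `Request` type and all of its field names are hypothetical simplifications, and real criteria are drug-specific.

```python
from dataclasses import dataclass


@dataclass
class Request:
    # All fields are hypothetical simplifications of the questions above.
    covered_part_d_drug: bool            # not Part D-excluded or Part B-covered
    medically_accepted_indication: bool  # FDA-approved use or cited in a designated guide
    medically_necessary: bool            # supported by the physician's statement
    tried_and_failed_alternatives: bool  # step therapy / preferred-drug criterion
    contraindicated: bool                # other conditions or drugs rule out the request


def coverage_determination(req: Request) -> tuple[bool, str]:
    """Walk the coverage questions in roughly the order described above."""
    if not req.covered_part_d_drug:
        return False, "not a Part D-covered drug"
    if not req.medically_accepted_indication:
        return False, "use is not a medically accepted indication"
    if req.contraindicated:
        return False, "requested drug is contraindicated for this beneficiary"
    if not req.tried_and_failed_alternatives and not req.medically_necessary:
        return False, "preferred alternative drug must be tried first"
    return True, "coverage criteria met"
```

Note that in this sketch a physician's medical-necessity statement bypasses the tried-and-failed check, mirroring the exception rules described above.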
Some, but not all, sponsors that we visited sent notification letters to prescribing physicians that identified which preferred drug should be tried. The IRE told us that some sponsor denials are vague. For instance, sponsors may not do a good job of explaining which specific requirements have not been met. Study sponsors approved about 67 percent of the coverage determination requests among the October 2006 requests that we reviewed. Approval rates varied among sponsors, ranging from 57 percent to 76 percent. We also found that coverage determinations in MA-PD plans were more likely to be approved than coverage determinations in PDPs; the approval rate for MA-PD plans was 72 percent, compared to 63 percent for PDPs. Sponsors in our study approved standard requests more often than expedited requests. The approval rates for standard and expedited requests were 67 percent and 53 percent, respectively. We found that nearly all requests for coverage determinations were made by physicians on behalf of their patients. Approximately 94 percent of the coverage determinations in our case file review were requested by a physician or a physician’s office staff. At the coverage determination stage, we also found that only a small proportion of requests were expedited. Of the coverage determination case files we reviewed, just 4 percent of the requests were expedited. We found that the most commonly requested drug class and category combinations were, in order of decreasing frequency, (1) blood modifier agent/hematopoietic, (2) endocrine-metabolic agent/antidiabetic, (3) central nervous system agent/analgesic, (4) dermatological agent/antifungal, (5) gastrointestinal agent/antiulcer, (6) anti-infective agent/antifungal, and (7) musculoskeletal agent/antirheumatic. These seven drug class and category combinations accounted for about half of the requested drugs in the 421 cases we reviewed. 
At the individual drug level, the five most requested drugs—collectively accounting for about one-quarter of our sampled coverage determination requests—were Procrit, Lamisil, Byetta, Celebrex, and Omeprazole. The appeals process allows for individuals not involved in the previous case review to make better-informed decisions by considering additional supporting evidence. In making redeterminations—the first level of appeal—sponsor staff evaluate any corrected or augmented evidence to see if coverage criteria have been met. In conducting reconsiderations— the second level of appeal—IRE officials consider the information the sponsor reviewed, along with any additional support that may be available. In many cases, appeals result in new interpretations of whether the requested drug should be covered. CMS appeals data show that, from July 2006 through December 2006, the median approval rate across all Part D sponsors was 40 percent; from July 2006 through June 2007, appeals to the IRE received full or partial approval in 28 percent of cases. We found that, for some standard appeals, missing AOR documentation contributed to delays in study sponsor redetermination decisions and dismissals of IRE reconsideration cases. Some study sponsors have developed “workarounds” to eliminate the need for a completed AOR form. Appeals processes at both the study sponsors’ level and the IRE typically involve (1) reviewing more information than was available for the previous decision level and (2) different decision makers. In conducting redeterminations—the first level of appeal—sponsors typically receive corrected or augmented patient information that was not submitted within the allotted time frame for the coverage determination. 
For example, prescribing physicians may not have identified the beneficiary’s conditions with sufficient specificity or included a complete drug use history when making the coverage determination request; for redeterminations, physicians often provide new information on the reason for the requested drug and a list of drugs the beneficiary had previously tried that were found to be ineffective or not well tolerated. Physicians may forward laboratory test results or chart notes that sponsors had requested previously. In addition, our reviews of sponsors’ redetermination case files showed that physicians revise the statements they had provided originally to address issues raised in the sponsors’ coverage denial letters. To determine whether the sponsor’s drug-specific coverage criteria have been met, study sponsor staff reassess the submitted information, along with any additional support not previously considered. For redeterminations that involve requests for off-label uses of drugs, study sponsors said they make an effort to look for citations in one of the three Part D-designated drug reference guides to see if one of them supports use of the drug for the indication for which it was prescribed. In reviewing requests for dosage limit exceptions, in addition to considering a beneficiary’s medical record, study sponsors may also examine medical research literature for evidence not included in the reference guides. In addition, sponsors may discuss a case directly with the prescribing physician. We found that study sponsors contacted prescribing physicians to obtain additional information in 31 percent of the redetermination case files we reviewed. CMS requires that redetermination decisions be made by individuals not previously involved in reviewing the drug request. Study sponsors’ redetermination decision staff making clinical decisions consist largely of pharmacists or staff medical directors.
If the staff pharmacist does not approve a decision, a medical director makes the final decision. CMS additionally requires that decisions concerning the medical necessity of the requested drug be made by a physician with expertise in the field of medicine appropriate to the condition being treated. Some of the study sponsors contract with external physicians or utilization review companies for this function. Along with the information in the sponsor case file, IRE staff review any new supporting information they receive or solicit from the prescribing physician as well as relevant medical literature. In making a reconsideration decision—the second level of appeal—the IRE is likely to have more information than did the sponsor at the first level of appeal. It not only has information from the sponsor’s case file, but also information in the physician’s letter or beneficiary correspondence that may be submitted with the reconsideration request. In addition, IRE staff told us that they contact the physician or beneficiary to obtain specific details about the beneficiary’s health or to clarify the information submitted, such as adverse effects the beneficiary has experienced or contraindications to the preferred formulary drugs. During its review, the IRE may also perform additional research in the drug reference guides on the reason the physician is prescribing a particular drug or dosage. For instance, IRE staff may be successful in researching the Part D-designated drug reference guides for a specific off-label drug use that a sponsor had not identified. As Medicare’s independent external appeals contractor, the IRE employs medical professionals subject to conflict-of-interest prohibitions, which bar them from having certain relationships with any health insurance utilization review company, provider network, or drug supply company. The IRE staff conducting most reconsiderations are predominantly physicians credentialed in various medical specialties. 
For example, according to IRE officials, appeals cases involving opioids are handled by pain management specialists because these cases need a specialty review. IRE officials also said that, when necessary, the IRE contracts with external specialists to review cases. Consideration of new evidence during the appeals process often leads to decisions that reverse the sponsors’ decisions. At the first level of appeal, CMS appeals data show that, from July 2006 through December 2006, the median approval rate across all Part D sponsors was 40 percent. Across Part D sponsors, approval rates ranged from 0 percent to 100 percent for all appeals during that period. PDP sponsors were somewhat more likely to approve coverage; the median rate of approvals for PDPs was about 45 percent, compared to about 38 percent for MA-PDs. At the second level of appeal, IRE appeals data show full or partial coverage approvals of the requested drug in about a quarter of the 11,679 reconsideration cases decided from July 2006 through June 2007. IRE data for this period show that the IRE either fully or partially approved coverage in 28 percent of appeals and denied coverage in 36 percent of appeals. A significant proportion of IRE cases, 34 percent, were dismissed for various reasons, such as the lack of AOR documentation. (See fig. 1.) The 11,679 cases reviewed by the IRE addressed a variety of issues. From July 2006 through June 2007, about one-third of IRE cases concerned a drug utilization restriction, such as a prior authorization requirement or quantity limit. Another 33 percent of IRE cases were requests for a drug not covered under Part D, such as a drug in one of the 10 Part D-excluded categories. Twenty-eight percent of cases were requests for Part D drugs not on the sponsor’s formulary. The remaining 5 percent of IRE cases involved issues such as requests to pay a lower cost-sharing level and reimbursement for drugs provided outside of the sponsor’s pharmacy network. 
IRE approval rates for Part D appeals were highest for disputes involving drug utilization restrictions and lowest for cases involving Part D-excluded drugs. The IRE fully or partially approved coverage in 39 percent of the appeals concerning a drug utilization restriction, 30 percent of appeals involving nonformulary drugs, and 18 percent of appeals for coverage of a drug that sponsors denied as an excluded drug under Part D. (See fig. 2.) As part of the decision process, the IRE determines whether the sponsor has met its obligation for coverage under the Part D rules. IRE staff told us that during the first year of the program, some sponsors denied requests because they did not fully consider the beneficiary’s overriding medical need for the requested drug, as CMS requires. In contrast, at the IRE, the beneficiary’s medical condition is the determining factor when the sponsor’s coverage criteria cannot be met. For example, in one case, a sponsor denied a physician’s request for the drug Celebrex—a drug used to treat arthritis and other conditions—because the physician did not provide documentation of the beneficiary’s trial and failure of the sponsor’s formulary medications—Naproxen, Ibuprofen, or Ketoprofen. In this case, the sponsor did not cover the requested drug because its step therapy requirement had not been met. However, in reviewing the case, the IRE applied medical necessity criteria because the prescribing physician stated that use of the sponsor’s preferred formulary alternatives was contraindicated for treatment of his patient’s condition. As a result, the IRE overturned the sponsor’s decision, stating that an exception to the sponsor’s step therapy requirement was warranted and that the sponsor should provide coverage of the drug until the end of the plan year.
At our study sponsors and at the IRE, we found evidence that decisions on standard appeals submitted by prescribing physicians—redeterminations and reconsiderations—had been delayed and sometimes dismissed due to missing AOR forms. Without written authorization from the beneficiary, sponsors and the IRE may begin collecting relevant documentation to support a physician-submitted standard request, but they cannot complete their review. Also, the time frame for making the decision does not begin until the completed AOR form is received. According to most study sponsors and the IRE, if they do not receive the signed AOR form within a reasonable amount of time—which ranges from about a week to about a month after receiving the request—they deny or dismiss the request. Of the cases we reviewed at the study sponsors, missing AOR forms generated processing delays in 7 percent of cases. These delays were typically about 14 days, but could stretch to 67 days. At the IRE, missing AOR forms caused dismissals of about 9 percent of appeals, which is about one in every five reconsideration cases that were dismissed. Data on the prevalence of delays in processing redetermination requests attributable to missing AOR forms mask the fact that some sponsors in our study have developed “workarounds” to eliminate the need for a completed AOR form. For example, one sponsor told us it treats all physician appeals as expedited, regardless of the priority level indicated by the physician. Our review of a sample of sponsors’ case files showed that 26 percent of redetermination requests were classified as expedited compared to 4 percent of the coverage determination case files we reviewed. Although expediting requests precludes the need for an AOR form, one sponsor stated that because these requests may not be truly urgent, it may not be in the beneficiary’s best interest for the appeal to be rushed. 
Expedited appeals allow less time—72 hours versus 7 days—for reviewers to consider the evidence at hand or to request additional information, which might affect the outcome of the appeal. For the case files we reviewed, the denial rate for expedited redeterminations was 73 percent compared with a denial rate of 67 percent for standard redeterminations. In another workaround, sponsors obviate the need to obtain two signatures—the beneficiary’s to appoint the physician to act as a representative and the physician’s to accept the appointment—by arranging for the redetermination request to be made by the beneficiary. For example, one sponsor reported contacting beneficiaries to ask whether they want to initiate the redetermination instead of their physicians, who had contacted the sponsor first. Our case file reviews showed that beneficiaries made requests in about 36 percent of redetermination cases compared to 2 percent of coverage determination cases. This approach was designed to identify those beneficiaries who wish to initiate an appeal rather than having their physician appeal on their behalf, thus reducing the need for the AOR paperwork. Most sponsors in our study and IRE officials reported that the requirement that prescribing physicians be formally appointed beneficiary representatives with a signed AOR form in order to initiate standard appeals is an administrative impediment. The only actions prescribing physicians without explicit authorization cannot take are initiating the appeal, opening discussions with a sponsor or the IRE about an ongoing appeal requested by the beneficiary, or receiving notices of adverse standard redeterminations or reconsiderations. In practical terms, prescribing physicians’ involvement in a standard appeal does not differ significantly whether they are appointed representatives or not. 
CMS has improved its efforts to inform beneficiaries about sponsors’ performance, but its oversight of sponsors is hindered by poorly defined reporting requirements. CMS publicly reports information on two performance metrics: the rate at which sponsors met required time frames for decision making and the rate at which the IRE concurs with sponsors’ redetermination decisions. In November 2007, for one of these metrics, CMS modified the way it informs beneficiaries by grading sponsors’ performance against absolute benchmarks, rather than relative rankings as it had done previously. To oversee sponsors’ processes, CMS requires that sponsors report data on several coverage determinations and appeals measures; however, the agency provided minimal guidance on the information to be included in each coverage determination measure. As a result, our study sponsors have reported data differently to CMS, hindering the agency’s ability to monitor sponsors’ activities adequately. In its audits of PDP sponsors, CMS found that most of the sponsors it audited were noncompliant with many of the coverage determination and appeals requirements. Using quarterly IRE data, CMS has developed two performance metrics to gauge how well sponsors’ coverage determination and appeals processes are operating. CMS calculates metrics on (1) the rate at which sponsors met required time frames for coverage determinations and redeterminations, as measured by the number of cases, per 10,000 beneficiaries, automatically forwarded to the IRE because of delays in sponsors’ decision making; and (2) the rate at which the IRE concurs with sponsors’ redetermination decisions, as measured by the percentage of cases in which the IRE upheld, or agreed with, sponsors’ coverage denials. CMS officials told us that the agency selected these two performance metrics, in part, because beneficiaries could interpret their meaning easily. 
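The two CMS performance metrics described above amount to simple rate calculations. The sketch below restates them in code; the input figures are invented for illustration and are not actual sponsor data.

```python
def auto_forward_rate(cases_auto_forwarded: int, enrollees: int) -> float:
    """Timeliness metric: cases automatically forwarded to the IRE because of
    missed decision time frames, expressed per 10,000 beneficiaries."""
    return cases_auto_forwarded * 10_000 / enrollees


def ire_concurrence_rate(upheld_by_ire: int, ire_decided: int) -> float:
    """Percentage of a sponsor's appealed denials that the IRE upheld."""
    return upheld_by_ire * 100 / ire_decided


# e.g., 12 auto-forwarded cases among 400,000 enrollees -> 0.3 per 10,000
print(auto_forward_rate(12, 400_000))   # 0.3
# e.g., the IRE agreed with 45 of 100 appealed denials -> 45.0 percent
print(ire_concurrence_rate(45, 100))    # 45.0
```

As the report notes, both rates inherit the limitations of the underlying IRE data: inappropriately forwarded cases inflate the first, and new evidence arriving at the IRE depresses the second.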
CMS includes the two metrics in information made available to the public on the Medicare Prescription Drug Plan Finder—a Web site designed to help beneficiaries compare drug plans. CMS account managers—staff responsible for overseeing sponsors’ performance—review sponsors’ scores on these performance metrics to monitor how well their coverage determination and appeals processes are operating. Sponsors with the highest rates of cases forwarded automatically to the IRE and the lowest percentages of cases in which the IRE agreed with their decisions are viewed as problematic. When a sponsor is identified as an outlier, the assigned account manager contacts the sponsor to discuss its coverage determination and appeal procedures and works with the sponsor to identify ways to improve its performance, such as conducting additional training sessions. Both the IRE and the sponsors in our study noted certain limitations in the data underlying each of these metrics. The number of automatically forwarded cases used for the timeliness metric may understate sponsors’ timeliness. According to IRE officials, some sponsors have forwarded cases to the IRE believing they had exceeded the required decision time frames when they had not. According to the officials, these sponsors automatically forwarded cases when they had not yet received a signed AOR form or a physician statement to support a coverage request. In such cases, the required time frames have not yet expired and the IRE returns the case to the sponsor for processing. Because these sponsors automatically forwarded cases to the IRE inappropriately, their rates of missed time frames are higher than they should be. Another limitation is that the performance metric on the IRE’s concurrence with sponsors’ decisions can be misleading. In discussing this measure with the sponsors in our study, one sponsor commented that a low rate of IRE agreement with its decisions implies, unfairly, that its decisions were flawed.
Sponsors contend that the IRE often receives additional supporting evidence that results in an overturn, as we found by interviewing IRE officials. They state that had they received the same information within their time frame for processing the case, they might have approved the request. In their view, a low percentage of cases in which the IRE agrees with a sponsor’s decisions does not necessarily mean that the sponsor was not performing well. However, a CMS official asserted that sponsors are responsible for collecting all the information needed to adjudicate a request in the time allotted and are accountable if they do not obtain the same information available to the IRE. CMS uses these performance metrics to inform beneficiaries of sponsors’ performance and to encourage poor-performing sponsors to do better. In an effort to improve the information shared with beneficiaries for the 2008 open enrollment period, the agency changed the manner in which it calculates and displays these metrics—using a star designation system. For the 2007 open enrollment period, CMS used 2006 data from the IRE to rank order sponsors’ rates, classify sponsors into groups based on sponsors’ relative performance, and assign a star designation to each group. For example, CMS chose to assign three stars, indicating very good performance, to 90 percent of sponsors for each metric. The next 5 percent of sponsors were assigned two stars, indicating acceptable performance, while the remaining sponsors were given one star, indicating poor performance. By setting the star designations using relative comparisons rather than defined benchmarks for different levels of performance, CMS implied that those sponsors receiving the most stars had superior performance while those with fewer stars were not meeting a CMS-set standard.
The clustering of 90 percent of sponsors in the three-star designation could have been misinterpreted by beneficiaries as identifying those sponsors with superior performance when, in fact, by definition, 90 percent of sponsors received three stars. Moreover, the performance of sponsors in the top category varied significantly. For example, among the 26 PDP sponsors receiving three stars, the percentage of cases where the IRE concurred with sponsors’ redetermination decisions ranged from 39 to 75 percent. At the same time, the remaining categories were quite compressed. A relatively small difference in rates could have placed a sponsor in the lowest category rather than the highest category. CMS designated an IRE concurrence rate of 39 percent as very good performance, a 36 percent rate as acceptable performance, and a 34 percent rate as poor performance. Recognizing the value of comparing sponsor performance against absolute standards (benchmarks), CMS changed its star designation system in time for the 2008 open enrollment period. For the performance metric on IRE concurrence, the agency now assigns sponsors to one of five star categories using fixed benchmarks rather than a percentile ranking. Table 2 shows how sponsors are assigned to different performance categories for the metric on IRE concurrence. For example, under the new designation system, only those sponsors with IRE concurrence rates better than 95 percent receive five stars, indicating excellent performance. Also, stars are only displayed for sponsors that have at least five appeals cases reviewed by the IRE. For the 2008 open enrollment period, CMS expanded its star designation system for the timeliness metric from three stars to five stars. Although it retained the relative ranking approach, CMS more evenly distributed the sponsors across the star categories.
For example, whereas previously CMS assigned the top 90 percent of sponsors—those with the lowest rates of cases forwarded to the IRE because of missed time frames—the highest rating, the agency now assigns the highest rating to the top 15 percent of sponsors. Previously, CMS assigned 5 percent of sponsors the lowest rating, but now it assigns the lowest rating to 15 percent of the sponsors. The remaining sponsors are distributed more evenly across the two-, three-, and four-star designations. CMS continues to include among the top performing sponsors those with no cases forwarded to the IRE due to missed time frames. In our examination of 2006 publicly reported performance data, we found that, among the 60 PDP sponsors receiving three stars for making timely decisions, 21 did not forward any cases to the IRE because of missed time frames. CMS’s oversight of sponsors’ coverage determination and appeals processes include both monitoring and auditing. In monitoring the coverage determination processes, CMS reviews quarterly data reported by sponsors. The coverage determination measures selected for reporting capture information about the extent to which beneficiaries use the coverage determination process and the outcomes of that process. An agency official involved in selecting the measures to be reported noted that CMS sought to minimize the administrative burden on sponsors by selecting measures for which data were likely to be readily available. For 2006, the first year of the Part D program, CMS required sponsors to submit data on the following types of coverage determination cases: the number of requests and the number of approvals for formulary drugs requiring prior authorizations; the number of requests and the number of approvals for formulary exceptions, such as for nonformulary drugs; and the number of requests and the number of approvals for tiering exceptions. 
CMS used the submitted coverage determination data to calculate an overall request rate and an overall approval rate. In its analysis of the 2006 sponsor-reported data, CMS identified sponsors with relatively high overall rates of coverage requests and low overall rates of approvals. The agency wrote to these sponsors requesting that they confirm whether their submitted data were accurate and not the result of clerical errors. We found that our study sponsors submitted information differently to CMS because the agency provided limited guidance on the information to be included in each coverage determination measure. CMS defined the coverage determination measures sponsors are required to report too broadly, thus allowing each sponsor to use its existing data categorizations for each of the measures. After examining data reported for the third and fourth quarters of 2006, and following up with our study sponsors, we found substantial discrepancies in how sponsors reported these overall data for requests and approvals, as the following illustrate. While four of our seven sponsors said their measure of formulary drug requests requiring prior authorizations included requests for quantity limit exceptions, three sponsors included only a portion or none of these types of cases. For example, one sponsor told us that it omitted 6,032 requests for quantity limit exceptions in reporting the formulary drug request measure in the fourth quarter of 2006. These cases accounted for about 22 percent of the sponsor’s total coverage determination requests during that period. Another sponsor did not include 4,608 requests involving quantity limit exceptions in reporting the formulary drug request measure. These cases accounted for about 25 percent of all its coverage determination requests in the fourth quarter of 2006. Some, but not all, study sponsors included other types of cases in the requests and approvals for formulary drug measures. 
For example, three of our seven study sponsors included cases disputing coverage under Part B or Part D in their formulary drug measures, and four study sponsors included requests for drugs excluded from coverage under Part D. One of our seven study sponsors stated that, while it included all prior authorization requests in the formulary drug request measure, it included all requests for step therapy and quantity limits in the nonformulary drug request measure, based on a definition for nonformulary drugs in the Medicare Part D manual. In contrast, another sponsor in our study reported in the nonformulary drug category requests for drugs that it inadvertently did not include when designing its open formulary. We identified two sponsors that double counted the number of requested and approved tiering exceptions by reporting them in two different measures. For example, one of our study sponsors included 13,986 requests for tiering exceptions in its count of prior authorization requests for formulary drugs reported to CMS. The inclusion of these tiering exceptions in the number of requests for formulary drugs increased the requests for formulary drugs reported by about 43 percent. For the 2007 contract year, CMS made a number of modifications to its reporting requirements. CMS instructed sponsors to begin reporting data on the number of requests and approvals for quantity limit exceptions measures and renamed the other measures to better convey the types of coverage determinations to include in their reporting. CMS also instructed sponsors to exclude cases related to Part B versus Part D coverage from their data submissions. However, because CMS has yet to address categorization issues, such as whether the measures should be mutually exclusive, sponsors’ data reporting may remain inconsistent. Until data reliability issues are addressed, CMS may not be in a position to use these measures to oversee sponsors’ coverage determination process effectively. 
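The reporting inconsistencies described above come down to whether the measures are mutually exclusive: when a case can be counted under more than one measure, sponsor totals no longer reconcile. The sketch below illustrates the arithmetic with invented category names and counts, not any sponsor's actual data.

```python
from collections import Counter

# Hypothetical caseload in which each case has exactly one category.
cases = (["prior_authorization"] * 500
         + ["quantity_limit_exception"] * 200
         + ["tiering_exception"] * 300
         + ["nonformulary_exception"] * 100)

# Mutually exclusive reporting: each case is counted under exactly one
# measure, so the measure totals reconcile with the overall caseload.
by_measure = Counter(cases)
assert sum(by_measure.values()) == len(cases)

# A sponsor that also folds tiering exceptions into its prior-authorization
# measure (as two study sponsors did) overstates that measure:
double_counted = by_measure["prior_authorization"] + by_measure["tiering_exception"]
print(double_counted)  # 800 reported, versus 500 actual prior authorizations
```

With explicit, mutually exclusive definitions for each measure, such double counting and omissions would surface immediately as a mismatch against the overall request total.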
In its 2007 compliance audits of five PDP sponsors, CMS found numerous violations of Part D standards. The agency used an audit protocol that examined 13 elements related to the coverage determination process and 13 elements of the appeals processes. CMS auditors reported that the number of violations across sponsors ranged from 15 to 26 specific coverage determination and appeals process requirements. CMS has required sponsors to fix the violations by adopting corrective action plans. Areas of sponsor noncompliance ranged from incomplete written policies and procedures to delays in authorizing drug coverage after the IRE approved an expedited request. Auditors found that some sponsors did not notify beneficiaries of coverage decisions within the required time frames. Several sponsors were cited for not using CMS-approved decision notices; such notices must explain the reasons for denying requests or inform beneficiaries of their appeal rights. Other sponsors did not have policies to use physicians to review appeals of coverage requests denied for a lack of medical necessity. Table 3 shows those audit elements for which CMS found at least four of the five sponsors noncompliant. As of October 2, 2007, each of the five sponsors had submitted to CMS corrective action plans to remediate the identified deficiencies, which CMS was in the process of reviewing. A number of the audit findings indicate that the publicly reported performance metric on sponsor timeliness may not accurately reflect sponsors’ adherence to the requirement to automatically forward cases to the IRE. In reviewing case files, for example, CMS found that sponsors inconsistently forwarded standard coverage determination cases to the IRE when they did not meet the required CMS time frame, with one of the sponsors providing CMS with a written statement acknowledging that it had not forwarded any cases to the IRE for review during the audit period.
Another two sponsors inappropriately allowed themselves more time to process certain coverage determination requests by starting their coverage determination review only after they received a supporting statement from the physician. In a separate initiative, CMS has worked with a selected group of sponsors to improve their performance on coverage determinations and appeals. Using a collaborative approach to performance improvement, CMS has conducted evaluations of two sponsors with comparatively high reversal rates at the IRE level of appeal to identify reasons why the IRE often did not agree with these sponsors’ prior coverage decisions. After examining a random sample of IRE case files for each sponsor in 2006, CMS identified several process-related issues that each sponsor could improve and provided feedback in the form of recommendations to each sponsor. For example, at one sponsor, CMS found that in about two-thirds of the reviewed cases, the sponsor should have done a better job of obtaining and assessing documentation of the evidence to support the request. The agency recommended that the sponsor revise certain forms in order to obtain all the information needed to make appropriate coverage determination decisions. CMS officials told us that both sponsors improved their performance by increasing the number of cases in which the IRE agreed with their decisions. As of September 2007, CMS was completing its evaluation of a third sponsor that did not receive a three-star designation for the performance metric based on the 2006 data. In the Part D program, beneficiaries’ access to prescription drugs is a function not only of whether a particular drug is on a plan’s formulary and whether it is subject to utilization management tools, but also how plan sponsors make individualized coverage decisions when requested. The Medicare drug benefit allows sponsors to operate in a regulated but flexible environment.
Thus, sponsors in our study follow similar procedural steps but apply discretion in making coverage determinations and appeal decisions. Administrative barriers in the appeals process can have implications for beneficiaries’ drug coverage. Efforts to implement the requirement that prescribing physicians be formally appointed beneficiary representatives with a signed AOR form in order to initiate standard appeals have been cited as an impediment to the appeals process. We found evidence that missing AOR forms have caused delays and some dismissals in cases being considered. A more streamlined approach that reduces AOR paperwork by quickly identifying those beneficiaries who wish to initiate an appeal could improve the process while maintaining physician involvement. While CMS has improved its efforts to inform beneficiaries about sponsors’ performance, its oversight efforts remain mixed. The agency has begun to hold sponsors accountable for maintaining compliance with coverage determination and appeals requirements. Agency auditors cited sponsors for widespread deficiencies and have required them to revise procedures to better serve beneficiaries. However, CMS lacks the data it needs to routinely monitor coverage determination and appeals requests and approvals across all sponsors. The agency has not taken steps necessary to ensure that sponsors report data consistently. To improve the Medicare Part D coverage determination and appeals processes, we recommend that the Administrator of CMS: reduce the need for completed AOR forms by requiring sponsors and the IRE, upon receipt of standard appeal requests submitted by prescribing physicians without completed AOR forms, to telephone beneficiaries to determine whether they wish to initiate the appeal, and ensure that sponsor-reported data used for monitoring coverage determination and appeals activities are accurate and consistent by providing specific data definitions for each measure. 
In written comments on a draft of this report, CMS remarked that our review presents a balanced evaluation of Part D coverage determination and appeals procedures and the associated data reporting procedures, and does an excellent job of highlighting various challenges in the Part D appeals process. (See app. II.) The agency reported that it is exploring the adoption of one of the report’s recommendations and is in the process of implementing the other. In addition to comments on each of our recommendations, CMS provided detailed, technical comments that we incorporated where appropriate. CMS stated that it intends to consider our recommendation that the need for a signed AOR form be reduced through a process where sponsors call beneficiaries when physicians request appeals on their patients’ behalf. However, it noted that it was not certain whether any change to the current policy could be implemented without modifying the statutory and regulatory provisions associated with the AOR requirement. The agency pointed out that physician representation of beneficiaries is limited by law because only a Medicare Part D eligible individual can bring an appeal at the IRE level. Therefore, CMS said that it is reviewing the current legal requirements about making appeal requests to determine whether changes are appropriate and necessary. CMS added that it intends to work with physician groups to ensure that physicians promptly submit any needed AOR forms. We are pleased that CMS is considering how it can implement our recommendation to address the difficulties regarding the AOR requirement. In making this recommendation, we considered relevant statutory and regulatory provisions and found no limitations that would preclude its adoption by CMS. Our recommendation would reduce the need for AOR forms by requiring that sponsors and the IRE determine at the outset whether beneficiaries want to initiate their appeals or have physicians do so on their behalf. 
If it is determined that the beneficiary is requesting the appeal, an AOR form would not be needed and the sponsor or IRE could immediately process the request. However, if sponsors or IRE find that beneficiaries want their physicians to initiate the appeal for them, then completed AOR forms would still be required. We have slightly reworded our recommendation, to clarify our intent and eliminate any ambiguity, and included the revised language in the final report. CMS agreed with our recommendation to ensure that sponsor-reported data are accurate and consistent by providing specific data definitions for the coverage determination and appeals measures. The agency noted that it has taken steps to modify the Part D Plan Reporting Requirements guidance on data element definitions. It plans to reinforce this guidance during upcoming calls with Part D sponsors, as well as in memoranda to sponsors, Frequently Asked Questions documents, and conference presentations. In addition, to minimize data entry errors, CMS has implemented data edit rules that will, among other things, reject a value that exceeds an expected range. It also developed procedures for sponsors to correct previously submitted information. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We will then send copies to the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Kathleen King at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III. 
In addition to the contact named above, Rosamond Katz, Assistant Director; Lori Achman; Todd Anderson; Hazel Bailey; Krister Friday; Lisa Rogers; and Jennifer Whitworth made major contributions to this report.
Under the Medicare Part D program, prescription drug coverage is provided through plans sponsored by private companies. Beneficiaries, their appointed representatives, or physicians can ask sponsors to cover prescriptions restricted under their plan--a process known as a coverage determination--and can appeal denials to the sponsor and the independent review entity (IRE). GAO was asked to review (1) the processes for sponsors' coverage determination decisions and the approval rates, (2) the processes for appealing coverage denials and the approval rates at the sponsor and IRE levels, and (3) the Centers for Medicare & Medicaid Services' (CMS) efforts to inform the public about sponsors' performance and oversee sponsors' processes. GAO visited seven sponsors that account for over half of Part D enrollment. GAO also interviewed and obtained data from CMS and IRE officials. Sponsors in our study address coverage requests for drugs with restrictions using processes that allow for prompt decisions, apply a range of criteria, and have resulted in approvals of most cases. To minimize the amount of time needed to make a determination, study sponsors use automated systems to compare the patient information they receive from prescribing physicians against preset coverage criteria. The coverage criteria for specific drugs incorporate Medicare requirements--such as whether the drug use is excluded from coverage under Medicare Part D--and discretionary components--such as whether a less expensive alternative drug has been tried and failed. Some study sponsors indicated they feel pressure to make decisions within the CMS-required time frames even when all pertinent patient information from physicians is not at hand. In reviewing a sample of 421 case files, GAO found that overall, study sponsors approved about 67 percent of the coverage determination requests, ranging from 57 percent to 76 percent. 
The process for conducting appeals allows staff not involved in the previous case review to make better-informed decisions by considering additional supporting evidence. At the first level of appeal, sponsor staff evaluate any corrected or augmented evidence to see if coverage criteria have been met. At the second level of appeal, IRE staff consider the information the sponsor reviewed, along with any additional support that may be available. In many cases, appeals result in new interpretations of whether the requested drug should be covered. CMS appeals data show that, from July 2006 through December 2006, the median approval rate across all Part D sponsors was 40 percent; from July 2006 through June 2007, appeals to the IRE received full or partial approval in 28 percent of cases. For some standard appeals, missing appointment of representative (AOR) documentation contributed to delays in sponsor-level appeals decisions and dismissals of IRE appeals cases. Some study sponsors have developed "workarounds" to eliminate the need for the completed AOR form. CMS has improved its efforts to inform beneficiaries about sponsors' performance, but its oversight of sponsors is hindered by poorly defined reporting requirements. CMS developed two performance metrics on sponsors' timeliness and the outcomes of their coverage decisions. The agency improved the way it displays this information on the Medicare Web site in late 2007. In addition, CMS requires that sponsors report data on various measures of coverage requests and approvals. However, the agency has provided minimal guidance on the types of cases to be included in each coverage determination measure. As a result, our study sponsors reported data differently to CMS, hindering the agency's ability to adequately monitor sponsors' activities. Finally, CMS has conducted several audits and found that sponsors were noncompliant with a number of specific requirements. 
Areas of sponsor noncompliance ranged from incomplete written policies and procedures to delays in authorizing drug coverage after the IRE approved an urgent request.
The WTO administers rules for international trade, provides a mechanism for settling disputes, and offers a forum for conducting trade negotiations. Such negotiations periodically involve comprehensive “rounds,” with defined beginnings and ends, in which a large package of trade concessions among members is developed and ultimately agreed on as a single package. A total of eight rounds have been completed in the trading system’s 56-year history. Each of the last 3 rounds cut industrial nations’ tariffs by about one-third overall. WTO membership has increased since the organization’s creation in 1995 to 146 members, up from 90 contracting parties of the General Agreement on Tariffs and Trade (the WTO’s predecessor) when the Uruguay Round of negotiations was launched in 1986. WTO membership is also diverse in terms of economic development, consisting of most developed countries and numerous developing countries. The WTO has no formal definition of a “developing country.” However, the World Bank classifies 105 current WTO members, or approximately 72 percent, as developing countries. In addition, 30 members, or 21 percent of the total, are officially designated by the United Nations as “least developed countries.” The ministerial conference is the highest decision-making authority in the WTO and consists of trade ministers from all WTO members. The outcome of ministerial conferences is reflected in a fully agreed-upon ministerial declaration. The substance of these declarations is important because it guides future work by outlining an agenda and deadlines for the WTO until the next ministerial conference. The WTO General Council, made up of representatives from all WTO members, implements decisions that members adopt in between ministerial conferences.
Decisions in the WTO are made by consensus—or absence of dissent—among all members rather than by a majority of member votes, as in many other international organizations. At the fourth ministerial conference in Doha, Qatar, in November 2001, WTO members were able to reach consensus on a new, comprehensive negotiating round, officially called the Doha Development Agenda. The Doha Round is the first round of global trade negotiations since the conclusion of the Uruguay Round in 1994. The Doha Declaration sets forth a work program for the negotiations on agriculture, services, nonagricultural market access, and other issues. In addition, the work program emphasizes the development benefits of trade and the need to provide assistance to developing countries to help them take advantage of these benefits. The Doha Declaration also sets forth a structure and series of interim deadlines for the negotiations. Specifically, it established a Trade Negotiations Committee (TNC) open to representatives from all WTO members to oversee the negotiations, as well as several subsidiary bodies. In addition, it laid out several deadlines and other milestones through the next ministerial conference by which time negotiators were to make decisions on issues under negotiation. In the months following Doha, WTO members agreed that the next ministerial conference would occur in Cancun, Mexico, in September 2003. Figure 1 presents key milestones through the Cancun Ministerial Conference. The Doha Declaration also set several general goals for the next (Cancun) ministerial conference, namely, to take stock of progress at midpoint of the Doha negotiations, to provide necessary political guidance, and to make decisions as necessary. However, at their fifth ministerial conference held in Cancun, Mexico, from September 10 to 14, 2003, WTO ministers were neither able to achieve these goals nor bridge wide differences on individual negotiating issues.
They concluded the conference with only an agreement to continue consultations and convene a meeting of the General Council by mid-December 2003 to take actions necessary to move toward concluding the negotiations. The Cancun Ministerial Conference provided an opportunity for both symbolic and practical progress in the Doha Round of negotiations. These opportunities were of heightened importance because negotiators had by their own admission failed to make sufficient progress to meet interim deadlines set out in the Doha Declaration, at least in part because members were awaiting the results of the agricultural reform efforts in the EU. Consequently, real give-and-take did not truly begin until the final weeks before the ministerial, leaving little time to bridge the substantial differences that existed on key issues. The September 2003 WTO Ministerial Conference held in Cancun, Mexico, had symbolic and practical importance for the Doha Round of negotiations. On the symbolic level, several WTO officials we met prior to the meeting noted that the Cancun Ministerial Conference might be a means to regain the momentum needed to bring the Doha Round to a successful conclusion. The Doha Round promised to be the most comprehensive round of global trade negotiations yet, involving a commitment to further liberalize trade, update trade rules, and further integrate developing countries into the world economy. The Cancun Ministerial Conference occurred at roughly the midpoint in the 3-year negotiations. However, based on our meetings with country delegations and WTO officials in Geneva and public statements by WTO officials, on the eve of the ministerial there was a sense that true negotiations had not really begun.
In particular, although WTO member governments had succeeded in actively submitting and discussing many proposals to achieve the general goals laid out at Doha, they had been less successful in narrowing their differences on these proposals or coming up with workable plans for developing specific national commitments (or schedules) to lower trade barriers. WTO members held differing views on the symbolic importance of the Cancun Ministerial Conference. For instance, U.S. and some other member country officials, as well as WTO officials, expressed hope that the Cancun Ministerial Conference would create the political will to achieve a meaningful and ambitious agreement by the deadline that would benefit all participants. WTO officials we spoke with, for example, stressed that Cancun needed to provide a “boost” of fresh momentum to the flagging talks. Other members planned to use the meeting to focus on the centrality of agriculture reform. However, some members downplayed the symbolic importance of the ministerial and viewed it merely as an opportunity to take a mid-point assessment of the negotiations. At a practical level, Cancun was viewed as critical to provide negotiators with direction in key areas that had thus far eluded consensus, according to WTO and member country officials. With just 16 months before the agreed-upon deadline of January 1, 2005, for concluding the negotiations, working-level progress in resolving outstanding issues was effectively stalled. Breaking the logjam hinged upon receiving clear ministerial direction in several key areas. For example, guidance was needed on the specific goals and methods that would be used to liberalize trade in agriculture. Progress on narrowing substantive differences in advance of the Cancun ministerial proved slow. As late as July 2003, observers and participants in the negotiations noted that WTO members were simply restating long-held positions on key issues and had yet to engage in real negotiations. 
For instance, in July 2003, the WTO Director General said that negotiators had been waiting to see what others are willing to offer without showing flexibility themselves. The chairmen of some of the negotiating groups repeated this sentiment in their statements to the July meeting of the Trade Negotiations Committee. (See app. II for a discussion of significant events in the WTO negotiations before and during the Cancun Ministerial Conference.) A key factor hindering the progress of Doha Round talks had been the pace and extent of reform of the EU’s Common Agricultural Policy (CAP). Agriculture was considered by many WTO members to be a linchpin to achieving progress in all other areas of the Doha negotiating agenda. After considerable internal debate, on June 26, 2003, the EU agreed to CAP reform. Among other things, the reform would ensure that for many agricultural products, the amount of subsidy payments made to farmers would be independent from the amount they produce. Yet even after the EU CAP reform was announced, other members stated that they were still waiting to see the EU’s internal reform translated into a significantly more ambitious WTO negotiating proposal. The EU resisted making a new WTO proposal, arguing that in effect it was being forced to pay for reform twice by reforming its internal policy once and then being asked by WTO negotiators to reform again to be able to conclude an agreement. Another factor hindering overall progress was perceived linkages between various negotiating topics. The Doha Round’s outcome is to be a “single undertaking,” meaning a package deal involving results on the full range of issues under negotiation such as agriculture, services, and nonagricultural market access. As a result, trade-offs are expected to occur among issues to accomplish an overall balance satisfactory to all members. Thus, it is difficult to make progress on one issue without achieving progress on other issues.
For example, many developing nations consider agriculture their number one priority and have been unwilling to make offers to open up their services markets until they see more progress on agricultural reform. On the other hand, the EU and Japan, who expect to make concessions on agriculture, wanted a commitment at Cancun to begin negotiations on several issues that were new to the trading system--investment, competition (antitrust), government procurement, and trade facilitation—which are collectively known as the Singapore issues. By our mid-July meetings in Geneva it was clear that expectations for Cancun were being scaled back because of the overall lack of progress. Instead of issuing “modalities” (numerical targets, timetables, formulas, and guidelines for countries’ commitments), for example, WTO officials and country representatives we met with suggested that “frameworks,” or more general guidance on what types of concessions each participant would make, might be a more appropriate goal for Cancun. In other words, instead of ministers agreeing on some specific target, such as “all nations will cut tariffs by one-third,” they would agree to something more general, such as all nations are expected to cut tariffs by a certain method and with the following kinds of results (e.g., substantially liberalizing trade and reducing particularly high tariffs). The negotiations began to make some progress at the end of July, when trade ministers from a diverse group of approximately 30 WTO members met in Montreal, Canada, to discuss the status of the negotiations. During this meeting, ministers encouraged the United States and the European Union to provide leadership in the negotiations by narrowing their differences on the key issue of agriculture. The United States and the European Union agreed to do so, and in August they presented a joint framework on agriculture.
In addition, in late August, the General Council removed a potential obstacle to progress at the Cancun ministerial by approving an agreement involving implementation of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) and public health declaration adopted in Doha. The Doha TRIPS and public health declaration directed WTO members to find a way for members with insufficient pharmaceutical manufacturing capacity to effectively use the flexibilities in TRIPS to acquire pharmaceuticals to combat public health crises. U.S. and WTO officials and representatives from other WTO members we met with had identified this as an important symbolic issue for the WTO as an institution, especially for WTO members from Africa. They had urged its prompt resolution to create a more favorable climate for the Cancun ministerial meeting. Despite resolving the TRIPS issue and attaining some movement on agriculture in the final weeks before the Cancun ministerial, differences persisted on other key issues in the negotiations on the eve of the meeting. The Cancun Ministerial Conference failed to resolve substantive differences on key issues: agriculture (including cotton), the “Singapore issues,” market access for nonagricultural goods, services, and development issues that included special and differential treatment for developing countries. Key countries’ principal positions were far apart, and certain aspects of each issue were particularly contentious. Although many looked to the Cancun ministerial to provide direction that would enable future progress, it ultimately ended without resolving any of the members’ wide differences on these issues. Agriculture is central to the Doha Round of trade negotiations, both in its own right and because many WTO members say that progress on other negotiating fronts is not possible without significant results in agriculture. 
The Doha Declaration calls for negotiations to achieve fundamental agricultural reform through three “pillars” or types of disciplines (rules): (1) substantially improving market access; (2) reducing, with a view to phasing out, all forms of export subsidies (export competition); and (3) substantially reducing trade-distorting domestic support (subsidies). Additionally, the declaration imposed two interim deadlines on WTO agriculture negotiators: a March 31, 2003, deadline for establishing modalities (rules and guidelines for subsequent negotiations), and a deadline to submit draft tariff and subsidy reduction commitments at the Cancun meeting. Negotiators missed both deadlines. As a result, the goal for the Cancun ministerial was to adopt a framework and set new deadlines for subsequent work on the three main pillars of the agriculture negotiations. The delay in EU CAP reform, as well as the 2002 U.S. Farm Bill, which was projected to increase U.S. agricultural support spending, complicated resolution of these issues. Many WTO members felt this bill undermined the relatively bold negotiating stance the United States assumed in the WTO, which called for making substantial reductions in trade-distorting domestic support and tariffs. Various countries or groups of countries differ in their objectives for the agriculture negotiations. The Cairns Group of net agriculture exporting countries and the United States envisioned an ambitious agricultural liberalization agenda. The United States proposed a two-phase process to reform agriculture trade in the WTO. The first phase of the proposal would eliminate export subsidies and reduce and harmonize tariff and trade-distorting domestic support levels over a five-year period. The second phase of the proposal is the eventual elimination of all tariffs and trade-distorting domestic support. Other developed country members such as the EU, Japan, Korea, and Norway favored a more limited agenda.
This group and several other small developed countries argued for flexibility to maintain higher tariffs in order to protect their domestic agriculture production. Finally, many developing countries wanted a reduction in developed country agriculture subsidies and market access barriers while, at the same time, wanting less ambitious obligations to liberalize their own market access barriers. Domestic support. Arguing that such programs resulted in lower world prices and displacement of their producers from global markets, many developing countries forcefully pressed the developed countries to make significant cuts to their trade-distorting domestic support programs, particularly the United States and the European Union, which in 1999 totaled $16.9 billion and 47.9 billion euros ($45 billion at 1999 exchange rates), respectively. Although they agreed in principle on the desirability of reducing trade-distorting subsidies, both the United States and the European Union resisted further disciplines on their abilities to support domestic agriculture in ways that present WTO rules consider to be non-trade-distorting. For example, they opposed calls to cap and reduce subsidies that are not currently subject to spending limits under the WTO. The EU argued that its CAP reform already addressed developing country demands by making domestic support payments independent of production, in principle making the payments less trade distorting, even though total expenditures will not be lowered. However, several WTO members indicated that the reforms were not ambitious enough. In addition, the United States said that it would not reduce its domestic support for agriculture unless other members, namely the EU, made cuts that substantially reduced the wide disparities in allowed trade-distorting domestic support. The United States also demanded that developing countries provide something in return for cutting subsidies, such as lowering their tariffs on U.S. exports. Market access.
The United States viewed attaining additional market access as an important objective in the negotiations. U.S. and Cairns Group negotiators proposed a harmonizing formula for tariff reduction known as the Swiss formula that would subject the higher tariffs to larger cuts. Other members, including the EU, Japan, and Korea, favored an across-the-board average cut and a minimum cut per product (tariff line). As illustrated in figure 2, this approach would generally result in less liberalization than if the harmonizing formula were used. Many developing countries, and the Cairns Group, proposed substantially less liberalizing developing country tariff reductions, in part to counter continued use of subsidies in developed countries. Finally, according to their official statements, numerous smaller developing countries emphasized the importance of trade preferences to, and the negative effects that erosion of trade preferences would have on, smaller, more vulnerable economies. Export competition. The United States, the Cairns Group, and many developing countries wanted to eliminate export subsidies for agricultural products. The EU, the primary employer of export subsidies, envisioned a substantial reduction and elimination of export subsidies for certain products but not a total elimination. It also tied any cuts in export subsidies to the adoption of stricter disciplines on U.S. food aid and export credits. Like the United States, the EU also sought stricter disciplines on export state-trading enterprises. As previously noted, the United States and the European Union had responded to calls to provide leadership by narrowing their differences on the three pillars of agricultural reform before the Cancun meeting. In a mid-August framework, the U.S. and the EU proposed reductions in trade-distorting domestic agricultural support, with those members with higher subsidies making deeper cuts and a three-pronged strategy to reduce agricultural tariffs.
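The contrast between the harmonizing (Swiss) formula and an across-the-board average cut, described in the market access discussion above, can be sketched numerically. In this illustration, the Swiss coefficient of 25 and the 36 percent linear cut are assumptions chosen for the example, not figures from the negotiations:

```python
def swiss_formula(tariff: float, coefficient: float = 25.0) -> float:
    """Harmonizing ('Swiss') formula: new = (a * t) / (a + t).
    The coefficient a (assumed to be 25 here) caps every resulting
    tariff below a, so higher tariffs receive proportionally larger cuts."""
    return coefficient * tariff / (coefficient + tariff)

def linear_cut(tariff: float, cut: float = 0.36) -> float:
    """Across-the-board cut: every tariff reduced by the same share."""
    return tariff * (1 - cut)

# Compare the two approaches across low, medium, and high tariff lines.
for t in (10.0, 50.0, 150.0):  # ad valorem rates, in percent
    s, l = swiss_formula(t), linear_cut(t)
    print(f"{t:5.0f}% tariff -> Swiss {s:5.1f}% ({1 - s / t:.0%} cut), "
          f"linear {l:5.1f}% (36% cut)")
```

Under these assumptions a 10 percent tariff is cut by about 29 percent while a 150 percent tariff is cut by about 86 percent, which illustrates why agricultural exporters favored the harmonizing formula and members protecting high tariff peaks resisted it.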
With respect to export subsidies, the framework eliminated export subsidies for some agricultural products and committed members to reduce budgetary and quantity allowances for others. Reaction to the framework was negative and swift, in part because it implied less ambitious reductions in domestic support and market access barriers than the original U.S. proposal, which U.S. officials emphasize is still on the table, and did not completely eliminate export subsidies. For example, within a week a newly formed group of developing countries, commonly referred to as the Group of 20 (G-20) for its 20 members, presented a counter framework that implied deeper cuts in domestic agricultural subsidies by developed countries, a tariff reduction formula that allowed developing countries to make less substantial cuts, and the total elimination of export subsidies. The draft ministerial declaration presented to ministers in late August contained elements of both proposals. Although extensive discussions on agriculture did occur at Cancun, they ultimately failed to bridge the substantial gaps that remained. Sharp divisions remained on the extent to which the developing countries should be required to open their markets and whether it was possible to eliminate all export subsidies. On domestic support, divisions remained concerning the extent of cuts in trade-distorting domestic support and the question of whether additional disciplines on non-trade-distorting support were desirable. Furthermore, the prominence of the G-20 of developing countries relative to the more diverse Cairns Group at the meeting imposed a North-South dynamic on the agriculture negotiations.
Specifically, several developed countries criticized the G-20’s negotiating tactics, including their failure to offer market access concessions such as tariff cuts in exchange for substantial cuts in developed country subsidies and their demands for a long list of changes to the Conference Chairman’s draft text, even though very little time remained to negotiate. Meanwhile, representatives from the G-20 argued that the developed country proposals and framework offered very modest gains and maybe even some steps backward in efforts to liberalize world agricultural trade. In addition to the three main agricultural pillars that were the agreed focus of the Doha agriculture negotiations, the Sectoral Initiative in Favour of Cotton put forward by four West and Central African countries figured prominently in the Cancun ministerial discussions. The initiative was added to the ministerial agenda in the weeks leading up to Cancun and does not appear in the Doha Declaration. The proposal by these cotton exporting countries singled out three WTO members--the United States, the European Union, and China--as the primary cotton subsidizers. They claimed that these subsidies were driving down world prices and that many of their farmers no longer found it profitable to produce cotton, a concern given their contention that cotton plays an essential role in their development and poverty reduction efforts. The cotton initiative’s guidelines called for immediately establishing a mechanism at Cancun to eliminate all subsidies on cotton and a transitional mechanism to compensate farmers in cotton-producing least developed countries (LDC) that suffered losses in export revenue as a result of cotton subsidies. Specifically, the proposal called for reducing all cotton support measures by one third annually for 3 years, thereby eliminating all support for cotton by year-end 2006. 
In addition, the proposal stipulated that any cotton-subsidizing WTO member would be a potential contributor to a proposed transitional compensation mechanism. The transitional compensation mechanism would last up to 3 years. The sectoral initiative did not specify the total amount of compensation to be paid but cited a recent study that the direct and indirect losses for the 3 years—1999 to 2002—were $250 million and $1 billion, respectively, for the countries of West and Central Africa. The cotton initiative was discussed at length in Cancun; however, there was no resolution. The reason for the failure was that certain members had difficulty supporting a transitional compensation mechanism within the context of the WTO and saw the issue of cotton as hard to separate from the larger agricultural agenda. U.S. efforts to respond to the region’s immediate concerns on cotton by broadening the original initiative made little headway, despite some evidence that falling world cotton prices were also attributable to other factors such as competition from manmade fibers. The failure to resolve the cotton initiative to the satisfaction of the developing countries had a negative impact on the overall tone of the Cancun meeting, because certain developing countries viewed the issue as a litmus test for the WTO and thought the proposed response fell far short of addressing their pressing needs. The issue also took on symbolic importance, becoming a political rallying point for a number of countries’ frustrations. The Doha Declaration established a deadline for deciding how to handle negotiations aimed at adding four new issues, called the Singapore issues, to the global trading system. The four Singapore issues are investment, competition (antitrust), transparency (openness) in government procurement, and trade facilitation (easing cross-border movement of goods). 
According to the draft ministerial text presented to ministers before Cancun, ministers were to decide by explicit consensus the basis for starting actual negotiations on these issues, or to continue exploratory discussions on them. However, the wording of the Doha Declaration left unclear what was to specifically occur in Cancun. Certain members thought the declaration implied that formal negotiations were to begin in Cancun and that the only issue for Cancun was the type of negotiation. Others thought the declaration implied that formal negotiations could only begin if there were explicit consensus among the members at Cancun to do so. Key players’ positions were divided into three main camps. A group of developed and developing country members led by the European Union, Japan, and South Korea strongly advocated starting negotiations on all four issues, including investment and competition, which were particularly controversial. These nations had succeeded at Doha in getting the four issues included as part of the round’s overall package but only on the condition that explicit agreement be reached at Cancun on the parameters to negotiate these issues. Many developing countries, on the other hand, had consistently expressed their strong opposition to the inclusion of the Singapore issues in the WTO negotiating agenda and several viewed Cancun as their opportunity to block negotiations on these issues. For example, India argued that for many of these countries, undertaking new obligations in these areas would have presented too great a burden, since they were still having difficulty implementing their Uruguay Round obligations. They also were not convinced of the development benefits that would result. A third group of countries, including the United States and some developing nations, were willing to negotiate but wanted each issue considered on its own merit. 
However, some of the developing countries linked their willingness to negotiate with progress in other areas such as agriculture. The United States had been pushing the issues of transparency in government procurement and trade facilitation. The United States was also willing to negotiate on competition policy and investment, but had some concerns that included whether negotiations could call into question its enforcement of strong antitrust laws and match the high standards that are a feature of its bilateral investment agreements. The discussions at Cancun on the Singapore issues were contentious and contributed to the breakdown of the ministerial. Early in the week, a group of 16 developing countries argued that because there was no clear consensus on the modalities for the negotiations as required by the Doha Declaration, the matter of whether to add these four new issues to the negotiations should be dropped from the Cancun agenda and moved back to Geneva for further discussion. The draft text issued later that week called for beginning negotiations on two issues and setting deadlines for trying to reach agreement on possible bases for addressing the other two issues. This text was discussed on the last day of the conference, but in the end, compromise on this divisive subject proved impossible. Lowering barriers to market access of nonagricultural goods was also an important point of contention leading into the Cancun ministerial. The Doha Declaration stated that negotiations on nonagricultural market access should be aimed at reducing or, as appropriate, eliminating tariffs for nonagricultural products, including reducing or eliminating tariff peaks and tariff escalation, as well as nontariff barriers. 
The Doha Declaration also said that the liberalization of nonagricultural goods should take fully into account the principle of special and differential treatment for developing countries, including allowing for “less than full reciprocity” in meeting tariff reduction commitments. Because WTO members missed a May 31, 2003, deadline for reaching agreement on modalities for nonagricultural market access that would govern preparation of national schedules of barrier-cutting commitments, the goal for Cancun was to establish a “framework” or basic approach to tariff and nontariff barrier liberalization that would then be supplemented by more detailed modalities later. Even though there are important differences in the situations and individual positions of various developing countries--a fact the United States likes to emphasize--WTO members were largely divided along North-South lines in nonagricultural market access talks going into the Cancun meeting. The United States and other developed countries were pushing for substantial cuts in tariffs and wanted the high overall tariffs of key developing countries like India and Brazil to come down. For example, India has an average bound tariff of 34 percent on nonagricultural products, while China and Côte d’Ivoire have average bound tariffs of 10 percent or less. The United States also sought a high level of ambition in opening markets and expanding trade through a harmonizing formula that would cut tariffs in all countries. In addition, it wanted to reduce wide disparities among members’ tariffs as well as reduce low tariffs. Publicly, the developing countries were fairly united in saying that any liberalization needed to leave them sufficient flexibility to address their special needs and should involve greater cuts by richer countries than poor ones.
In May 2003, the chairman of the negotiating group on market access issued a “chair’s proposal,” attempting to reconcile WTO members’ various positions, including on tariff cutting formulas, sectoral liberalization, and special and differential treatment. Coming into Cancun, two major proposals for cutting tariffs--one from the market access chairman and another from the United States, EU, and Canada--were under active discussion, though all of the numerous original proposals submitted by WTO members remained “on the table.” These two proposals differed in the type of mathematical formula that would be used to determine how much each member would be expected to reduce its tariffs. The proposed tariff formula developed by the chairman as a compromise would largely differentiate among countries according to their current overall average bound tariff rate. Specifically, a country with higher average bound tariffs would have to reduce its bound tariffs at a lesser rate than a country with lower average bound tariffs. To use an illustrative example, Brazil, with higher overall bound rates to begin with, would have to cut a 10 percent bound tariff on a particular product to approximately 7.5 percent, or by 25 percent. Malaysia, with lower overall bound tariffs, would have to slash a 10 percent bound tariff to 6 percent, or by 40 percent (see fig. 3). Proponents argue that this formula would recognize each country’s differing starting points for liberalization while still accomplishing significant cuts in bound tariff rates. Some officials counter that average bound tariffs are not a direct or good indicator of development status or needs. Moreover, they expressed concern that this formula would require more reduction from nations that have lower overall bound tariffs. The United States was concerned that this would effectively punish countries that have previously liberalized, while rewarding countries that had not liberalized. 
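The chairman’s compromise formula (from the May 2003 “chair’s proposal,” commonly referred to as the Girard formula) can be sketched numerically. A minimal illustration, assuming the conventional form t1 = (B·ta·t0)/(B·ta + t0), where ta is a country’s average bound tariff and B a negotiated coefficient; the value B = 1 and the average-tariff inputs below are chosen so the output matches the Brazil and Malaysia figures cited above, and are not taken from the proposal itself:

```python
def chairs_formula(t0, ta, b=1.0):
    """Chair's (Girard-style) compromise: the cut applied to a tariff t0
    depends on the country's own average bound tariff ta, so countries
    with higher overall bound rates make proportionally smaller cuts."""
    return (b * ta * t0) / (b * ta + t0)

# Brazil-like averages (~31% bound): a 10% tariff falls to ~7.6%,
# roughly the 25 percent cut cited in the text.
print(round(chairs_formula(10, ta=31), 1))
# Malaysia-like averages (~15% bound): a 10% tariff falls to 6.0%,
# the 40 percent cut cited in the text.
print(round(chairs_formula(10, ta=15), 1))
```

Because ta appears in both numerator and denominator, a country with a higher overall average bound rate keeps more of each individual tariff, which is exactly the differentiation described above and the feature to which the United States objected.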
In addition, the United States was concerned that this proposal was based on average bound tariff rates, which would not necessarily lead to lower applied rates. Many developing countries’ bound tariff rates are higher than the tariffs they currently apply. For example, Brazil has an average bound tariff of 31 percent and a 15 percent average applied rate. Real liberalization will only occur if countries reduce bound tariffs to below currently applied rates. On the other hand, the United States, the European Union, and Canada developed an alternative framework for negotiations. This framework calls for all countries to use a single harmonizing formula, such as a Swiss formula, where the coefficient of reduction does not depend on a country’s average bound tariff rate. For example, if a Swiss formula using a coefficient of 8 were used, all countries would have to cut a 10 percent tariff on a particular product to 4 percent. Nevertheless, the U.S.-EU-Canada framework does foresee some differentiation among countries. For example, it suggested that countries could be rewarded for “good behavior” by giving credits to countries that commit to do things that are considered sound trade policy, such as putting a ceiling on, or binding, a high percentage of their tariffs. According to U.S. Trade Representative (USTR) officials, the credits would allow them to lower tariffs by a lesser amount than that implied by the formula. Developing countries, however, say this approach is inconsistent with the Doha mandate, which states developing countries as a whole will be allowed to make lesser commitments. In addition, they fear that they would have to cut tariffs much more than developed countries in absolute terms. As a result, just prior to the Cancun meeting, a few nations such as India reasserted their interest in an across-the-board or linear approach to cutting tariffs on nonagricultural goods, similar to that depicted in figure 2.
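The contrast between the Swiss formula in this framework and a simple linear (across-the-board) cut can be sketched as follows. This is a minimal illustration, assuming the conventional Swiss form t1 = a·t0/(a + t0); with a coefficient of 8, a 10 percent tariff falls to about 4.4 percent (the “to 4 percent” figure above being rounded), and the 40 percent linear rate is purely illustrative, not a negotiated number:

```python
def swiss(t0, a=8.0):
    """Harmonizing Swiss formula: higher tariffs take proportionally
    larger cuts, and no resulting tariff can exceed the coefficient a."""
    return a * t0 / (a + t0)

def linear(t0, rate=0.40):
    """Across-the-board cut: the same proportional reduction for every
    tariff, so relative disparities between tariffs are preserved."""
    return t0 * (1 - rate)

for t0 in (10, 34):  # 34% is India's average bound rate, per the text
    print(f"{t0}% -> Swiss {swiss(t0):.1f}%, linear {linear(t0):.1f}%")

# Cuts to bound rates only liberalize once they fall below applied rates.
# Brazil, per the text: 31% average bound vs. 15% average applied.
print(swiss(31) < 15)
```

Note that the Swiss coefficient caps the post-cut tariff (no result can exceed a), which is why it “harmonizes” disparate tariffs, whereas a linear cut preserves the wide disparities among members’ tariffs that the United States wanted to narrow.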
Under a linear approach, all tariffs would be cut at the same rate and therefore the results would not be harmonizing. The discussions at Cancun never got into the detailed proposals that had been debated before Cancun and failed to bridge these gaps on tariff formulas. At Cancun, WTO members were also considering the complete elimination of tariffs in to-be-agreed-upon sectors, including ones that are particularly important to developing countries. However, the issues of choice of sectors and participation in the elimination remained controversial. Many developing countries wanted sectoral elimination to be voluntary. Also under debate was whether sectoral elimination should result in zero tariffs, harmonization, or a differentiated outcome for developed versus developing countries. The United States and many other countries thought that sectoral initiatives were an important way to supplement the general tariff cutting formula and to achieve their ambitious liberalization objectives. The United States wanted to make sure all countries competitive in a given sector would participate in sectoral elimination regardless of their level of development. Consistent with the Doha mandate, WTO members were also considering special treatment for developing countries and new entrants such as recently acceded members in implementing their tariff commitments. This included longer periods to implement the tariff reductions, differentiation in how sectoral initiatives would be applied, and not making reduction commitments mandatory. The developed countries recognized that many nations, particularly least developed and other vulnerable economies, need flexibility to deal with sensitive sectors and other adjustment needs. However, they opposed across-the-board flexibility for all developing countries, including the more advanced ones. At Cancun, some steps were taken to address the inherent trade-off between committing to ambitious tariff liberalization and retaining flexibility. 
The World Bank and the International Monetary Fund, for example, provided assurances that they were prepared to work with developing nations to help offset lost tariff revenue and address concerns related to erosion of preferences. Nevertheless, ministers did not resolve the debates over tariff-cutting formulas, the mandatory nature of sectoral elimination, and the degree of flexibility to accord to developing countries. These issues remained unresolved because progress was neither made nor expected on agriculture or the Singapore issues. The Doha Declaration set a deadline for WTO members to complete the work they had initiated in January 2000 to further open services markets under the General Agreement on Trade in Services. In contrast with agriculture and industrial market access, the services group had already agreed on how to conduct these talks, which are under way. The goal for Cancun—particularly for the United States—was to energize the ongoing services negotiations and to set a deadline for submission of improved offers to lower barriers to services. According to a WTO official, only 38 (counting the EU as one member) of the WTO’s 146 members had submitted offers before the Cancun ministerial. Although 18 of these offers were from developing countries, as defined by the World Bank, many large developing countries such as India, South Africa, Egypt, and Brazil had not submitted offers. Some of these nations, as well as others such as Argentina, China, and Mexico, had their own market access ambitions, including further easing of the temporary movement of their services suppliers across national borders. Services negotiations regained some momentum before Cancun due to two important events. First, the language contained in the draft Cancun Ministerial Declaration incorporated several of the demands from developing countries such as the need to conclude negotiations in rule-making in areas such as emergency safeguard measures for services.
Second, the adoption of modalities on September 3, 2003, for the special and differential treatment of LDCs was expected to boost the participation of LDCs in the services negotiations. However, little progress was made in the services negotiations at Cancun because advances on other issues under negotiation, especially in agriculture, were needed in order to enable further movement. Many developing countries were greatly concerned about receiving special treatment in the form of making lesser commitments in ongoing global trade talks and receiving assistance in implementing existing WTO agreements. Global trade rules have long included the principle that developing countries would be accorded special and differential treatment consistent with their individual levels of development, including the notion that they would not be expected to fully reciprocate tariff and other concessions made by developed countries. In the Doha Declaration, WTO members agreed that all special and differential treatment provisions in existing WTO agreements should be reviewed with a view to strengthening them in order to make them more precise, effective, and operational. The declaration requires the WTO’s Committee on Trade and Development to identify those special and differential treatment provisions that are mandatory and those that are nonbinding and to consider the legal and practical implications of turning the nonbinding ones into mandatory obligations. According to USTR officials, part of the continuing difficulty of this work has been the problems of separating work on special and differential treatment from the work underway in actual individual negotiating groups (e.g., agriculture) and the lack of progress on related issues such as graduation/differentiation, which is also part of the Committee on Trade and Development’s work programme. 
Also, as part of the Doha Declaration, WTO members committed themselves to address outstanding implementation issues and set a December 2002 deadline for recommending appropriate action on them, but they missed that deadline. Although there was agreement on a number of implementation issues at Doha, outstanding issues remain in areas like trade-related investment measures, anti-dumping rules, and textiles. These issues have proved divisive, even among developing countries. At Cancun, ministers were asked to endorse and immediately implement a subset of the numerous proposals for special and differential treatment as well as to set a new deadline for resolving outstanding special and differential treatment and implementation issues. For some developing countries, progress on these issues at Cancun was key to their willingness to negotiate further market liberalization in other areas. In addition, the African Group in particular wanted to better ensure that the needs of the WTO’s poorest member countries would be satisfactorily addressed in the overall package of Doha Round results. However, developed and developing countries fundamentally disagreed in their interpretation and use of special and differential treatment. For example, government officials from several developed countries echoed their desire to better target special and differential treatment by adopting a needs-based approach. According to these officials, special and differential treatment provisions should be tailored to match the various levels of development and the particular economic needs of developing countries. Many developing countries, on the other hand, wanted an expansion of special and differential treatment. Their expansionist ambition was reflected in 88 proposals for additional special treatment obligations, mostly from the African Group and the group of least developed countries.
Among other things, the proposals sought additional technical support and called for an exemption for developing countries and LDC members from requirements to comply with existing WTO obligations that they believed would be prejudicial to their individual development, financial, or trade needs or beyond their administrative and institutional capacity. Developed countries and more advanced developing countries considered many of these demands to be problematic because some changes proposed would alter the balance of the Uruguay Round agreements. In the end, however, developed countries and some developing countries appeared ready to move forward on some of these proposals at Cancun, had the ministerial proved successful. The General Council Chairman worked carefully with a diverse group of key countries to put this package together. A total of 24 special and differential treatment proposals, including some related to implementation issues, were included in the draft Cancun Ministerial Declaration sent to Cancun from Geneva. An additional three proposals were added during the course of the Cancun meeting. While some developing nations argued that these proposals were of little economic value and felt agreeing to these proposals at Cancun would create a false sense of progress, other developing countries were willing to accept the package in return for assurances of future advances. As for implementation issues, discussions on developing country proposals in this area were overshadowed at Cancun by another issue--a push by the EU and other European countries to secure greater recognition and protection of geographical indications (place names) for specialty agricultural products. 
Many countries, including the United States, Australia, New Zealand, and some Latin American nations, strongly resisted, because they produce and market products under widely used terms such as “Champagne” and “Roquefort cheese” that the European nations were seeking to protect and monopolize. In the end, no agreement was reached at Cancun on special and differential treatment or on implementation issues. Despite a full ministerial agenda of issues requiring resolution, the only actual decision taken relating to the negotiations at Cancun was that the WTO’s General Council should meet by December 15, 2003. The closing session on Sunday, September 14, adopted a short ministerial statement expressing appreciation to Mexico for hosting the talks, welcoming Cambodia and Nepal to the WTO, and stating that participants had worked hard to make progress in the Doha mandate but that “more work needs to be done in some key areas to enable us to proceed toward the conclusion of the negotiations.” To achieve this, the concluding ministerial statement directed officials to continue working on outstanding issues with a renewed sense of urgency and purpose. The failure to make progress in resolving the major substantive issues at Cancun left the Doha Round in limbo and resulted in a major setback that will make attaining an overall world trade agreement by January 1, 2005, more difficult, according to WTO Director General Supachai and key WTO member country representatives. Specifically, no further negotiating sessions have been scheduled, although informal efforts to get the talks back on track have continued. 
The Cancun ministerial declaration directed the Chairman of the General Council to coordinate this work and to convene a meeting of the General Council at the senior officials level no later than December 15, 2003, “to take the action necessary to move toward a successful and timely conclusion of the negotiations.” However, on December 9, WTO General Council Chairman Perez del Castillo notified the heads of delegation that there was a lack of “real negotiation” or “bridging of positions” in the informal talks. Because he believed insufficient convergence had occurred to take “necessary action to conclude the round,” he presented a Chair’s report outlining key issues and possible ways ahead. He also recommended that all negotiating bodies be reactivated in early 2004, after new chairs are chosen. The December 15, 2003, General Council meeting generally accepted this recommendation, according to the chairman’s closing remarks. According to government officials, trade negotiations observers, authoritative reports, and GAO observations and analysis, several other factors contributed to the Cancun meeting’s collapse. The ministerial agenda was complex, and unwillingness by some nations to work with the text presented by the General Council Chairman hampered progress. In addition, the large number of participants and emerging coalitions influenced the meeting’s dynamic. Competing visions and goals for the Doha Round, particularly between developed and developing countries, and a high-profile initiative on cotton, fueled North-South tensions. Meanwhile, the WTO’s cumbersome decision-making process did not lend itself to building consensus. The agenda for Cancun was not only complex, it was also overloaded. This situation was due to the stalemate that had characterized the Doha Round up to Cancun, in which the negotiators had missed virtually all self-imposed deadlines.
The Doha Declaration already had specified that certain items were to be on the agenda for the next (Cancun) ministerial, such as deciding how to handle negotiations on the Singapore issues (see fig. 4). But as interim deadlines came and went without agreement, other issues were added to the Cancun agenda. Although the goal of reaching agreement on these issues for achieving trade liberalization had eluded negotiators during the previous 22 months of work in Geneva, they proposed to reach agreement on all of them in Cancun, even though they had just 5 days to do so. Adding to the complexity of the task, the Cancun ministerial began without an agreed-upon text as a starting point for discussion. In late August, the General Council Chairman issued a revised draft ministerial declaration. This version included draft frameworks for modalities for agriculture, nonagricultural market access, and the Singapore issues. These draft frameworks still included multiple bracketed items (items to be agreed upon) and lacked specific details in several areas. However, not all WTO members agreed to use this draft as the basis for ministers’ discussion in Cancun. Efforts to produce a new text of a ministerial declaration from which to work took considerable time at Cancun. The first 3 days of the 5-day conference were devoted to formal and informal meetings. The Conference Chairman, the Mexican Foreign Minister, finally presented a draft text at a meeting on the fourth day of the 5-day conference (September 13). Just 30 hours remained until the scheduled close of the conference, yet ministers needed 6 hours to study the new text. The meeting to obtain reactions to the text took another 6 hours. More than 115 nations spoke, one after the other, with most ministers criticizing various points of the draft and repeating well-established positions. A WTO spokesman later reported that the only consensus evident that night was that the text was unacceptable to many WTO members. The U.S.
Trade Representative advocated moving forward when he took the floor about halfway through the meeting. He expressed willingness to work with the draft, urged a collective sense of responsibility, and warned fellow trade ministers that they should not let the perfect become the enemy of the good. Certain other members such as Sri Lanka, Uruguay, Chile, and China were among the few other countries that made positive statements. After another several hours of critical interventions, however, the Conference Chairman closed the meeting, expressing concern that with less than 15 hours remaining, members did not appear to be willing to reach a consensus. A WTO spokesperson later reported that they could see a clear problem emerging because differences in positions were hardening. Achieving consensus at Cancun was a very complex undertaking due to the large number of participants and the emerging coalitions that affected the meeting’s dynamics. Participants in the WTO talks at Cancun included 146 members with vastly different economic interests, levels of development, and institutional capacities. Moreover, the number of delegates at Cancun was substantially larger than the number of delegates at the Doha ministerial, which occurred shortly after September 11, 2001. Nongovernmental organizations (NGO) were also participating. The 1,578 registered NGO participants included business as well as a range of public interest (labor, environment, consumer, development, and human rights) groups, and both were active in seeking to influence the negotiations. For example, NGOs, such as the development advocacy group Oxfam, underwrote the literature being distributed on the cotton initiative, and poverty relief organization Action Aid’s press release immediately called the Conference Chairman’s draft text “a stab in the back of poor countries.” The emergence of two developing country coalitions also affected the dynamics of the Cancun meeting. 
Brazil was widely seen as the leader of the G-20 group of developing countries pressing for bigger cuts in developed country agricultural subsidies. The United States and the European Union, traditionally at odds over agriculture, complained that the group was engaged in confrontational tactics that were more directed at making a point than at making a deal. However, the group claimed that it took a businesslike and professional approach to the negotiations and had succeeded in highlighting the centrality of agricultural reform to the Doha Round’s success. Another strong coalition that emerged in Cancun was a group of 92 countries made up of the African, Caribbean, and Pacific (ACP), African Union, and LDC countries. This group’s main objective was to ensure that the WTO’s poorest countries’ interests were taken into account. In the end, their views were decisive, as their refusal to accept negotiations on the Singapore issues and other members’ insistence on negotiating these issues triggered the Conference Chairman’s decision to end the ministerial. In addition to a complex agenda and volatile meeting dynamics, the participants appeared to have competing visions of what the round had promised. Noting that the negotiations were titled the “Doha Development Agenda,” developing countries still expected that the talks would focus primarily on their needs. For many, this meant progress on agriculture, while others stressed meaningful accommodation of their special needs. U.S. officials, on the other hand, told us that they would like to see further differentiation of the as-yet-undefined term “developing countries.” Some U.S. officials told us that developing countries’ reluctance to open their markets is contrary to sound development policies, because lowering trade barriers is pro-, not anti-development.
Moreover, various studies had shown that a significant share of the estimated economic benefits of the Doha Round would be due to an expansion of trade between developing countries as they reduced their trade barriers to each other’s goods. As the days of the ministerial wore on without consensus, frustrations increased. The developed nations accused the developing countries of grandstanding and of not making an effort to reach agreement. Officials from some developed countries complained, for example, that developing countries had not approached the negotiations in the spirit of reciprocity but instead were focused on making demands without expecting to make concessions. In essence, developing countries were not seen as negotiating in good faith. Developing countries also felt frustrated and believed that the lack of progress in the negotiations was due to an absence of political will by the developed countries to fulfill the promises at Doha. For example, developing countries believed that the developed countries had not offered enough on agriculture, the issue that many developing countries cared about the most. The differences in expectations are illustrated in reactions to the cotton initiative, which served as a focal point for concerns about developed country agriculture subsidies. The WTO Director General personally urged ministers to give the matter full consideration and held consultations with the interested parties in an attempt to forge a compromise. While the African proponents believed that agreement on this issue would have been a sign of good faith, the United States viewed the request for monetary compensation as inappropriate and better suited to a development assistance venue. When the Conference Chairman issued his draft text, many countries reacted negatively to the proposed compromise on cotton. Brazil, speaking on behalf of the G-20, referred to the proposal as totally insufficient. 
The Chairman’s text did not mention the elimination of subsidies but instead suggested that West African countries diversify out of cotton. The fact that the cotton initiative is one of the four key issues that the General Council Chairman has focused on after the ministerial, along with agriculture, industrial market access, and the Singapore issues, demonstrates its continued importance. Finally, certain participants have also cited the WTO’s cumbersome process for achieving consensus as contributing to the collapse of the talks. The WTO operates by consensus, meaning that any one participant opposing an item can block agreement. At his closing press conference in Cancun, the EU Trade Commissioner expressed frustration that there was no reliable way within the WTO to get all 146 member nations to work toward consensus. Relatively few formal meetings involving all members actually occurred in Cancun, although plenary sessions and working groups took place. Moreover, formal negotiating sessions involving all members were not conducive to practical discussion or to achieving consensus. Instead, they often involved formal speeches. As a result, small group meetings were used to obtain frank input and conduct actual negotiations. Although efforts were made to keep the whole membership involved through daily heads of delegations meetings, certain members expressed a sense of frustration and confusion, as epitomized by some members’ indignation at the subjects being discussed during the green room meeting on the last day. The Conference Chairman’s decision to make the controversial Singapore issues, and not agriculture, the first and last item for discussion on the last day of the ministerial conference caused a backlash by a group of developing countries that ultimately precipitated the meeting’s collapse.
As opposed to the day-to-day negotiations, which are overseen in Geneva by the Director General acting as the head of the TNC and by the General Council Chairman, WTO ministerial conferences are unusual in that the Conference Chairman is the only person with the power to call and adjourn meetings, to invite participants, and to choose the topics for discussion. At Cancun, after the heads of delegations meeting the night before, the Chairman decided, after consulting with certain ministers, that he needed to see if there was any way to reach consensus on the Singapore issues, which seemed to him to be intractable. As a result, he convened a closed-door meeting of about 30 ministers broadly representative of the whole WTO membership on the morning of the final day of the conference to discuss them. According to reports, the EU representative reiterated at the beginning of this final, closed-door meeting his long-standing position that all four Singapore issues must be negotiated. Some developing countries, on the other hand, opposed starting negotiations on those issues. As the meeting progressed, the EU agreed to drop two (investment and competition), maybe even three (government procurement), of the Singapore issues—leaving trade facilitation on the table. This EU concession reportedly prompted some traditional opponents such as Malaysia and India to show some flexibility. The Chairman then recessed the meeting and asked the ministers to confer with other ministers who were not present in the “green room” to see whether there was consensus to negotiate on at least one of the Singapore issues. During the break, at a meeting of the African, Caribbean, Pacific (ACP), LDC, and African Union members, many of the ministers present voiced surprise and indignation over the sequencing of topics under discussion in the closed-door meeting. They were upset that the Singapore issues were being discussed rather than agriculture.
The Singapore issues were seen as rich members’ issues, while agriculture and cotton resonated with the poorer countries. Finally, members of the ACP/African Union/LDC coalition believed that no deal was better than a bad deal, and a deal on the Singapore issues in the absence of any agreement on agriculture or the cotton initiative was deemed a bad deal. As one country member rhetorically asked during the debate, “What are we taking home for the poor? We must say no.” When the 30-country meeting reconvened, Botswana reported the decision of the ACP countries to the group, indicating that they could not accept negotiation on any of the Singapore issues, including trade facilitation, because “not enough was on the table.” According to reports, Korea, on the other hand, said it could not accept dropping any of the Singapore issues. The Conference Chairman then said that consensus could not be reached and decided to close the conference without agreement on any issue. At a press briefing later that afternoon after the collapse of the talks, the Chairman explained that he had begun with the Singapore issues because of the dissent voiced on that issue during the meeting the night before. He further explained that he had decided to end the ministerial because it was clear to him that consensus could not be reached. Some countries, including certain EU member states and some developing countries, however, complained about what they saw as a precipitous decision to end the talks. The Cancun Ministerial Conference highlighted the challenge of meeting the high and sometimes competing expectations created at Doha of both developing and developed countries, particularly with respect to negotiations on critical agricultural issues. While the issue has been contentious for many years, the Cancun experience demonstrates that forward movement on agriculture is central to the possibility of making further progress in the Doha Development Round.
Although the Cancun meeting ended because of the lack of consensus on negotiating the Singapore issues, what many developing nations wanted from the developed world were concessions on agriculture, in particular dramatic reductions in export subsidies and domestic support. At this point, it is difficult to predict how the setback at Cancun will ultimately affect the Doha Development Round negotiations. There are some signs that both developed and developing countries are rethinking their positions. The United States and the European Union have shifted away from taking an active leadership role, but have recently signaled some willingness to engage in further negotiations. Although a number of G-20 members have abandoned the group or made statements undercutting its unanimity of views, the group’s founders still appear intent on playing a leadership role in pushing for global agriculture reform. While progress remains possible, political events scheduled to occur over the next year may add uncertainty to the negotiating process. For example, in the United States, the 2004 presidential and congressional elections are looming, and protectionist pressures are rising along with the U.S. trade deficit. Elections in Europe and in one of the largest developing countries, India, may also have an impact on the negotiations. Finally, how WTO members handle long-simmering disputes on such topics as corporate tax subsidies and steel could also affect the negotiating climate. In this regard, President Bush’s recent decision to lift safeguard tariffs on steel may be viewed as an important development. As we have noted in previous reports, the WTO has often found it difficult to achieve consensus and bridge its members’ strongly held, disparate views on politically sensitive issues, in part because it is an ever-growing, more complex, and diverse organization.
Various devices, such as interim deadlines, were put in place for the first stage of Doha negotiations to redress these significant organizational challenges, but they fell short of achieving desired progress. The WTO Director General and General Council Chairman have been given the green light to work with WTO members to narrow differences on key issues in hopes that they can still salvage an agreement by the January 1, 2005, deadline. However, the failure to achieve substantive progress by mid-December casts further doubt. One important consideration is that the delay in WTO negotiations could intensify momentum for concluding bilateral, subregional, or regional trade agreements. This has already happened in the United States, which, though remaining engaged in the WTO, has recently concluded three such agreements (Chile, Singapore, and Central America), is currently conducting negotiations on three others (Australia, Morocco and Southern African Customs Union), and has committed to begin negotiations on five others (Dominican Republic, Bahrain, Thailand, Panama, and the Andean region) as well as the 34-nation Free Trade Area of the Americas. Additional possibilities are in the wings. The effect that a proliferation of these kinds of agreements would have on the WTO is unclear. We requested comments on a draft of this report from the U.S. Trade Representative, the Secretary of Commerce, the Secretary of Agriculture, and the Secretary of State, or their designees. USDA’s Foreign Agricultural Service agreed with our report’s factual findings and analysis. Commerce’s Deputy Assistant Secretary for Agreements Compliance provided us with technical oral comments on the draft, which we incorporated into the report as appropriate. The Secretary of State declined to comment on our report. The U.S. Trade Representative provided formal comments (see app. IV), indicating that many of the issues identified in GAO’s analysis are consistent with the U.S. 
assessment of issues that must be addressed to put negotiations back on track in 2004. He stressed the United States is ready to exercise leadership provided other countries are prepared to negotiate meaningfully. The Assistant U.S. Trade Representative for WTO and Multilateral Affairs and other USTR staff also provided us with oral comments. While agreeing with much of the report’s information, they provided a number of factual and technical comments, which we incorporated as appropriate. In addition, USTR staff expressed some concern that the overall tone of the report placed too much emphasis on the importance of the Cancun ministerial itself and on the North-South divide, particularly given the meeting’s mandate from Doha and individual country positions. While we stand by the overall balance struck in our report, we did add some information to reflect the diversity within developing country ranks evident on certain issues. We are sending copies of this report to interested congressional committees, the U.S. Trade Representative, the Secretary of Agriculture, the Secretary of Commerce, and the Secretary of State. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347. Additional GAO contacts and staff acknowledgments are listed in appendix V. The Chairman of the Senate Committee on Finance and the Chairman of the House Committee on Ways and Means asked us to analyze (1) the overall status of the World Trade Organization’s (WTO) negotiations on the eve of the WTO’s ministerial conference at Cancun, Mexico, in September 2003; (2) the key issues for the Cancun Ministerial Conference and how they were dealt with at Cancun; and (3) the factors that influenced the outcome of the Cancun Ministerial Conference. 
We followed the same overall methodology to complete the first two objectives. From the WTO, we analyzed the 2001 Doha Ministerial Declaration and related documents, the July and August versions of the draft Cancun Ministerial Declaration, and other speeches and proposals from WTO officials, as well as some negotiation proposals from WTO members. From the WTO, U.S. government agencies, and foreign country officials, we obtained background information regarding negotiating proposals and positions. We met with a wide variety of U.S. government and private sector officials, foreign government officials, and WTO officials. Before the Cancun ministerial, we met with officials from the Office of the U.S. Trade Representative (USTR) and the U.S. Departments of Commerce, Agriculture, and State. We also met with officials from the Grocery Manufacturers of America and the Pharmaceutical Research and Manufacturers of America. In addition, we met with representatives from developed and developing countries in Washington, D.C., including Australia, Malaysia, Brazil, and Costa Rica. Further, we traveled to the WTO’s headquarters in Geneva, Switzerland, where we met with WTO officials and member country representatives from developed and developing countries, including Australia, Canada, the European Union (EU), Japan, Brazil, China, Malaysia, Mexico, and India. To analyze the factors that influenced the outcome of the Cancun ministerial, we attended the Cancun Ministerial Conference in Mexico in September 2003. In Cancun, we attended USTR congressional briefings and went to press conferences and meetings open to country delegates. Also, we reviewed domestic and international news media reports, news releases on developments at the ministerial conference, and statements about the outcome of the ministerial conference from the WTO, the U.S. and foreign governments, and other international organizations. For our tariff analysis, the base tariff rate is t0 and the final tariff rate resulting from the negotiations is t1.

The expression relating the two tariff rates, as proposed by the Chair, is:

t1 = (B × ta × t0) / (B × ta + t0)

where
t1 is the final rate, to be bound in ad valorem terms;
t0 is the base rate;
ta is the average of the base rates; and
B is a coefficient with a unique value to be determined by the participants.

For purposes of our analysis, we assumed a coefficient of 1 would be used for all countries. However, the Chair’s proposal does not specify the value of the coefficient and leaves open the possibility that a different coefficient could be used. We then examined how final rates would differ for countries with average tariffs of 4 percent, 15 percent, and 30 percent. We selected the United States, Malaysia, and Brazil as examples of countries that respectively fit into those categories on the basis of WTO annual World Trade Report data on average overall bound tariff rates. We performed our work from June to October 2003 in accordance with generally accepted government auditing standards.

Trade Negotiations Committee (TNC) meets: The Chairman of the TNC, which had been established to oversee the Doha Round of global trade talks, reported that while the work of the TNC and its subsidiary bodies intensified in 2003, real negotiations had not yet begun.

WTO General Council Chairman prepares draft ministerial declaration: The text is intended as a first draft of an operational text through which ministers at Cancun would register decisions and give guidance and instruction in the negotiations. It reflects a lack of progress on key issues, as shown by its skeletal nature and the bracketed (disputed) items relating to “modalities” (rules and guidelines for subsequent negotiations) for agriculture, nonagricultural market access, and the Singapore issues (investment, competition, government procurement, and trade facilitation).

Montreal mini-ministerial occurs: Approximately 30 trade ministers from WTO members meet in Montreal to prepare for the Cancun Ministerial Conference.
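The tariff-reduction formula in the methodology above, t1 = (B × ta × t0) / (B × ta + t0), can be illustrated numerically. The sketch below is a hypothetical example, not part of the GAO analysis: it assumes a coefficient of 1 (as in our analysis) and an illustrative base tariff line of 50 percent for countries whose average bound tariffs are 4, 15, and 30 percent.

```python
def final_bound_rate(t0, ta, b=1.0):
    """Final bound rate under the Chair's proposed formula:
    t1 = (b * ta * t0) / (b * ta + t0),
    where t0 is the base rate, ta is the average of the base rates,
    and b is the coefficient (assumed to be 1 in our analysis)."""
    return (b * ta * t0) / (b * ta + t0)

# Illustrative only: a hypothetical 50% base tariff line in countries
# with different average bound tariff rates (coefficient = 1).
for country, ta in [("United States", 4.0), ("Malaysia", 15.0), ("Brazil", 30.0)]:
    t1 = final_bound_rate(t0=50.0, ta=ta)
    print(f"{country} (average {ta:.0f}%): 50% base rate falls to {t1:.1f}%")
```

Note that with a coefficient of 1, the final rate can never exceed a country's average base rate, so countries with low average tariffs make proportionally deeper cuts on high-tariff lines.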
At the meeting, the ministers encourage the United States and the EU to narrow their differences on the central issue of agriculture.

U.S. and EU submit joint agriculture framework: The framework includes reductions in domestic support, with those members with higher subsidies making deeper cuts, a three-pronged strategy to reduce tariffs, and reduction of export subsidies.

Group of 20 developing countries submit agriculture counterproposal: The proposal includes substantial cuts in domestic subsidies by developed countries, a tariff reduction formula that allows developing countries to make less substantial cuts, and the elimination of export subsidies.

General Council Chairman and WTO Director General submit revised draft ministerial declaration: Now 23 pages, the text continues to reflect significant differences between members on many issues. It includes frameworks for modalities in agriculture and nonagricultural market access as well as proposed modalities on each of the Singapore issues. Additionally, it includes a section related to a proposal by Burkina Faso, Benin, Chad, and Mali to eliminate cotton subsidies and provide compensation to the four countries while the subsidies are phased out.

General Council approves Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) and public health solution: WTO members complete discussions mandated in Doha to make it easier for poorer countries to import cheaper generic drugs made under compulsory licensing if they are unable to manufacture the medicines themselves. The United States, previously the only member preventing an agreement, joins the consensus after the General Council Chairman provides a statement regarding WTO members’ shared understanding of the interpretation and implementation of the decision.

Day 1 of Cancun Ministerial Conference: Mexican President opens the ministerial conference, and ministers start work on key issues. The Conference Chairman appoints ministers to facilitate discussions on key issues—agriculture, nonagricultural market access, development issues, Singapore issues, and other issues. Ministers also debate a proposal on cotton from four African members.

Day 2: The first informal heads of delegation meeting occurs, and the Director General is appointed to facilitate discussions on the cotton initiative. Group discussions also take place on agriculture, nonagricultural market access, the Singapore issues, development issues, and other issues.

Day 3: A second informal heads of delegation meeting occurs in the morning and includes reports by the facilitators on each issue. Working group meetings continue throughout the day and conclude with a heads of delegation meeting at night. The Conference Chairman commits to draft a new version of the ministerial text and circulate it by the middle of the following day.

Day 4: The Conference Chairman distributes a new draft ministerial text at a meeting with heads of delegations and then asks them to study the text and reconvene in the evening. After ministers reconvene, many criticize the draft text, arguing that their particular concerns have not been included. At the close of the meeting, the Conference Chairman warns ministers that if the ministerial conference fails, the negotiations might take a long time to recover.

Day 5: The Conference Chairman begins closed-door consultations with 30 ministers representing a wide range of regional and other groups on the subject of the Singapore issues. During these consultations, positions shift, allowing the possibility of dropping two or possibly three of the issues. The Conference Chairman then suspends the meeting to allow participants to meet with their respective groups. When they return, there is no consensus, and the Conference Chairman decides to close the ministerial conference.
Ministers subsequently approve a ministerial statement that instructs members to continue working on outstanding issues and to convene a meeting of the General Council by December 15 to take necessary action.

Developing country status in the WTO brings certain rights. For example, provisions in some WTO agreements provide developing countries with the right to restrict imports to help establish certain industries, longer transition periods before they fully implement agreement terms, and eligibility to receive technical assistance. See article XVIII of the General Agreement on Tariffs and Trade (GATT), articles IV, XII, and XXV of the General Agreement on Trade in Services, and articles 66 and 67 in the Agreement on Trade-Related Aspects of Intellectual Property Rights. In addition, developing countries may benefit from the Generalized System of Preferences, under which developed countries may offer nonreciprocal preferential treatment (such as zero or low duties on imports) to products originating in those developing countries that the preference-giving country so designates. See Decision on Differential and More Favourable Treatment, Reciprocity and Fuller Participation of Developing Countries, adopted under GATT in 1979. The World Bank groups economies by gross national income per capita: “low income,” $735 or less; “lower middle income,” $736 - $2,935; “upper middle income,” $2,936 - $9,075; and “high income,” $9,076 or more. Under the World Bank definition, the WTO membership currently has 105 developing economies, 30 of which are defined by the United Nations as LDCs. This includes 44 low income countries; 35 lower middle income countries; and 26 upper middle income countries. There are 40 high income WTO members (not counting the EU’s separate membership). The Cancun ministerial also recognized that upon ratification in their national parliaments, Cambodia and Nepal, both of which are LDCs, will accede to the WTO. In addition to the individuals named above, Jason Bair, Etana Finkler, R.
Gifford Howland, David Makoto Hudson, José Martinez-Fabre, Rona Mendelsohn, Jon Rose, and Richard Seldin made key contributions to this report.
Trade ministers from 146 members of the World Trade Organization (WTO), representing 93 percent of global commerce, convened in Cancun, Mexico, in September 2003. Their goal was to provide direction for ongoing trade negotiations involving a broad set of issues that included agriculture, nonagricultural market access, services, and special treatment for developing countries. These negotiations, part of the global round of trade liberalizing talks launched in November 2001 at Doha, Qatar, are an important means of providing impetus to the world's economy. The round was supposed to be completed by January 1, 2005. However, the Cancun Ministerial Conference ultimately collapsed without ministers reaching agreement on any of the key issues. GAO was asked to analyze (1) the divisions on key issues for the Cancun Ministerial Conference and how they were dealt with at Cancun and (2) the factors that influenced the outcome of the Cancun Ministerial Conference. Ministers attending the September 2003 Cancun Ministerial Conference remained sharply divided on handling key issues: agricultural reform, adding new subjects for WTO commitments, nonagricultural market access, services (such as financial and telecommunications services), and special and differential treatment for developing countries. Many participants agreed that attaining agricultural reform was essential to making progress on other issues. However, ministers disagreed on how each nation would cut tariffs and subsidies. Key countries rejected as inadequate proposed U.S. and European Union reductions in subsidies, but the U.S. and EU felt key developing nations were not contributing to reform by agreeing to open their markets. Ministers did not assuage West African nations' concerns about disruption in world cotton markets: The United States and others saw requests for compensation as inappropriate and tied subsidy cuts to attaining longer-term agricultural reform. 
Unconvinced of the benefits, many developing countries resisted new subjects--particularly investment and competition (antitrust) policy. Lowering tariffs on nonagricultural goods offered the promise of increasing trade for both developed and developing countries, but still divided them. Services and special treatment engendered less confrontation, but still did not progress in the absence of the compromises that were required to achieve a satisfactory balance among the WTO's large and increasingly diverse membership. Several other factors contributed to the impasse at Cancun. Among them were a complex conference agenda; no agreed-upon starting point for the talks; a large number of participants, with shifting alliances; competing visions of the talks' goals; and North-South tensions that made it difficult to bridge wide divergences on issues. WTO decision-making procedures proved unable to build the consensus required to attain agreement. Thus, completing the Doha Round by the January 2005 deadline is in jeopardy.
Clear Air Force Station is the oldest missile warning site in North America. The installation supports the 13th Space Warning Squadron and the 213th Space Warning Squadron, Alaska Air National Guard. The 13th Space Warning Squadron is one of seven geographically separated units of the 21st Space Wing, Peterson Air Force Base, Colorado. Its mission is to provide combat capabilities through missile warning and missile defense and through space surveillance for the North American Aerospace Defense Command, U.S. Strategic Command, and the Missile Defense Agency. The mission of the 213th Space Warning Squadron is to operate and support the early warning radar. Since January 2001, these missions have been accomplished through the use of a Solid State Phased Array Radar System, which replaced the Ballistic Missile Early Warning System that had been in place since the installation became operational in 1961. The radar’s primary mission is to detect missile launches to determine whether there are incoming intercontinental ballistic missiles or sea-launched ballistic missiles threatening the United States or its allies. Its secondary mission is to detect, track, identify, and generate positional data for more than 9,500 manmade objects that are in orbit in space. Approximately 300 active-duty service members, Air National Guard personnel, DOD civilians, and contract employees support the missions at Clear Air Force Station. The developed portion of Clear Air Force Station can be separated into four main areas: (1) the composite area, where most administrative, recreational, and living quarters are located; (2) the camp area, where civil engineering, maintenance shops, and security police offices are located; (3) the radar site; and (4) the old technical site, where the old radar and associated support buildings and the plant are located.
Other facilities associated with the plant include a coal yard, a cooling pond, and a rail spur—owned and operated by the Air Force—along which coal is delivered to the installation by Air Force-owned and Air Force-operated locomotives. (See fig. 1 for a map of Clear Air Force Station.) The existing combined heat and power plant at Clear Air Force Station is owned by the Air Force and became operational in 1961. The plant burns coal in three coal-fired boilers to produce steam, which generates power through the use of steam turbine generators. As a byproduct of electricity generation, the plant delivers steam to the installation for heating via underground steam lines. The plant has three steam turbine generators, each capable of producing 7.5 megawatts of power, for a total capacity of 22.5 megawatts. The boilers and turbines are interconnected so that any boiler can be linked to any turbine generator (see fig. 2). The Air Force’s standard operating procedure is to run two boilers and two turbine generators at the plant concurrently in order to provide backup in the event that one component fails and to better control the power system’s operating frequency. According to Air Force officials, having readily available backup steam in the winter is important because the start-up time for a boiler is about 4 to 5 hours, which is enough time for water pipes to freeze and cause damage to facilities in the meantime. Each boiler and generator runs for 8 months each year, and the boilers are rotated out of service for inspection and maintenance. In addition to the redundancy provided by running two boilers and two generators, the installation has a 300-kilowatt generator capable of providing electricity to the composite area in the event of an outage. A separate emergency power plant, which became operational in July 2012, would provide electricity and electrical heat for the radar in the event of failure of the central power plant. 
The Air Force currently does not have backup for heat in the composite area, which hosts most administrative, recreational, and housing facilities on the installation. The energy demand at Clear Air Force Station has decreased from when the plant first became operational, due to the radar replacement in 2001. The previous Ballistic Missile Early Warning System radar required 5 megawatts of power, while the new Solid State Phased Array Radar System requires only 1.1 megawatts of power—approximately 80 percent less. Currently, the total energy demand at the installation ranges from 3 megawatts in the summer to 6 megawatts in the winter, of which approximately 1 megawatt is needed to run the plant itself. In recent years, the plant has burned an average of about 53,900 tons of coal per year. Due to the Air Force’s standard operating procedure of running two boilers and two turbine generators concurrently to provide redundancy and backup, the plant’s energy production typically exceeds the demand, resulting in excess steam and power. The excess power—that is, power that is not consumed—is delivered to a load bank, a device that converts the power to heat and dissipates the heat into the air. Air Force Space Command data show that, in 2012, the command led the Air Force in facilities energy consumption, with the highest energy costs and the greatest energy consumption per square foot—a metric used by the Air Force to ascertain energy efficiency—among the major commands (see fig. 3). Facilities in Air Force Space Command consume approximately 98 percent of the command’s energy because its missions are facility-centric. Clear Air Force Station consumes over 16 percent of the command’s facilities energy and has a facilities energy cost per square foot that is approximately twice the command’s average. Further, the installation’s average energy use per square foot is approximately seven times greater than the command’s average (1.39 million vs. 0.20 million BTUs).
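The percentage claims above can be cross-checked with simple arithmetic. The sketch below is a back-of-the-envelope check, not part of the GAO analysis; it uses only figures quoted in the text.

```python
# Radar power: old Ballistic Missile Early Warning System (5 MW)
# vs. new Solid State Phased Array Radar System (1.1 MW).
old_mw, new_mw = 5.0, 1.1
reduction = (old_mw - new_mw) / old_mw  # fraction of radar power no longer needed
print(f"Radar power reduction: {reduction:.0%}")  # 78%, i.e. "approximately 80 percent less"

# Energy use per square foot: installation vs. command average (million BTUs).
installation_btu, command_btu = 1.39, 0.20
ratio = installation_btu / command_btu
print(f"Energy intensity: about {ratio:.0f} times the command average")
```

The computed 78 percent reduction and roughly sevenfold intensity ratio are consistent with the rounded figures reported above.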
DOD defines facility energy to include energy needed to power fixed installations and nontactical vehicles. The Air Force has identified several projects that must be completed before it can close the power plant. These include constructing a new electric system connecting the installation to the local power grid, a heat system that will replace the heat function currently provided by the plant to the composite area, and a 1-megawatt backup generator to provide power to the composite area in the event of an outage on the local power grid. The portion of the power grid to which the installation will be connected is operated by Golden Valley Electric Association and is part of Alaska’s Railbelt transmission system, a single transmission line spanning 600 miles. According to a 2010 Department of Energy study, power exchanges among Alaska’s utilities are limited, and each public utility operates independently of others in different areas of Alaska. Golden Valley Electric Association is the only public utility located within the Clear Air Force Station service area. Figure 4 is a map of the Alaska Railbelt Transmission System near Clear Air Force Station that shows the locations of the installation, the Golden Valley Electric Association transmission line, and the proposed grid interconnection project. By September 2013, the Air Force had refined its total cost estimate for the transmission and heat-system projects. In late September 2013, the Air Force awarded a firm-fixed price order for an estimated $5.2 million against an existing General Services Administration contract with Golden Valley Electric Association, the local utility that provides power to the area where the installation is located. (The Air Force issued a 120-day suspension of work order for the contract in October 2013. The suspension was extended in February 2014 for another 60 days.)
As part of the Air Force contract, the utility will construct a switching substation at the power line closest to the installation and a 3-mile-long transmission line that will go from the switching substation to another substation that will be located on Clear Air Force Station. The transmission line and the switching substation will be owned and maintained by the utility. After construction, the Air Force will obtain electricity from the utility. Separately from the contract with Golden Valley Electric Association, but as part of the electric system, the U.S. Army Corps of Engineers plans to award a design-build contract for installation of the electrical intertie, which would include a transformer, switchgear, and a distribution substation, to distribute power from the new transmission line to facilities on the installation. The Air Force will own and manage the electrical intertie, and this portion of the power system will be located on the installation. The contract to be awarded by the U.S. Army Corps of Engineers would also include a new heat system to replace the heat function currently served by the plant and a 1-megawatt generator capable of providing power to the composite area in the case of an outage on the local power grid. Heat and backup power are already in place for the radar site. The steam heating system would be owned and managed by the Air Force. The Air Force has plans to carry out these projects primarily using fiscal year 2013 funds from the Energy Conservation Investment Program, which is overseen by the Office of the Secretary of Defense (OSD). To the extent that funds from that source do not fully cover the costs of the projects, Air Force officials have said that they plan to make up the difference by using savings from other fiscal year 2012 funds that were previously provided by OSD.
Air Force officials also said that costs may change as the Air Force enters into negotiations with contractors to refine the requirements of the components and design. A design-build contract is a contract with a single entity to deliver both design and construction of a project. The information provided in this report regarding cost is a subset of the information provided in our earlier, restricted report. The Clear Air Force Station plant is required to meet state and federal regulations related to air quality, among other environmental regulations. The plant currently operates under an air quality control operating permit issued by the Alaska Department of Environmental Conservation. This permit limits each of the three boilers at the plant to producing a maximum of 85,000 pounds of steam per hour. Further, under the permit, the Air Force has chosen to limit its coal consumption to fewer than 135,000 tons per year and has taken other steps to avoid being classified as a major source of hazardous air pollutants. The permit and the Air Force’s decision to limit its coal consumption effectively cap the plant’s current operations at 25 to 30 percent of its 22.5-megawatt capacity, or 5.6 to 7.0 megawatts. Figure 5 illustrates the different capacities of the plant and the current operation. Air Force officials told us that the plant has been operating at 5.7 to 7.0 megawatts for the past 10 years. If the Air Force should want to operate the existing plant at full capacity, it would have to address issues associated with regulations pertaining to air pollutants and air quality. For example, officials indicated that a new air model would have to be developed and reviewed by various environmental agencies to determine whether there was any deterioration in air quality at Denali National Park and Fairbanks, and the Air Force would potentially need to obtain a new permit. 
In order to obtain a new air permit, if needed, the operator of the plant may need to install or establish additional air quality controls and monitoring requirements. In assessing its options for power and heat generation at Clear Air Force Station, the Air Force undertook a variety of analyses, including—but not limited to—the feasibility study. The feasibility study estimated that the costs to close the plant would be less than half the costs to operate and maintain the plant as-is over the next 50 years. (The Air Force used a 50-year time frame because, according to the feasibility study, that is a standard lease period for an enhanced-use lease.) However, the Air Force could have included additional information in its feasibility study on the option to close the plant (e.g., heat system costs), particularly since this option was determined to be the most economical option for the Air Force and was identified as the option to follow if an acceptable lease offer was not received. Still, adding this information is unlikely to have materially influenced the Air Force’s choice of this option over the other options that remained after the enhanced-use lease and utilities privatization alternatives had been ruled out, since the expected cost savings differed so greatly between the closure option and the other remaining options. Further, the Air Force had additional noneconomic goals, such as reducing energy costs at the installation, which it included in its assessment and decision-making process. We assessed the Air Force’s processes against Air Force guidance on economic analyses and business-case analyses and its enhanced-use lease playbook, which is used to develop enhanced-use lease projects, and we found that the Air Force had generally followed its own guidance for preparing cost estimates and analyses of alternatives. As the Air Force narrowed the available options, it further refined its requirements and revised its cost estimates.
The feasibility study is one step in the Air Force’s process for developing an enhanced-use lease project. The study addresses the potential uses of the asset considered for the enhanced-use lease and determines the highest and best use for the property, taking into account mission-related constraints. As a guide for preparing project proposals, the Air Force uses an enhanced-use lease playbook that was originally developed by its real-property agency. The feasibility study also generally follows guidelines laid out in the Air Force instruction for economic analyses and the Air Force manual for business-case analyses, which provide guidance on developing analyses like those included in the feasibility study. Further, since the Air Force submitted the grid connection and heat plant project for funding under the fiscal year 2013 Energy Conservation Investment Program, the Air Force also used specific guidance for that program, which requires developing a life-cycle cost analysis for the project. The Air Force’s Enhanced-Use Lease Playbook identifies five phases of an enhanced-use lease project. The first phase, project identification, includes identifying non-excess real property that presents a potential lease opportunity. During the second phase, project definition, stakeholders determine the feasibility of the proposed project by evaluating potential risks and returns for the project. Key tasks from this phase include conducting a site-orientation visit and preparing the feasibility study. The third phase, project acquisition, analyzes the viability of the project from operational, force-protection, environmental, and financial standpoints and identifies the type of consideration that will be sought from the lessee, which can be cash or in-kind.
This phase includes developing a statement of need, hosting an Industry Day, advertising a request for qualifications, receiving proposals, and selecting the highest ranked offeror with which to conduct the lease negotiations. The final two phases—lease negotiation and closing and postclosing management—are undertaken after the third phase ends with the selection of a developer that will undertake the project. The Air Force took the following steps while developing the enhanced-use lease project for the power plant: Conducted a visit to Clear Air Force Station in May 2010, during which Air Force civil engineers performed a site survey of the installation’s water, waste treatment, and combined heat and power plant; conducted a site-orientation visit to the plant; and interviewed the local public utility, Golden Valley Electric Association. Prepared the feasibility study, which was finalized in November 2010. This study incorporated the status quo estimate that had been prepared by a contractor to assess the costs of operating the plant over the next 50 years in the same manner as in 2009 and replacing items with like items, as needed. The status quo estimate was the baseline against which five options for operating the plant were compared. Held an Industry Day on August 7, 2012, which was attended by representatives from several Alaska utilities, developers, and energy companies. During this event, the Air Force presented the enhanced-use lease opportunity, provided a tour of the power plant, and accepted questions from participants. The questions and the Air Force’s answers to those questions were subsequently posted on the website of the consulting firm the Air Force contracted with to support its execution of the enhanced-use lease project. Prepared a statement of need just prior to Industry Day.
The playbook indicates that the statement of need is to be revised based on feedback received during Industry Day or updates required from answers provided as part of a question-and-answer document. Posted a final request for qualifications on the Federal Business Opportunities website. The playbook indicates that the final request for qualifications is to be released within 3 weeks after Industry Day, to maintain interest and momentum from the event. It further notes that the response period is typically 6 weeks. The Air Force posted its request for qualifications on October 12, 2012, over 9 weeks after Industry Day. The deadline for proposals was December 7, 2012, or approximately 8 weeks later. The guidance also suggests that additional time may be given for more complicated projects, and Air Force officials told us they wanted to give potential lessees additional time to prepare their proposals. The Air Force did not get beyond these steps in the enhanced-use lease process because it did not receive any proposals. Air Force guidance on economic and business-case analyses provides information on developing cost estimates for certain projects. Air Force guidance for economic analyses identifies circumstances under which an economic analysis is required, including new projects when total investment costs equal or exceed $2 million (in fiscal year 2011 constant dollars) and for any utilities privatization project. It further lays out special instructions for energy projects, which are to be evaluated in constant dollars and to use Department of Energy indices, which are published annually, for energy prices. Further, in the case of lease-purchase decisions and private sector-financed leases or service contracts involving energy projects, Energy Conservation Investment Program projects are to have a simple payback of 10 years or fewer and a minimum savings-to-investment ratio of 1.25 to meet DOD criteria.
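The two DOD screening criteria just described can be sketched with simple formulas. The 1.25 savings-to-investment floor and the 10-year payback ceiling come from the report; the dollar inputs below are hypothetical values chosen only for illustration.

```python
# Sketch of the two Energy Conservation Investment Program screening metrics.
def simple_payback(investment: float, annual_savings: float) -> float:
    """Years of annual savings needed to recover the up-front investment."""
    return investment / annual_savings

def savings_to_investment_ratio(lifecycle_savings: float, investment: float) -> float:
    """Life-cycle savings divided by the up-front investment."""
    return lifecycle_savings / investment

investment_m = 12.0          # $M, hypothetical project cost
annual_savings_m = 1.5       # $M per year, hypothetical net savings
lifecycle_savings_m = 40.0   # $M, hypothetical discounted savings over the project life

payback = simple_payback(investment_m, annual_savings_m)
sir = savings_to_investment_ratio(lifecycle_savings_m, investment_m)
print(f"payback = {payback:.1f} years, SIR = {sir:.2f}")
print("meets DOD criteria:", payback <= 10 and sir >= 1.25)
```

A project passes the screen only when both conditions hold; a high SIR alone does not compensate for a payback period beyond 10 years.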
The Air Force guidance for business-case analyses describes such an analysis as a decision-support document that identifies alternatives and presents convincing business, economic, risk, and technical arguments for selection and implementation to achieve stated organization objectives or imperatives. Among other things, the benefits and total costs to the government should be developed over the full life cycle of the project for each alternative, and they should address the status quo. The guidance also notes that, for enhanced-use leases, these analyses focus on identifying the highest and best use for the fair market value of the asset and presenting the business, economic, and technical arguments in support of the project. Reviewing the Air Force’s feasibility study, we found that the Air Force generally followed this guidance. Specifically, we found the following: For each of the five options presented in the feasibility study, the Air Force included estimated costs and a comparison of those costs against each of the others and against the costs of continuing to operate the plant. Table 1 summarizes the total estimated costs for each of the five options presented in the feasibility study and for the status quo as well as the estimated cost savings to the Air Force for each of the options compared to the status quo. Estimated costs for each option were laid out in different broad categories, including operation and maintenance and repair and replacement of existing assets, among others. Further, these categories varied based on the characteristics of the option being described. For example, the first option involved replacing the current plant, with its capacity of 22.5 megawatts, with a smaller, 8-megawatt plant that would not be connected to the grid. Thus, the description of this option did not include costs for the grid connection and included a reduced estimate for operation and maintenance costs after the smaller plant became operational.
Appendix II contains additional information on the estimated costs of the five options in comparison to each other and to the status quo. All costs were calculated over a 50-year period because the Air Force considered 50 years to be the expected term of an enhanced-use lease. Estimates were expressed in 2010 dollars to provide the same present value across all options. The feasibility study used a standard figure for the cost to purchase power under the enhanced-use lease and utilities privatization options and included a separate amount for the cost to purchase steam for heat. However, the option to close the plant included only the costs for purchasing power and did not clearly account for the costs of fuels to operate the new heat systems. In general, the Air Force projected that power costs would increase but that those increased costs would be offset by decreased capital and labor costs. The Air Force submitted the grid connection and heat plant project to OSD in January 2012, as a proposal for funding under the fiscal year 2013 Energy Conservation Investment Program. The Energy Conservation Investment Program seeks to fund projects that will produce improvements in energy consumption, cost, management, and security; one of the program’s objectives is to dramatically change the energy consumption at individual installations or joint bases. Funds for the program are allocated across four categories of projects: renewable energy, energy conservation, water conservation, and energy security. Proposals are evaluated based on several metrics, including the savings-to-investment ratio and the payback period, among others, and all proposals submitted to OSD must include a life-cycle cost analysis for the proposed project. The Air Force submitted the DD Form 1391, Military Construction Project Data, and the building life-cycle cost report for its proposed project along with a spreadsheet addressing the data elements requested in program guidance.
Air Force officials told us that the building life-cycle cost estimate analysis is accepted by OSD and is the only tool the Air Force uses for assessing energy options. We reviewed the original DD Form 1391 that was prepared in January 2012 for the Clear Air Force Station project and submitted with the building life-cycle cost estimate as part of the submission to the Energy Conservation Investment Program. This estimate was subsequently revised in September 2013 as the Air Force refined its cost estimates and prepared to work on the contracts with Golden Valley Electric Association and the U.S. Army Corps of Engineers. Original Cost Estimate. The building life-cycle cost estimate in the original DD Form 1391 determined that annual energy costs would increase by $7.36 million, whereas annual recurring savings in operation and maintenance costs would be approximately $8.87 million per year. The estimate also identified capital projects valued at approximately $16.2 million in plant upgrades for the next 5 years, which would be avoided if the plant was decommissioned. The savings-to-investment ratio was 8.42, and the payback period was 7.59 years. A higher savings-to-investment ratio indicates greater savings in comparison to the investment, and the OSD program manager told us that the Energy Conservation Investment Program funds many projects that have ratios between 1.4 and 2.0. The ratio for this project was significantly greater than the program requirement of 1.25. Also, the payback period was within the requirement of 10 years or fewer. Although the costs of power and diesel fuel are shown to be greater than the costs of coal in this projection, these costs are less than the anticipated savings in labor and capital costs. Revised Cost Estimate. In the revised DD Form 1391, annual cost savings from reduced operation and maintenance expenditures were estimated to result in a net first-year savings of $2.68 million. The same capital costs would be avoided.
The savings-to-investment ratio was now 3.30 and the payback period was now about 6.62 years. This was a marked drop from the earlier 8.42 savings-to-investment ratio but was still greater than the 1.25 threshold. Additionally, the payback period was well within the requirement of 10 years or less. We talked to Air Force officials about the reasons for the decrease in expected savings for the grid tie-in and heat plant project in the second estimate. These officials told us that the original DD Form 1391 was based on information that had been collected from the raw steam and electrical output of the plant that is supplied to the composite area. They said that, because the energy is measured at the production point, much of the wasted energy in the generation process was captured in the operation and maintenance costs. According to the officials, the revised DD Form 1391 has more refined costs because an engineering heat analysis that modeled the heat consumption of the composite area buildings was performed in support of the design of the new heat plant. The officials said that the modeled data provided a more accurate picture based on seasonal and peak load conditions and enabled the Air Force to identify more accurate maintenance costs compared to those in the feasibility study. Another change was in the estimated costs to purchase power from Golden Valley Electric Association. The feasibility study estimated that Clear Air Force Station could purchase electricity at a price of 11.5 cents per kilowatt hour. As of April 2013, this price was estimated to be approximately 13.23 cents per kilowatt hour under the GS-3 industrial rate. 
Representatives from the union that represents plant personnel had questioned the accuracy of the Golden Valley Electric Association rate used by the Air Force, but Air Force officials told us that the Air Force had confirmed with Golden Valley Electric Association that it would receive the industrial rate, as opposed to the GS-2 commercial rate of almost 16 cents per kilowatt hour. The officials told us that this was because the Air Force, not the public utility, would be responsible for maintenance of the substation and switchgear that will be located on the installation. In reviewing the cost estimates for the five options for the plant, we found there were some items and associated costs that were not fully developed in the feasibility study but were later more fully developed as the Air Force took steps to carry out its plans. Air Force guidance on economic analyses indicates that minor costs or costs common to all of the alternatives being considered may be excluded when conducting a preliminary economic analysis. However, although the feasibility study presented the option to close the plant as the alternative the Air Force would pursue if it did not receive any proposals for an enhanced-use lease, the study did not fully document all of the expected costs for the plant-closure option. While adding this information is unlikely to have materially affected the Air Force’s decision to close the plant, fully developing those costs in the feasibility study would have provided decision makers with more complete information and a better understanding of the proposed actions. Although some cost details were not available at the time of the feasibility study, having a better description of the sources of costs and what actions the Air Force would need to take to provide a heat system for the composite area would have given decision makers a fuller picture of what the Air Force would need to buy or consider in assessing the costs of this option. 
In particular, it would have been useful to present an estimate of other associated costs, such as the labor or additional contract costs, over the same period. Two instances where we saw that costs for the plant-closure option were not fully developed were heat system costs and potential labor or contract costs. The analysis for the plant-closure option included a placeholder for a heat system with estimated costs of almost $13 million (in 2010 dollars) for boilers to heat the buildings. However, the feasibility study stated only that the buildings could be converted to electric heat, or the Air Force would buy and install package steam generators in 2015 to supply steam to the buildings. The information in the study did not specify the types of systems that the Air Force anticipated using, nor did it provide information on the source of the $13 million estimated cost. Further, there are likely to be other costs associated with the heat system, such as diesel storage tanks, possible modifications to the existing buildings to accept a different method of receiving heat, and a means of protecting existing water and sewage pipes that are currently kept warm by their proximity to the steam lines; these other costs were not presented in the study. Additionally, boilers have a shorter life span than a power plant. The Air Force’s revised building life-cycle cost estimate included a life cycle of approximately 20 years, which is OSD’s estimated economic life cycle for boilers. As a result, the Air Force would likely have to replace the boilers at least twice during the 50-year period covered by the plant-closure option. The Air Force’s estimated cost for the boilers in 2015 was $15 million. Applying this estimate to replacement boilers in years 26 and 46 would require an additional $7.6 million and $4.5 million, respectively, in 2010 dollars. 
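The year-26 and year-46 figures above are present values of the same $15 million replacement cost. As a sketch, they are consistent with discounting at a real rate of roughly 2.65 percent; the rate is our inference for illustration, since the report does not state it, while the cost and replacement years come from the report.

```python
# Present-value sketch of the boiler-replacement figures above.
def present_value(cost: float, years: int, rate: float) -> float:
    """Discount a future cost back to base-year (2010) dollars."""
    return cost / (1 + rate) ** years

replacement_cost_m = 15.0   # $M, estimated 2015 boiler cost
assumed_rate = 0.0265       # assumed real discount rate (our inference)

for year in (26, 46):
    pv = present_value(replacement_cost_m, year, assumed_rate)
    print(f"Replacement in year {year}: about ${pv:.1f}M in 2010 dollars")
```

Under this assumed rate, the sketch reproduces the $7.6 million and $4.5 million figures cited above.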
We determined, and Air Force officials acknowledged, that this would result in a total increase of $12.1 million for this option, which would still be less costly than the status quo. The plant-closure option assumed that the plant would be closed after 2015 and that starting in 2016 there would be no operation and maintenance or general and administrative costs. However, there would still be some remaining costs for continuing to provide heat to the base, and they are not included in the estimate for the plant-closure option. The plant-closure option accounts for savings in labor costs, since the existing plant personnel would no longer be needed once the plant is decommissioned. While these costs would no longer be associated with operating and maintaining the plant, there would likely be some personnel costs incurred for this option. The plant currently provides both power and heat to the composite area, and connecting to the grid and purchasing electricity from Golden Valley Electric Association addresses only the provision of power. As described above, the Air Force will be installing heat systems for the composite area, which will require personnel to operate and maintain them. Air Force officials told us that operating the heat boilers could require two to eight personnel, depending on the number, type, and size of the heat system that is developed. If operations and maintenance were provided by the current base operating support contractor, then the contractor would likely require additional funding in order to assure this coverage. The Air Force would incur these costs either as labor costs or as increases in the base operating support contract. Air Force officials told us that they initially were considering a centralized steam plant with a high-pressure system—which would have required two operators at a time for 24 hours a day, 7 days a week—and that the Air Force would have likely retained civilian plant employees for this work. 
According to the officials, if the Air Force had pursued this higher-pressure option, which they said had been considered during the evaluation of heat plant options, it would have required eight dedicated plant personnel. The officials said that they would likely have retained some Air Force civilian employees, but about 75 percent fewer employees than the 34 positions identified in the feasibility study. The status quo estimate calculates the labor costs, which are part of the operation and maintenance costs, to be approximately $4.3 million per year in 2010 dollars. If almost one-fourth of the labor costs are added back into the model for 2016 and beyond, this will result in an increase of almost $22.5 million in this estimate (in 2010 dollars). Since the initial discussions, the Air Force has moved away from the high-pressure system and is now considering a medium-pressure heat source. Air Force officials told us that they now anticipate having the base operating support contractor operate and maintain the boilers. They estimated that funding this item will require approximately $257,000 per year above the existing contract, which would include labor, equipment, and supply components. We calculated that adding these contract costs to the estimate each year would have added an additional $5.7 million in 2010 dollars over the 50-year period, which would still make it less costly than the status quo. Despite these omissions, we found that the differences in the plant-closure cost estimate were unlikely to have materially affected the Air Force’s decision to close the plant. Specifically, the costs of all of the options where the plant remained open under Air Force operation were significantly higher than the costs of the option where the plant would be closed, even accounting for the omissions discussed above.
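The $5.7 million figure above can be approximated as a discounted annuity. This is a sketch under our own assumptions: we take the costs to begin in 2016 and run through the end of the 50-year horizon, and we assume a real discount rate of about 2.65 percent; the report does not spell out its exact timing or rate, and the $257,000-per-year input is the only figure taken from the text.

```python
# Discounted-annuity sketch of the recurring base operating support
# contract cost. Start year, horizon, and discount rate are our assumptions.
annual_cost_m = 0.257   # $M per year, added to the support contract
assumed_rate = 0.0265   # assumed real discount rate
base_year = 2010

total_pv_m = sum(
    annual_cost_m / (1 + assumed_rate) ** (year - base_year)
    for year in range(2016, 2061)   # 45 years within the 50-year horizon
)
print(f"Present value: about ${total_pv_m:.1f}M in 2010 dollars")
```

The sketch lands near the report's $5.7 million; the small gap reflects our assumed timing and rate rather than any difference in method.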
There were some items that could have been more fully documented and included in the plant-closure option, particularly since this was the option the study recommended be pursued if the enhanced-use lease proposal were unsuccessful. However, while including those items that were omitted could have been helpful for decision makers and for clearly documenting differences between the options, the differences in those dollar amounts were unlikely to have materially affected the determination of overall savings compared to the status quo option. Regarding the heat boilers, for example, the Air Force has since refined its cost estimates as part of the building life-cycle cost estimates developed for the submission to the Energy Conservation Investment Program. The latest cost estimate for purchasing the boilers is now less than half of the amount estimated in the feasibility study. Further, the labor costs for 8 personnel would still be approximately 75 percent lower than the labor cost used in the feasibility study, which was for 34 personnel. Instead, the Air Force could face approximately $250,000 per year in labor and related costs. Even the original boiler estimate falls far below the estimated status quo cost. That is, the original boiler estimate—with two replacements over the period covered by the feasibility study, plus eight civilian positions to run a centralized heat plant—would have brought the estimated cost for the plant-closure option to about $273 million (in 2010 dollars), versus estimated status quo costs of almost $507 million over the same 50-year period. The Air Force identified goals other than cost savings in relation to the power plant at Clear Air Force Station. Specifically, in addition to its economic analyses of various power plant alternatives and the subsequent elimination of some options, the Air Force also considered other factors when making its decision regarding the future of the plant. 
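The cost and screening figures discussed in this section can be tallied in a short script. All inputs below come from the report text; the checks themselves are ours.

```python
# Feasibility-study comparison (2010 dollars, 50-year horizon), including the
# boiler replacements and staffing additions discussed above.
closure_cost_m = 273.0      # $M, plant-closure option with omitted items added
status_quo_cost_m = 507.0   # $M, status quo estimate
savings_m = status_quo_cost_m - closure_cost_m
print(f"Closure saves about ${savings_m:.0f}M, "
      f"or {savings_m / status_quo_cost_m:.0%} of the status quo cost")

# DD Form 1391 screening metrics versus the ECIP thresholds
# (SIR >= 1.25, simple payback <= 10 years).
estimates = {
    "original (Jan 2012)": {"sir": 8.42, "payback_years": 7.59},
    "revised (Sep 2013)": {"sir": 3.30, "payback_years": 6.62},
}
for name, e in estimates.items():
    meets = e["sir"] >= 1.25 and e["payback_years"] <= 10
    print(f"{name}: SIR {e['sir']:.2f}, payback {e['payback_years']:.2f} years, "
          f"meets ECIP criteria: {meets}")
```

Both estimates clear the program thresholds comfortably, which is consistent with the report's conclusion that the omissions were unlikely to have changed the decision.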
These included the Air Force goal of no longer operating and maintaining the plant, because the Air Force does not consider power generation to be a core competency; the Air Force goal of reducing energy costs at Clear Air Force Station; and the Air Force need to ensure reliable power for current and future mission-critical facilities and supporting facilities. Taken together, these factors and the Air Force’s analyses formed the basis for the Air Force’s decision to close the plant once the grid connection, heat systems, and backup power sources are operational. These factors and their effect on the Air Force’s decision are discussed below. In the feasibility study, the Air Force indicated that one constraint for the study was that both Air Force Space Command and Clear Air Force Station stipulated that the Air Force did not wish to become a de facto utility with the assumption of resultant roles, responsibilities, and risks. To that end, the study stated that relieving the Air Force of the responsibility for operating and maintaining the plant was a primary test for determining an optimal alternative operating model for the plant. Of the five options presented in the feasibility study, the Air Force concluded that options 3 (lease) and 4 (privatize the plant) met this test. Options 1 (smaller plant with no grid connection) and 2 (smaller plant with grid connection), on the other hand, did not meet this test, since the Air Force would continue to operate and maintain a plant at Clear Air Force Station under those scenarios, as well as under the status quo option. In option 5, the plant would close so that no entity would be operating and maintaining a plant on the installation. Air Force officials told us that power plant maintenance and operations are not core competencies for the service, and the Air Force is seeking to move away from operating power-production facilities worldwide.
The feasibility study highlighted that the Air Force was looking for ways to reduce the energy costs at Clear Air Force Station. As described earlier, the installation has an energy intensity, or energy consumption per square foot of building space, that is approximately seven times the average for Air Force Space Command installations, and its cost per square foot is about double that of the average for the command. Officials from Air Force Space Command and the Air Force Civil Engineer Center told us that the Air Force has a service-wide goal of reducing its energy intensity by 37.5 percent by 2020, and they explained how the plant at Clear Air Force Station fits into those larger energy goals. Within Air Force Space Command, the 21st Wing is seeking facilities energy reductions for its seven installations that report on energy. The energy-reduction project at Clear Air Force Station is a command priority, and the 21st Wing determined that connecting the installation to the grid will contribute greatly to the wing meeting its energy-efficiency goals. In Air Force Space Command’s estimate, eliminating on-site energy generation at Clear Air Force Station will reduce the installation’s annual energy consumption by about 85 percent, from approximately 800 million BTUs to 123 million BTUs. According to the Air Force Space Command instruction regarding utility reliability requirements in place at the time of the feasibility study, the missile warning radar system for Clear Air Force Station, Alaska, required 0.9999 annual utility availability, or 99.99 percent. This translated to a downtime of 53 minutes a year. The 2010 feasibility study stated that, in 2009, Golden Valley Electric Association’s system had a reliability of 99.99 percent, experiencing about 10 to 20 minutes of outages.
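Two figures above are simple conversions: the 99.99 percent availability requirement expressed as allowable downtime, and the projected drop in annual energy consumption. In this sketch all inputs come from the report.

```python
# Quick checks of the availability and consumption figures cited above.
minutes_per_year = 365 * 24 * 60   # 525,600 minutes
availability = 0.9999
allowed_downtime_min = (1 - availability) * minutes_per_year
print(f"Allowed downtime: about {allowed_downtime_min:.0f} minutes per year")  # matches the 53 minutes cited

before_btu = 800.0   # million BTUs per year, with on-site generation
after_btu = 123.0    # million BTUs per year, after grid connection
print(f"Consumption reduction: {(before_btu - after_btu) / before_btu:.0%}")   # about 85 percent
```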
The study assessed that Golden Valley Electric Association’s minimal system outages and the possibility that Clear Air Force Station could negotiate uninterruptible service with the utility would mean that Golden Valley Electric Association would likely serve as a reliable backup power source. Officials stated that DOD’s reliability standards apply solely to mission-critical facilities, and the radar is the only mission-critical facility at Clear Air Force Station. Therefore, according to these officials, reliability standards apply only to the radar and not to the composite area or the rest of the installation. Air Force officials explained that the new emergency power plant for the radar mission, which had been under construction at the time of the feasibility study, had since been completed and that it would provide the required backup power and heat for the radar mission. With both the emergency power plant and a connection to the grid, the radar could shift from the plant to the grid to acquire the electricity needed to provide power and heat to the radar facility. In our discussions with Air Force officials, we learned that the power plant had experienced two outages a few months earlier—the first outages in more than 16 years—highlighting the age and condition of the plant and the importance of backup power for the radar mission. Air Force and Missile Defense Agency officials also described upcoming projects to expand the emergency power plant capabilities by installing a third generator. Additional diesel storage tanks will also be constructed to ensure that additional fuel resources are on-site near the radar facility and available to support backup power generation. Although it is not considered mission-critical, the composite area also requires a reliable source of power and heat, due to the extreme temperatures that could quickly damage facilities and utility systems in the event of a power outage. 
Air Force officials told us that the planned 1-megawatt backup generator will provide the minimal power needed for the heat plants and electricity for the composite area if the installation loses grid power. They further stated that the tie-in to the electric grid will be configured in such a way that power could be brought in from a different direction should there be problems somewhere along the Golden Valley Electric Association transmission line. For example, if there is a power outage south of the installation that affects the Golden Valley Electric Association transmission line, power could be brought in from the north, and vice versa. The Air Force and the Missile Defense Agency have planned radar upgrades for Clear Air Force Station in the near future, but the Air Force has determined that these upgrades are not likely to have significant effects on Clear Air Force Station’s energy requirements. Since changes to the radar in 2001 had resulted in the significant reduction in power requirements for Clear Air Force Station, we discussed with appropriate Air Force and Missile Defense Agency officials the potential impact of these planned changes on the installation’s energy requirements and what confidence the Air Force had that the planned capability at the installation would be sufficient to support any adjusted energy requirements. In the feasibility study, the Air Force addressed the potential effects of the radar upgrades on Clear Air Force Station’s energy demand, stating that the new radar system was expected to consume an amount of power roughly equal to the power currently being shed to the load bank, which would result in no appreciable increase in electricity demand. As stated previously, energy demand at Clear Air Force Station ranges from 3 to 6 megawatts, and the power delivered to the load bank ranges from approximately 100 kilowatts in the winter to 1,000 kilowatts in the summer. 
Air Force Space Command officials summarized their assessment of the effect of the radar upgrade on energy requirements. According to these officials, there are two pending Missile Defense Agency projects that will influence the energy load and cost calculations at Clear Air Force Station. Of these projects, one was previously assessed as potentially requiring a temporary load increase during implementation and simultaneous operation but not a net increase in consumption once the transition is complete. For the other project, the Air Force did not have load figures but assessed that the project would not greatly increase the energy demand at Clear Air Force Station. In addition, the officials addressed the potential effects on energy demand of three other upcoming military construction projects at Clear Air Force Station. They told us that the Air Force had concluded that, overall, there would not be a net increase in energy demand, due in part to more energy-efficient construction. The Air Force considered and evaluated several options before selecting the option to close the plant after first connecting to the local grid and building a separate heat system. Officials said that they obtained ideas for the options they considered from stakeholders, including Clear Air Force Station, 21st Space Wing, and power plant employees, and fully evaluated some of the options that looked more promising. Still other options were considered but were not fully evaluated in formal studies because they did not generate as much savings or the Air Force did not consider them to be economically feasible. For example, the Air Force did not fully assess the costs of more incremental changes to current operations of the existing plant, such as retaining ownership of the plant but downscaling its operations, because extensive capital improvement costs would remain (although the costs of coal would be reduced). 
Among the options that it considered, the Air Force found that some options did not generate as much savings as other options and that some were not feasible from the Air Force’s perspective because the technical, practical, and mission challenges were viewed as too difficult to overcome. The Air Force pursued the option to solicit an outside entity to assume the plant’s operations and maintenance through an enhanced-use lease, but no outside entity ultimately submitted a proposal in response to the Air Force’s solicitation. Finally, as the U.S. Army Corps of Engineers further developed studies on the designs of the heat systems, various technical issues emerged, leading to changes in the design of the heat system. The Air Force began to consider what it should do with the power plant at Clear Air Force Station after it had identified the plant as operating inefficiently. As noted earlier, the 21st Space Wing had been looking at ways to improve efficiency and cut costs for the Clear Air Force Station power plant as far back as the 1990s, but Air Force Space Command believed that it was unable to pursue major changes until after emergency backup power for the installation’s mission was ensured. For this reason, the 21st Space Wing did not formally program any requirements prior to 2008 that would have led it to seek funding for such projects. In August 2009, the Air Force Real Property Agency prepared a briefing that referenced a 2008 concept opportunity study that identified the plant as underutilized and identified opportunities and challenges associated with an enhanced-use lease for the plant. For example, the briefing identified potential environmental review as a challenge that might undermine the value of the plant for a potential lessee. The same briefing discussed the establishment of a working group to conduct an opportunity analysis for the plant. 
In the same month, officials from Air Force Space Command, the 21st Civil Engineering Squadron, the Air Force Real Property Agency, and the Air Force Civil Engineer Support Agency met to discuss opportunities for utilities privatization or an enhanced-use lease of the plant. The meeting attendees discussed potential interest from Golden Valley Electric Association to acquire the plant’s excess energy and agreed to conduct a prefeasibility study to compare available options in order to determine the best approach for the power situation at Clear Air Force Station. The Air Force then prepared a draft concept opportunity study in June 2010, as a precursor to the feasibility study. The concept opportunity study was a qualitative study that identified strengths, weaknesses, opportunities, and threats for four options as they pertained to energy reliability; environmental requirements; potential for revenues, savings, or energy efficiency for the Air Force; and reductions in non-mission-critical resources and the time involved in plant functions. Three of the options were also studied in the feasibility study, which provided a quantitative comparison of the savings generated for the Air Force by each of the five options when compared to the status quo. Both studies considered the options for the enhanced-use lease, utilities privatization, and selling excess power. The possibility of closing the plant was first raised in the concept opportunity study and was studied further in the feasibility study. Air Force officials said that economic analyses drove the decision they made regarding the power plant and determined that some options were not economical because they did not generate as much savings for the Air Force. The concept opportunity study stated that the Air Force continuing to own and operate the plant would not be advantageous because the plant would continue to produce energy in excess of requirements, using old equipment. 
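The feasibility study’s quantitative comparison is not reproduced in this report, but the general mechanics of such an analysis — discounting each option’s capital and operating costs and netting them against the status quo — can be sketched as follows. All dollar figures, the discount rate, and the horizon here are hypothetical placeholders for illustration, not values from the study:

```python
def present_value(annual_cost: float, years: int, rate: float) -> float:
    """Present value of a constant annual cost stream over `years` at `rate`."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

RATE, YEARS = 0.03, 25  # hypothetical discount rate and analysis horizon

# (one-time capital cost, annual operating cost) in $M -- all hypothetical
options = {
    "status quo": (0.0, 6.0),
    "smaller replacement plant": (30.0, 3.5),
    "enhanced-use lease": (10.0, 4.0),
    "close plant, buy from grid": (25.0, 2.0),
}

baseline = present_value(options["status quo"][1], YEARS, RATE)
savings = {
    name: baseline - (capital + present_value(annual, YEARS, RATE))
    for name, (capital, annual) in options.items()
}
for name, s in sorted(savings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${s:.1f}M vs. status quo")
```

With these placeholder inputs, closing the plant and buying grid power shows the largest net savings, which is qualitatively consistent with the study’s conclusion; the actual study’s inputs and results may differ.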
These officials said the feasibility study indicated that it would be cost-prohibitive to update the existing plant. The feasibility study estimated the costs for updating the existing plant in the near term as about $21 million. Those costs include items such as installing a new combustion-control system and replacing a boiler tube. Additionally, plant employees told us that, due to the age of the plant, replacement parts for plant equipment and controls have become difficult to find. The feasibility study found that options for the Air Force to operate a smaller replacement plant, with or without selling excess power from this smaller plant, would not generate as much savings for the Air Force as certain other options. The feasibility study also found that the utilities privatization option would generate slightly greater savings than an enhanced-use lease and that the option to close the plant would generate the most savings for the Air Force compared to the status quo. The feasibility study concluded that the Air Force should pursue the enhanced-use lease in order to obtain realistic valuations of the plant from potential lessees, or, if the lease project were unsuccessful, close the plant. Table 2 provides a summary of several of the options that the Air Force considered. The details of the options considered in either the concept opportunity study or the feasibility study are discussed below. 1. Replace the current plant with a plant sized at 8 megawatts: This option was not considered in the concept opportunity study but was included in the feasibility study. Under this scenario, the current plant would be replaced over a 5-year period with a plant sized for the energy demand at the installation at 8 megawatts. This option was shown to generate some savings compared to the status quo in the feasibility study, but not to generate as much savings as the other options. 2. 
Connect to the power grid and sell excess power: In both the concept opportunity study and the feasibility study, the Air Force considered connecting a plant on the installation to the local power grid in order to sell the excess power it would generate. The concept opportunity study considered the option for the existing plant to produce power in excess of mission requirements and sell the excess power through the grid connection. The study concluded that, in order for power sales to be economical, the plant would likely need to operate at its full 22.5-megawatt capacity. Under this scenario in the feasibility study, the Air Force would replace the existing plant with a plant sized at 8 megawatts and sell any excess power to a utility or another military base in Alaska. The feasibility study found that this option generated lower savings than other options and that revenues generated from the smaller plant would not cover the cost of the connection to the grid, because there would not be as much power in excess of the installation’s needs available for sale. Furthermore, there were some complications associated with operating the plant at its full 22.5-megawatt capacity: As previously stated, the Air Force currently does not operate the plant at full capacity in order to avoid having it classified as a major source for hazardous air pollutants. According to Air Force officials, increasing power production at the current plant would also require them to install new combustion and emission monitoring controls and to consider changes to the air quality control operating permit. Officials additionally said that an environmental analysis associated with obtaining a new permit would also be required, and that permit would take 2 to 3 years to obtain. 
Air Force officials indicated that the Air Force would not be able to sell power to private entities and that selling power to a public utility or other government entities was not economically viable. They also said that, if the Air Force sold power to other government entities, such as Army bases in Alaska, it would still incur the costs of capital improvements to the plant but would not be reimbursed for that investment. In addition, they said that they had considered selling excess power to the Army but determined that this option would not have been economical for the Army because the Army could buy any additional power it might need at a cheaper rate from the local public utility. 3. Lease the plant to a private entity or public utility through an enhanced-use lease: This option was considered in both the concept opportunity study and the feasibility study. Under this scenario in the concept opportunity study, the Air Force would negotiate with the lessee to purchase power and steam. The concept opportunity study recommended moving forward with a quantitative evaluation, or business-case analysis, of the enhanced-use lease option. Under this scenario in the feasibility study, the Air Force would pay for the connection from the existing plant to the grid and would negotiate a power and steam purchase agreement with the lessee. Additionally, the lessee would sell power to the market over the grid. The feasibility study makes the assumption that the lessee would replace the plant at a capacity of 22.5 megawatts and that the Air Force would reimburse the lessee for capital upgrades to the plant, while the revenue that the lessee generated through power sales would be deducted from the amount the Air Force would pay the lessee for the capital upgrades. 
The feasibility study found the enhanced-use lease to be the option that generated the third greatest savings among the options that were evaluated, close to the savings generated for one of the scenarios for utilities privatization. In February 2011, officials from the Air Force Real Property Agency briefed the Privatization Executive Steering Group and the Basing Requirement Review Panel, both of which concurred with the recommendation to pursue an enhanced-use lease. 4. Privatize the plant: This option was also considered in both the concept opportunity study and the feasibility study. Under this option in the concept opportunity study, the Air Force would sell the existing plant to a third party and negotiate a power and steam purchase agreement with the new owner. The Air Force evaluated two scenarios under this option in the feasibility study. Under both scenarios, the Air Force would pay for the connection from the plant to the grid, and the new owner would replace the plant up front (by 2020) at a capacity of 22.5 megawatts. As in the enhanced-use lease option, the Air Force would reimburse the new owner for plant upgrades, and the new owner would sell power to the Air Force and to the market over the grid. Revenue generated by the plant owner would be deducted from the amount the Air Force would reimburse the new owner for capital upgrades. In one scenario, the Air Force would pay its share of the owner’s capital investment; in the second scenario, the Air Force would compensate the owner for all of the capital investments. While the first scenario would generate some savings to the Air Force, the second scenario would generate costs rather than savings. The concept opportunity study identified several advantages that an enhanced-use lease would have over utilities privatization. 
For example, under the enhanced-use lease, the Air Force would have more flexibility to revert the equipment and operations back to Air Force control and, if desired, to purchase power through a grid interconnection in the future. According to this study, an enhanced-use lease could also better accommodate changes in mission requirements, energy pricing, and utility and environmental regulations. The feasibility study identified similar issues for consideration for both the enhanced-use lease and utilities privatization options, including that for either option to be attractive to an outside entity, a major upgrade of equipment would likely be required to enable the lessee or new owner to maximize the amount of excess power it could sell on the grid. 5. Close the plant: The concept opportunity study raised the possibility of connecting to the local power grid as the installation’s sole source of power, with backup diesel generators for power, and briefly identified some issues to take into account were the Air Force to consider this option. Under this option in the feasibility study, the Air Force would build the connection to the grid, install power and steam backup systems, purchase power from Golden Valley Electric Association, and shut down the plant. The costs associated with this option include approximately $22 million (in 2010 dollars) to decommission the existing plant. The option to close the plant generated the most savings for the Air Force compared to the other options considered in the feasibility study. Air Force officials said that other options were discussed but were not formally evaluated and documented in studies. For example, officials told us that they considered connecting to the local grid and then running the plant seasonally or running the plant to failure, that is, performing maintenance as needed but not making any major upgrades to extend the life of the plant. 
Some plant employees said they believed that the most efficient way to run the plant would be to run one boiler and one turbine generator instead of two and to invest in the plant to continue its operation, for example by upgrading its combustion controls. In the scenario envisioned by plant employees, the plant would provide primary power and heat to the installation. But the installation could still establish a connection to the local grid for sale of electricity and install a separate heat system as backup power and heat for nonradar areas. These employees believed that operating the plant would be less expensive under this scenario. Air Force officials said that they had considered running only one boiler and one turbine generator and had run the plant this way in the summer of 2012 as part of a study on ways to cut utility costs. They said that this option would reduce the costs of coal but would still require the Air Force to invest in extensive capital improvements to the plant and be responsible for the environmental liabilities of operating the plant. The Air Force did not fully assess the costs of this option, including its effects on labor and maintenance costs, for these reasons. Since the Air Force began considering alternatives to its current operation of the power plant, it has taken steps to improve the reliability of its energy supplies. In particular, the availability of backup power generation and heat for the radar and related facilities and for the composite area means that the only service provided by the existing power plant that does not have an independent backup supply is the heating for the composite area. In the feasibility study, the Air Force did not formally evaluate the feasibility or cost of installing boilers to provide heat to the composite area as a backup to the existing plant because, as stated earlier, this study focused on those options that the Air Force considered to be economically feasible. 
Rather, boilers to provide heat for the composite area were considered only under the option to close the plant. However, as part of developing the enhanced-use lease project, the Air Force has subsequently taken steps to acquire a backup heat system, which is discussed later in this report. Table 2 provides a summary of several of the options that the Air Force considered. Although the results of the feasibility study showed that closing the plant would generate greater cost savings for the Air Force than an enhanced-use lease, the study recommended that the Air Force first pursue the enhanced-use lease. The study stated that the cost for a lessee to implement capital investments in the plant could possibly be lower than the estimates provided in the study and that the lessee might be able to capture revenue and increase the market value of the plant. Air Force officials said the Air Force pursued the enhanced-use lease in order to leverage industry knowledge and resources and seek creative solutions for keeping the plant open. One Air Force official said that proposals for the enhanced-use lease could have varied by offeror and led the Air Force in different directions than what was envisioned for the enhanced-use lease in the feasibility study. Additionally, negotiations with the highest-ranked offeror would have determined the final terms and conditions of the enhanced-use lease. As noted earlier, the Air Force released a statement of need in August 2012 to notify interested parties of the enhanced-use lease opportunity, and it held an Industry Day. The companies and public utility that participated in the Industry Day had an opportunity to ask questions, and officials from two of the entities we spoke with said that they had requested and been provided a separate tour of the plant. 
In October 2012, the Air Force released its final request for qualifications, in which it indicated that the lease would begin after the Air Force completed the project to provide a new heat plant and connect the installation to the power grid. Additionally, the document stated that the Air Force did not plan to incur additional expenses to maintain the plant after the completion of this project. Between the conclusion of the feasibility study in 2010 and the release of the statement of need in 2012, the Air Force’s plans for the enhanced-use lease changed due to several factors: At the conclusion of the feasibility study and as the Air Force began to develop the enhanced-use lease option, the Air Force intended to connect the plant’s electrical distribution system to the power grid and enter into an agreement with a lessee to obtain both power and steam for heat. Officials said that connecting the plant to the power grid would make the plant economically viable to the lessee, which might then be able to sell electricity to other customers through the power grid. The Air Force also intended to discuss recouping the cost of the connection from the lessee through the lease negotiation process. When the enhanced-use lease project was approved within the Air Force and sent to OSD for review, OSD determined that the project did not meet the conditions for an enhanced-use lease because a heat plant was not considered and the plant would still be needed for “public use” until the connection to the power grid was made. As a result of this review, the Air Force expanded the scope of its project to include a heating system and a backup generator. Additionally, the transmission line would now be connected to a substation on the installation rather than to the power plant’s electrical distribution system, as originally considered in the feasibility study. 
Officials said that this change resulted in a delay of about a year for the enhanced-use lease solicitation process to begin. The Air Force did not receive any responses to its request for qualifications, and officials said the Air Force determined that receiving no bids on the enhanced-use lease demonstrated that keeping the plant running did not make business sense. We spoke with representatives from two companies and a public utility that had attended Industry Day and with Air Force officials about possible reasons for this lack of industry interest in pursuing the enhanced-use lease. Among the things they cited were the following: Environmental standards: Air Force officials and a representative of the public utility we spoke with cited the costs of upgrading the power plant to meet environmental standards as a deterrent. One company’s representative said that because the Air Force’s energy demand at Clear Air Force Station is only a small percentage of the load capacity of the plant, the lessee would likely have to sell excess power to other customers. However, as discussed previously, if the plant operated at increased capacity, it would potentially be reclassified as a major source for hazardous air pollutants, which might necessitate additional controls and monitoring requirements. The representative for the public utility additionally cited concern with the level at which the plant would be allowed to produce output if a new permit could not be obtained and the length of time associated with obtaining a new permit. A representative from another company also cited the lengthy time associated with obtaining a new permit as a concern. The Air Force believed that a new permit would take 2 to 3 years to obtain. Need for upgrades: Air Force officials and company representatives said that the plant required major upgrades. 
A representative from one company we spoke with said that the Air Force would have required the lessee to upgrade the plant to meet government standards but that those standards were unclear in the information that the Air Force provided to potential lessees. A representative from the public utility said that in order for the plant to meet environmental standards, it would need to upgrade the central plant control system. The utility conducted its own assessment of needed plant upgrades and found that additional repairs may be needed. Uncertainty of the plant’s profitability: A representative from the public utility told us that the utility conducted its own assessment of the plant and found that even if the plant could be run at its full capacity after obtaining the necessary environmental permits, production would be more costly than that utility’s other power- production alternatives. The utility’s assessment also found the estimated cost per megawatt hour would be higher than the utility had expected. Transformer with greater transmission capacity needed: One company’s representative with whom we spoke also cited the transformer’s transmission capacity as a potential issue that would affect the company’s ability to sell excess power. The transformer the Air Force planned to buy and place on the installation to receive power from Golden Valley Electric Association did not have the capability to increase the transmission voltage in order to deliver electricity back to the power grid for sale to outside customers. Additionally, since the transmission line as described in the request for qualifications for the enhanced-use lease would not be connected to the power plant, an electrical connection would need to be made between the power plant and the substation on the installation with a transformer capable of increasing the voltage. 
Air Force officials said that the Air Force did not want to incur the additional cost of purchasing a transformer that could deliver electricity back to the grid, because a larger capacity transformer would potentially not add value for the Air Force. Competition regarding rates: One Air Force official and a representative from a company we spoke with said that Golden Valley Electric Association, the local utility, was perceived to have advantages over other companies in negotiating the enhanced-use lease, because the Air Force was already planning to build the transmission line connecting the installation to the power grid operated by Golden Valley Electric Association. Air Force officials said they did not want to commit during Industry Day to buying electricity or steam for heat from any potential lessee but instead told participants to include a power sale offer in their proposals. One company’s representative said that he believed the Air Force would opt to buy electricity at the least-cost rate by comparing the rate offered by Golden Valley Electric Association to the rate offered by the lessee. Therefore, any lessee other than Golden Valley Electric Association would have had to compete with Golden Valley Electric Association’s rate. This representative also believed that the Air Force would obtain heat from the lessee, because that would be less expensive than potentially using oil-fired generators to heat the composite area. Air Force officials provided a similar assessment. Disposal of coal ash: One company’s representative said that the treatment of the ash produced from the burning of coal was a concern. Due to an Air Force decision not to allow the potential lessee to use the landfill located on the installation that is primarily used for the disposal of the coal ash produced from burning coal, the lessee would have to find a solution offsite. 
The representative said that, based on the company’s operation of other coal plants, finding a solution to the disposal of coal ash is a difficult issue for the company. Air Force officials said that the Air Force’s decision was based on concerns about long-term risks and environmental concerns for the Air Force if it were to let the potential lessee use the landfill. Available alternatives to Clear Air Force Station plant: Air Force officials said that Golden Valley Electric Association had expressed interest in the plant in the past. However, in the interim the utility had proceeded to take the steps necessary to reopen a dormant power plant that would have more than twice the capacity of the Clear Air Force Station plant. The Golden Valley Electric Association representative we spoke with said that the company will invest heavily in capital improvements at that plant and did not know if the risk with the Clear Air Force Station plant would be worthwhile. In the feasibility study, the option to close the plant was the only option that included installing a new heat system. Under the other options considered in the feasibility study, the Air Force made the assumption that it would be able to obtain heat from a replacement plant or from the lessee or new owner. As discussed above, in response to OSD’s review of the enhanced-use lease, the Air Force revised its plans to include construction of a new heat system for the installation. As the U.S. Army Corps of Engineers conducted studies to further refine the design of the heat system, various technical issues emerged, leading to changes in the design. The Air Force first considered building a central heat system for the installation. 
Officials said that if the enhanced-use lease had succeeded, the Air Force would have negotiated with the lessee to purchase the steam from the plant and use it as the primary source of heat, using the new central heat system only in circumstances where it could not obtain heat from the lessee. The Air Force asked the U.S. Army Corps of Engineers to begin designing a heat system in April 2012, before it released its request for qualifications for the enhanced-use lease. Around this time, the Air Force asked the U.S. Army Corps of Engineers to conduct a study to evaluate the relative advantages of a central versus a decentralized heat system. The study found that a decentralized heat system would cost less than a centralized heat system. Air Force officials said that there was also concern about what type of fuel would be used for the heat system. The design of the heat system was put on hold for a month, in September 2012, while Air Force officials reviewed the completed study and finalized consensus on the preferred heating system. Some Air Force officials thought that whether the enhanced-use lease succeeded would affect the design of the heat system. Air Force officials decided to restart the design with a decentralized heat system in October 2012, prior to the December 2012 due date for the responses to the request for qualifications for the enhanced-use lease. Air Force officials said that they concluded from the August study that a decentralized heat plant would be preferable whether or not the enhanced-use lease succeeded. Air Force and Army officials told us that, as the Army conducted further studies on the decentralized heat system, several technical issues emerged with the design, including concerns regarding how to keep the water pipelines from freezing. Officials said that these technical issues, which were at times associated with high costs, affected the direction of the design for the heat system. In April 2013, the U.S. 
Army Corps of Engineers studied the costs of using three to four low-pressure steam heat plants, which would resolve the technical complication that had emerged. However, this revised design raised new concerns, such as the logistics of refueling the heat plants each day using small vehicles and the proximity of the heat plants to the buildings where personnel are located. As of February 2014, the U.S. Army Corps of Engineers had not completed the design of the heat system, but the currently preferred design is two medium-sized buildings, each containing three steam boilers. The third boiler in each building would serve as backup for the other two. Additionally, if one of the building’s boilers failed, Air Force officials said that the other building’s boilers would be able to supply enough heat for the entire composite area. Officials said that using this configuration would address the concerns raised by previous designs of the heat system, including fueling logistics and proximity to personnel. In written comments on a draft of our restricted report, the Air Force concurred with our observations. The Air Force noted that, overall, our report documents the extensive studies and analyses that the Air Force conducted. The Air Force noted that it was these studies and analyses that led to the Air Force’s ultimate decision to tie to the electrical grid, build supplemental heat plants, and eventually decommission the central heat and power plant. The Air Force stated that it concurred with the draft restricted report, with comments. These comments were technical in nature and were incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Air Force. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To determine the extent to which the Air Force has evaluated options for the Clear Air Force Station combined heat and power plant, we reviewed the documentation for the project, including the 2010 feasibility study, contract data, Department of Defense (DOD) and Air Force guidance, and the Air Force analyses used to document and support the service’s final determination for the plant, including the environmental assessment and subsequent finding of no significant impact for the tie-in to the local grid and construction of a new heat system. Specifically, we reviewed the Air Force’s guidance on economic analyses and business-case analyses and its enhanced-use lease playbook that is used to develop enhanced-use lease projects. We reviewed the documentation the Air Force provided us and talked with appropriate Air Force officials at Headquarters Air Force, Air Force Space Command, 21st Space Wing, and the Air Force Civil Engineer Center. We compared the Air Force’s documentation and actions against the guidelines provided in the Air Force’s guidance. We also reviewed the Air Force’s November 2010 feasibility study that assessed the estimated costs of maintaining the status quo at the Clear Air Force Station power plant against five options. We looked at the economic analyses for each option and the status quo and reviewed the calculations for the estimated costs provided. We assessed the study’s assumptions against Air Force guidance on economic analyses and business-case analyses. We discussed the studies, analyses, contracts, and other documentation with appropriate officials from Headquarters Air Force, Air Force Space Command, 21st Space Wing, Clear Air Force Station, the Air Force Civil Engineer Center, and the U.S. Army Corps of Engineers. 
We reviewed the Air Force’s analyses regarding its decision to close the power plant; however, we did not analyze all of the underlying data used to support those analyses. We also met with officials from Usibelli Coal Mine and Golden Valley Electric Association. Further, we spoke with Defense Logistics Agency–Energy officials about the existing coal contract as well as current and potential future contracts for other fuel sources, such as diesel. Finally, we interviewed Missile Defense Agency officials for information on their roles in the current decision and the potential effect of future radar upgrades on the installation’s energy needs. To determine what other options, if any, the Air Force considered before deciding on the alternative power source it selected, we reviewed the Air Force’s analyses on the options it considered, including the concept opportunity study, which first laid out some options for the plant, and the feasibility study. We also reviewed documentation related to additional analyses that were not included in those two studies. We spoke with appropriate officials from Headquarters Air Force, Air Force Space Command, 21st Space Wing, Clear Air Force Station, the Air Force Civil Engineer Center, and the U.S. Army Corps of Engineers regarding how the options for the plant were vetted and the factors that the Air Force took into account in its decision making. Additionally, we spoke with representatives of Doyon Utilities, Golden Valley Electric Association, and Aurora Energy and with plant employees regarding their perspectives on the enhanced-use lease process. Table 3 below identifies the organizations and offices that we contacted during our review. We conducted this performance audit from October 2013 through May 2014 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In August 2009, Air Force Space Command requested the establishment of a working group comprising personnel from Clear Air Force Station, the 21st Space Wing, the Air Force Civil Engineer Support Agency, the Air Force Real Property Agency, and the major command to develop a feasibility study for the plant. This team developed a set of five possible options for the plant that represented the highest-ranked and best known alternatives based on the experience and knowledge of the team. In May of 2010, the Air Force Civil Engineer Support Agency and the Air Force Real Property Agency conducted a joint visit to Clear Air Force Station in which team members performed a site survey of the installation’s water, waste treatment, and power plant; conducted a site-orientation visit of the plant; and interviewed representatives from the local utility, Golden Valley Electric Association. This site survey and site-orientation visit, along with additional market and technical research, became the basis for the feasibility study. The working group also agreed to use the Government Should Cost Estimate as the basis for developing cost models to analyze each of the five proposed operating models in the feasibility study. The feasibility study laid out five options, which were compared against the baseline costs, or status quo, of operating the plant. 
The company CH2M Hill, under contract to the Air Force, developed the status quo analysis, which was termed the Government Should Cost Estimate and which assumed that (1) the power plant would continue to be operated in the same manner as in 2009 and (2) the equipment, buildings, and inventory would be replaced with inventory similar to what is currently in place. The status quo identified estimated costs over the 50-year period of the analysis, including the following: Annual operation and maintenance costs of $8.87 million per year. These costs include the labor costs for Air Force civilian personnel to operate the plant, fuel costs, the cost of contracted maintenance services, and the cost associated with environmental permits. Power plant employees operate and maintain the plant equipment, whereas the base operating support contractor conducts basic maintenance of the installation’s buildings, including lights and ventilation. Annual general and administrative costs of $1.12 million per year. Costs for repairing and replacing power plant components over the 50-year period (termed “R&R” costs), which totaled $392.55 million. Costs for life-extension projects expected to occur in the initial 5 years of the analysis period (2010 through 2014), which totaled $22.73 million. These initial system deficiency correction items were identified by plant personnel as needing immediate attention and were documented as part of the CH2M Hill site survey. The repair and replacement and initial system deficiency corrections are intended to increase the life of the current plant until 2030, at which point the status quo estimate assumes that the Air Force will need to replace the existing plant at a cost of $254.99 million. At this point the plant would be 69 years old. The costs that make up the Government Should Cost Estimate are summarized in table 4 below, which represents the estimated costs of continuing to operate and maintain the existing plant for the next 50 years. 
These estimates are presented as net present value in 2010 funds. According to the feasibility study, a cost model analysis was developed to determine the Air Force’s capital investment requirements and the average costs of power generation for each of the plant options. The Air Force used the output from the cost models to conduct a comparative analysis of the options to determine the optimum path forward for the plant. In the comparative analysis, the economic metric used to evaluate the five options was net present value—the sum of all future cash outflows minus inflows discounted to 2010 dollars, calculated over a 50-year period. The Air Force used the 50-year time frame because it considered a 50-year lease to be most likely to be signed. Each option is evaluated over a 50-year period to be consistent with the status quo estimate. We reviewed the broad cost estimates presented for the five options and the status quo in the feasibility study and verified them against the costs presented in the supporting tables for each option. These costs from the Air Force’s summary table were presented in table 1.

In addition to the contact named above, Maria Storts (Assistant Director), Karyn Angulo, Michael Armes, James Ashley, Heather Krause, Ron La Due Lake, Joanne Landesman, Jon Ludwigson, Nadji Mehrzad, Anne Rhodes-Kline, Michael Shaughnessy, Amie Steele Lesser, and Weifei Zheng made key contributions to this report.
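The net-present-value metric used in the comparative analysis above can be illustrated with a short sketch. The 2.7 percent real discount rate and the simplified constant annual cash flow below are illustrative assumptions only; the feasibility study's actual discount rate and year-by-year cash flows are not reproduced here.

```python
# Sketch of the study's net-present-value metric: the sum of future cash
# outflows minus inflows, discounted to base-year (2010) dollars over a
# 50-year analysis period.

def npv(cash_flows, rate):
    """Discount annual net outflows; year 0 is the base year (2010)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Illustration: the status quo's recurring costs of $8.87 million (operation
# and maintenance) plus $1.12 million (general and administrative) per year
# over 50 years. The 2.7% rate is an assumed value for illustration, not a
# figure from the feasibility study.
annual_cost = 8.87 + 1.12  # $ millions per year
discounted = npv([annual_cost] * 50, 0.027)
print(f"Discounted 50-year recurring cost: ${discounted:.1f}M")
```

The study's full estimate would also discount the one-time repair-and-replacement, life-extension, and plant-replacement costs in the years they are expected to occur, which this sketch omits.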
Clear Air Force Station, located in the interior of Alaska where temperatures can drop as low as -60 degrees Fahrenheit, currently generates its own heat and power from a coal-fired combined heat and power plant. The station performs a critical radar mission for the Department of Defense, for which it is vital to have reliable sources of heat and power. Air Force Space Command has determined that the existing 50-year-old plant is operating inefficiently, and the Air Force plans to close the existing plant, after first connecting to the local power grid for electricity and constructing a new heat system for the administrative and residential areas of the installation. GAO was asked to review the Air Force's feasibility study and analyses of alternatives before the Air Force closes the plant. This report addresses (1) the extent to which the Air Force evaluated options regarding the Clear Air Force Station combined heat and power plant and (2) what other options, if any, the Air Force considered before deciding on the alternative power source it selected. GAO reviewed the feasibility study; Department of Defense and Air Force guidance; and other analyses, contract information, and documentation related to the power plant. GAO also issued a restricted version of this report, which includes additional details on some estimated costs. In written comments on a draft of the restricted report, the Air Force concurred with GAO's observations. The Air Force's decision to close the existing power plant at Clear Air Force Station is based, in part, on a 2010 study examining the feasibility of implementing alternative power sources at the installation in order to reduce operating costs while ensuring reliable power for the installation's mission. This study, along with other associated studies and analyses, initially led the Air Force to pursue leasing the plant to a private-sector entity or public utility.
When no lease proposals were submitted, the Air Force pursued the option to close the plant, finding that the estimated costs of closing it were significantly less than the estimated costs of continuing to operate and maintain it. GAO found that the Air Force generally followed its own guidance for preparing cost estimates and analyses of alternatives. However, in the plant-closure option considered in the feasibility study, some costs—such as labor costs for operating and maintaining the new heat system—were not fully developed. While it is unlikely that adding this information would have materially affected the final outcome, more fully developing those costs would have provided decision makers with more complete information and a better understanding when considering the proposed options. In addition to economic factors, several noneconomic goals significantly influenced the Air Force's decision concerning the power plant, including the goals of no longer operating and maintaining a power plant, reducing energy costs, and ensuring reliable power for current and future missions. The Air Force considered and evaluated several options for the plant's future before selecting the option to close the plant after first connecting to the local power grid and building a separate heat system. Officials said that they obtained ideas from stakeholders for the options they considered and evaluated in detail some of the options that looked more promising. Still other options were considered but were not fully evaluated because they did not generate as much savings or the Air Force did not consider them to be economically feasible. For example, the Air Force looked in detail at options for leasing the plant but did not fully assess the costs of more incremental options, such as retaining ownership of the plant but downscaling its operations. 
For the options that the Air Force evaluated in detail, it found that some generated significantly more savings than others and that some were not feasible from the Air Force's perspective.
VA’s mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. Over time, the use of IT has become increasingly crucial to the department’s efforts to provide such benefits and services. For example, the department relies on its systems for medical information and records for veterans, as well as for processing benefit claims, including compensation and pension and education benefits. In reporting on VA’s IT management over the past several years, we have highlighted challenges that the department has faced in achieving its “One VA” vision, including that information systems and services were highly decentralized and that its administrations controlled a majority of the IT budget. For example, we noted that, according to an October 2005 memorandum from the former CIO to the Secretary of Veterans Affairs, the CIO had direct control over only 3 percent of the department’s IT budget and 6 percent of the department’s IT personnel. In addition, in the department’s fiscal year 2006 IT budget request, the Veterans Health Administration was identified to receive 88 percent of the requested funding, while the department was identified to receive only 4 percent. We have previously pointed out that, given the department’s large IT funding and decentralized management structure, it was crucial for the CIO to ensure that well-established and integrated processes for leading, managing, and controlling investments were followed throughout the department. Further, a contractor’s assessment of VA’s IT organizational alignment, issued in February 2005, noted the lack of control for how and when money is spent.
The assessment found that project managers within the administrations were able to shift money as they wanted to build and operate individual projects. In addition, according to the assessment, the focus of department-level management was only on reporting expenditures to the Office of Management and Budget and Congress, rather than on managing these expenditures within the department. The department officially began its initiative to provide the CIO with greater authority over the department’s IT in October 2005. At that time, the Secretary of Veterans Affairs issued an executive decision memorandum that granted approval for the development of a new centralized management structure for the department. According to VA, its goals in moving to centralized management included having better overall fiscal discipline over the budget. In February 2007, the Secretary approved the department’s new management structure. In this new structure, the Assistant Secretary for Information and Technology serves as VA’s CIO and is supported by a principal deputy assistant secretary and five deputy assistant secretaries—senior leadership positions created to assist the CIO in overseeing functions such as cyber security, IT portfolio management, and systems development and operations. In April 2007, the Secretary approved a governance plan that is intended to enable the Office of Information and Technology, under the leadership of the CIO, to centralize its decision making. The plan describes the relationship between IT and departmental governance and the approach the department intends to take to enhance governance and realize more cost-effective use of IT resources and assets. The department also made permanent the transfer of its entire IT workforce under the CIO, consisting of approximately 6,000 personnel from the administrations. 
In June 2007, we reported on the department’s plans for realigning the management of its IT program and establishing centralized control of its IT budget within the Office of Information and Technology. We pointed out that the department’s realignment plans included elements of several factors that we identified as critical to a successful transition, but that additional actions could increase assurance that the realignment would be completed successfully. Specifically, we reported that the department had ensured commitment from its top leadership and that, among other critical actions, it was establishing a governance structure to manage resources. However, at that time, VA had not updated its strategic plan to reflect the new organization. In addition, we noted that the department had planned to take action by July 2008 to create the necessary management processes to realize a centralized IT management structure. In testimony before the House Veterans’ Affairs Committee last September, however, we pointed out that the department had not kept pace with its schedule for implementing the new management processes. As part of its IT realignment, VA has taken important steps toward a more disciplined approach to ensuring oversight of and accountability for the department’s IT budget and resources. Within the new centralized management structure, the CIO is responsible for ensuring that there are adequate controls over the department’s IT budget and for overseeing capital planning and execution. These responsibilities are consistent with the Clinger-Cohen Act of 1996, which requires federal agencies to develop processes for the selection, control, and evaluation of major systems initiatives. 
In this regard, the department has (1) designated organizations with specific roles and responsibilities for controlling the budget to report directly to the CIO; (2) implemented an IT governance structure that assigns budget oversight responsibilities to specific governance boards; (3) finalized an IT strategic plan to guide, manage, and implement its operations and investments; (4) completed multi-year budget guidance to improve management of its IT; and (5) initiated the implementation of critical management processes. However, while VA has taken these important steps toward establishing control of the department’s IT, it remains too early to assess their overall impact because most of the actions taken have only recently become operational or have not yet been fully implemented. Thus, their effectiveness in ensuring accountability for the resources and budget has not yet been clearly established. As one important step, two deputy assistant secretaries under the CIO have been assigned responsibility for managing and controlling different aspects of the IT budget. Specifically, the Deputy Assistant Secretary for Information Technology Enterprise Strategy, Policy, Plans, and Programs is responsible for development of the budget and the Deputy Assistant Secretary for Information Technology Resource Management is responsible for overseeing budget execution, which includes tracking actual expenditures against the budget. Initially, the deputy assistant secretaries have served as a conduit for information to be used by the governance boards. As a second step, the department has established and activated three governance boards to facilitate budget oversight and management of its investments. 
The Business Needs and Investment Board; the Planning, Architecture, Technology and Services Board; and the Information Technology Leadership Board have begun providing oversight to ensure that investments align with the department’s strategic plan and that business and budget requirements for ongoing and new initiatives meet user demands. One of the main functions of the boards is to designate funding according to the needs and requirements of the administrations and staff offices. Each board meets monthly, and sometimes more frequently, as the need arises during the budget development phase. The first involvement of the boards in VA’s budget process began with their participation in formulating the fiscal year 2009 budget. As part of the budget formulation process, in May 2007 the Business Needs and Investment Board conducted its first meeting in which it evaluated the list of business projects being proposed in the budget using the department’s Exhibit 300s for fiscal year 2009, and made departmentwide allocation recommendations. Then in June, these recommendations were passed on to the Planning, Architecture, Technology, and Services Board, which proposed a new structure for the fiscal year 2009 budget request. The recommended structure was to provide visibility to important initiatives and enable better communication of performance results and outcomes. In late June, based on input from the aforementioned boards, the Information Technology Leadership Board made recommendations to department decision makers for funding the major categories of IT projects. In July 2007, following its work on the fiscal year 2009 budget formulation, the boards then began monitoring fiscal year 2008 budget execution. 
However, according to Office of Information and Technology officials, with the governance boards’ first involvement in budget oversight having only recently begun (in May 2007), and with their activities to date being primarily focused on formulation of the fiscal year 2009 budget and execution of the fiscal year 2008 budget, none of the boards has yet been involved in all stages of the budget formulation and execution processes. Thus, they have not yet fully established their effectiveness in helping to ensure overall accountability for the department’s IT appropriations. In addition, the Office of Information and Technology has not yet standardized the criteria that the boards are to use in reviewing, selecting, and assessing investments. The criteria are planned to be completed by the end of fiscal year 2008 and to be used as part of the fiscal year 2010 budget discussions. Office of Information and Technology officials stated that, in response to operational experience with the 2009 budget formulation and 2008 budget execution, the department plans to further enhance the governance structure. For example, the Office of Information and Technology found that the boards’ responsibilities needed to be more clearly defined in the IT governance plan to avoid confusion in roles. That is, one board (the Business Needs and Investment Board) was involved in the budget formulation for fiscal year 2009, but budget formulation is also the responsibility of the Deputy Assistant Secretary for Information Technology Resource Management, who is not a member of this board. According to the Principal Deputy Assistant Secretary for Information and Technology, the department is planning to update its governance plan by September 2008 to include more specificity on the role of the governance boards in the department’s budget formulation process. Such an update could further improve the structure's effectiveness.
In addition, as part of improving the governance strategy, the department has set targets by which the Planning, Architecture, Technology, and Services Board is to review and make departmentwide recommendations for VA’s portfolio of investments. These targets call for the board to review major IT projects included in the fiscal year budgets. For example, the board is expected to review 10 percent for fiscal year 2008, 50 percent for fiscal year 2009, and 100 percent for fiscal year 2011. As a third step in establishing oversight, in December 2007, VA finalized an IT strategic plan to guide, manage, and implement its operations and investments. This plan (for fiscal years 2006-2011) aligns Office of Information and Technology goals, priorities, and initiatives with the priorities of the Secretary of Veterans Affairs, as identified in the VA strategic plan for fiscal years 2006-2011. In addition, within the plan, the IT strategic goals are aligned with the CIO’s IT priorities, as well as with specific initiatives and performance measures. This alignment frames the outcomes that IT executives and managers are expected to meet when delivering services and solutions to veterans and their dependents. Further, the plan includes a performance accountability matrix that highlights the alignment of the goals, priorities, initiatives, and performance measures, and an expanded version of the matrix designates specific entities within the Office of Information and Technology who are accountable for implementation of each initiative. The matrix also establishes goals and time lines through fiscal year 2011, which should enable VA to track progress and suggest midcourse corrections and sustain progress toward the realignment. As we previously reported, it is essential to establish and track implementation goals and establish a timeline to pinpoint performance shortfalls and gaps and suggest midcourse corrections. 
As a fourth step, the department has completed multi-year budget guidance to improve management of its IT portfolio. In December 2007, the CIO disseminated this guidance for the fiscal years 2010 through 2012 budgets. The purpose of the guidance is to provide general direction for proposing comprehensive multi-year IT planning proposals for centralized review and action. The process called for project managers to submit standardized concept papers and other review documentation in December 2007 for review in the January to March 2008 time frame, to decide which projects will be included in the fiscal year 2010 portfolio of IT projects. The new process is to add rigor and uniformity to the department’s investment approach and allow the investments to be consistently evaluated for alignment with the department’s strategic planning and priorities and the enterprise architecture. According to VA officials, this planning approach is expected to allow for reviewing proposals across the department and for identifying opportunities to maximize investments in IT. Nevertheless, although the multi-year programming guidance holds promise for obtaining better information for portfolio management, the guidance has not been fully implemented because it is applicable to future budgets (for fiscal years 2010 through 2012). As a result, it is too early to determine VA’s effectiveness in implementing this guidance, and ultimately, its impact on the department’s IT portfolio management. Finally, the department has begun developing new management processes to establish the CIO’s control over the IT budget. The department’s December 2007 IT strategic plan identifies three processes as high priorities for establishing the foundation of the budget functions: project management, portfolio management, and service level agreements. 
However, while the department had originally stated that its new management processes would be implemented by July 2008, the IT strategic plan indicates that key elements of these processes are not expected to be completed until at least fiscal year 2011. Specifically, the plan states that the project and portfolio management processes are to be completed by fiscal year 2011, and does not assign a completion date for the service level agreement process. As our previous report noted, it is crucial for the CIO to ensure that well-established and integrated processes are in place for leading, managing, and controlling VA’s IT resources. The absence of such processes increases the risk to the department’s ability to achieve a solid and sustainable management structure that ensures effective IT accountability and oversight. Appendix I provides a timeline of the various actions that the department has undertaken and planned for the realignment. In summary, while the department has made progress with implementing its centralized IT management approach, effective completion of its realignment and implementation of its improved processes is essential to ensuring that VA has a solid and sustainable approach to managing its IT investments. Because most of the actions taken by VA have only recently become operational, it is too early to assess their overall impact. Until the department carries out its plans to add rigor and uniformity to its investment approach and establishes a comprehensive set of improved management processes, the department may not achieve a sustainable and effective approach to managing its IT investments. Mr. Chairman and members of the Subcommittee, this concludes my statement. I would be pleased to respond to any questions that you may have at this time. For more information about this testimony, please contact Valerie C. Melvin at (202) 512-6304 or by e-mail at [email protected].
Key contributors to this testimony were Barbara Oliver, Assistant Director, Nancy Glover, David Hong, Scott Pettis, and J. Michael Resser.
The use of information technology (IT) is crucial to the Department of Veterans Affairs' (VA) mission to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation. In this regard, the department's fiscal year 2009 budget proposal includes about $2.4 billion to support IT development, operations, and maintenance. VA has, however, experienced challenges in managing its IT projects and initiatives, including cost overruns, schedule slippages, and performance problems. In an effort to confront these challenges, the department is undertaking a realignment to centralize its IT management structure. This testimony summarizes the department's actions to realign its management structure to provide greater authority and accountability over its IT budget and resources and the impact of these actions to date. In developing this testimony, GAO reviewed previous work on the department's realignment and related budget issues, analyzed pertinent documentation, and interviewed VA officials to determine the current status and impact of the department's efforts to centralize the management of its IT budget and operations. As part of its IT realignment, VA has taken important steps toward a more disciplined approach to ensuring oversight of and accountability for the department's IT budget and resources. For example, the department's chief information officer (CIO) now has responsibility for ensuring that there are controls over the budget and for overseeing all capital planning and execution, and has designated leadership to assist in overseeing functions such as portfolio management and IT operations. In addition, the department has established and activated three governance boards to facilitate budget oversight and management of its investments. 
Further, VA has approved an IT strategic plan that aligns with priorities identified in the department's strategic plan and has provided multi-year budget guidance to achieve a more disciplined approach for future budget formulation and execution. While these steps are critical to establishing control of the department's IT, it remains too early to assess their overall impact because most of the actions taken have only recently become operational or have not been fully implemented. Thus, their effectiveness in ensuring accountability for the resources and budget has not yet been clearly established. For example, according to Office of Information and Technology officials, the governance boards' first involvement in budget oversight only recently began (in May 2007) with activities to date focused primarily on formulation of the fiscal year 2009 budget and on execution of the fiscal year 2008 budget. Thus, none of the boards has yet been involved in all aspects of the budget formulation and execution processes and, as a result, their ability to help ensure overall accountability for the department's IT appropriations has not yet been fully established. In addition, because the multi-year programming guidance is applicable to future budgets (for fiscal years 2010 through 2012), it is too early to determine VA's effectiveness in implementing this guidance. Further, VA is in the initial stages of developing management processes that are critical to centralizing its control over the budget. However, while the department had originally stated that the processes would be implemented by July 2008, it now indicates that implementation across the department will not be completed until at least 2011. Until VA fully institutes its oversight measures and management processes, it risks not realizing their contributions to, and impact on, improved IT oversight and accountability within the department.
JWST is a large deployable, infrared-optimized space telescope intended to be the scientific successor to the aging Hubble Space Telescope. JWST is designed for a 5-year mission to find the first stars and trace the evolution of galaxies from their beginning to their current formation, and is intended to operate in an orbit approximately 1.5 million kilometers—or 1 million miles—from the Earth. With its 6.5-meter primary mirror, JWST will be able to operate at 100 times the sensitivity of the Hubble Space Telescope. A tennis-court-sized sunshield will protect the mirrors and instruments from the sun's heat to allow the JWST to look at very faint infrared sources. The Hubble Telescope operates primarily in the visible and ultraviolet regions of the electromagnetic spectrum. The observatory segment of JWST includes several major subsystems. These subsystems are being developed through a mixture of NASA, contractor, and international partner efforts. See figure 1. The Mid-Infrared Instrument (MIRI)—one of JWST's four instruments in the Integrated Science Instrument Module (ISIM)—requires a dedicated, interdependent two-stage cooler system designed to bring the optics to the required temperature of 6.7 Kelvin (K), just above absolute zero. This system is referred to as a cryocooler. See figure 2 for a depiction of the cooling system on JWST. The cryocooler moves helium gas through 10 meters (approximately 33 feet) of refrigerant lines from the sun-facing surface of the JWST observatory to the colder shaded side where the ISIM is located. According to NASA officials, a cooler system of this configuration, with so much separation between the beginning and final cooling components, has never been developed or flown in space before. 
Project officials stated that the MIRI cryocooler is particularly complex and challenging because of this relatively great distance between cooling components located in different temperature regions of the observatory and the need to overcome multiple sources of unwanted heat through the regions before the system can cool MIRI. Specifically, the cooling components span temperatures ranging from approximately 300K (about 80 degrees Fahrenheit, or room temperature) where the spacecraft is located on the sun-facing surface of the telescope to approximately 40K (about -388 degrees Fahrenheit) within the ISIM. Since entering development in 1999, JWST has experienced significant schedule delays and increases to project costs. Prior to being approved for development, cost estimates of the project originally ranged from $1 billion to $3.5 billion with expected launch dates ranging from 2007 to 2011. In March 2005, NASA increased the JWST’s life-cycle cost estimate to $4.5 billion and delayed the launch date to 2013. We reported in 2006 that the cost growth was due to a delay in launch vehicle selection, budget limitations in fiscal years 2006 and 2007, requirements changes, and an increase in the project’s reserve funding—funding used to mitigate issues that arise but which were previously unknown. In April 2006, an Independent Review Team confirmed that the project’s technical content was complete and sound, but expressed concern over the project’s reserve funding, reporting that it was too low and phased in too late in the development lifecycle. The review team reported that for a project as complex as JWST, a 25 to 30 percent total reserve funding was appropriate. The team cautioned that low reserve funding compromised the project’s ability to resolve issues, address risk areas, and accommodate unknown problems. The project was baselined in April 2009 with a life-cycle cost estimate of $4.964 billion—including additional cost reserves—and a launch date in June 2014. 
Shortly after JWST was approved for development and its cost and schedule estimates were baselined, project costs continued to increase and the schedule was extended. In response to a request from the Chair of the Senate Subcommittee on Commerce, Justice, Science, and Related Agencies to the NASA Administrator for an independent review of JWST—stemming from the project's cost increases and reports that the June 2014 launch date was in jeopardy—NASA commissioned the Independent Comprehensive Review Panel (ICRP). In October 2010, the ICRP issued its report and cited several reasons for the project's problems, including management, budgeting, oversight, governance and accountability, and communication issues. The panel concluded JWST was executing well from a technical standpoint, but that the baseline funding did not reflect the most probable cost with adequate reserves in each year of project execution, resulting in an unexecutable project. Following this review, the JWST program underwent a replan in September 2011 and was reauthorized by Congress in November 2011, which placed an $8 billion cap on the formulation and development costs for the project. On the basis of the replan, NASA announced that the project would be rebaselined with a life-cycle cost of $8.835 billion—a 78 percent increase—and would launch in October 2018—a delay of 52 months. The revised life-cycle cost estimate included 13 months of funded schedule reserve. In the President's Fiscal Year 2013 budget request, NASA reported a 66 percent joint cost and schedule confidence level associated with these estimates. A joint cost and schedule confidence level, or JCL, is the process NASA uses to assign a percentage to the probable success of meeting cost and schedule targets and is part of the project's estimating process. Figure 3 shows the original baseline schedule and the revised 2011 baseline for JWST. 
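As a hedged illustration of what a JCL analysis computes, the sketch below runs a small Monte Carlo simulation over hypothetical cost and schedule distributions and reports the fraction of trials that meet both targets. The triangular distribution parameters and the cost-schedule coupling factor are invented for illustration; they are not drawn from NASA's actual estimating model.

```python
import random

def jcl_estimate(cost_target, schedule_target, n_trials=100_000, seed=1):
    """Fraction of simulated outcomes meeting BOTH the cost target
    (in $ billions) and the schedule target (months from baseline).
    All distributions here are hypothetical placeholders."""
    random.seed(seed)
    hits = 0
    for _ in range(n_trials):
        # Hypothetical triangular distributions: (low, high, mode)
        cost = random.triangular(7.5, 10.0, 8.5)   # life-cycle cost, $B
        months = random.triangular(75, 100, 84)    # duration, months
        # Schedule slips feed back into cost (assumed $30M per slipped month)
        cost += max(0.0, months - 84) * 0.03
        if cost <= cost_target and months <= schedule_target:
            hits += 1
    return hits / n_trials

# Looser targets always yield a higher joint confidence level
low = jcl_estimate(8.0, 80)
high = jcl_estimate(9.5, 95)
```

Because raising either target can only increase the hit rate, a confidence level such as the reported 66 percent is only meaningful when quoted jointly with specific cost and schedule figures.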
As part of the replan in 2011, JWST was restructured and is now a single project program reporting directly to the NASA Associate Administrator for programmatic oversight and to the Associate Administrator for the Science Mission Directorate for technical and analysis support. Goddard Space Flight Center is the NASA center responsible for the management of JWST. See figure 4 for the current JWST organizational chart. In 2012, we reported on numerous technical challenges and risks the project was facing. For example, a combination of numerous instrument delays and leaks in the cryocooler’s bypass valves resulted in the use of 18 of ISIM’s 26 months of schedule reserve and the potential for more schedule reserve to be consumed. Additionally, we identified that the current JWST schedule reserve lacked flexibility for the last three integration and testing events (OTIS, the spacecraft, and observatory), planned for April 2016 through May 2018. While there was a total of 14 months of schedule reserve for all five integration and test events—when problems are more likely to be found—only 7 months were likely to be available for these last three efforts. We also reported that the spacecraft exceeded the mass limit for its launch vehicle and that project officials were concerned about the mass of JWST since the inception of the project because of the telescope’s size and limits of the launch vehicle. In addition to these technical challenges, we reported that the lack of detail in the summary schedule used for JWST’s JCL analysis during the 2011 replan prevented us from sufficiently understanding how risks were incorporated, calling into question the results of that analysis and, therefore, the reliability of the replanned cost estimate. 
In our December 2012 report, we made numerous recommendations focused on providing high-fidelity cost information for monitoring project progress, ensuring that technical risks and challenges were being effectively managed, and sustaining oversight. One recommendation was that the project should perform an updated integrated cost/schedule risk, or JCL, analysis. In addition, we recommended that the JWST project conduct a separate review to determine the readiness to conduct integration and test activities prior to the beginning of the OTIS and spacecraft integration and test efforts. NASA concurred with these two recommendations. The JWST project is generally executing to its September 2011 revised cost and schedule baseline. Through the administration's annual budget submissions, NASA has requested funding for JWST that is in line with the rebaseline plan and the project is maintaining 14 months of schedule reserve to its October 2018 launch date. Cumulative performance data from the prime contractor, which is responsible for more than 40 percent of JWST's remaining $2.76 billion in development costs, indicate that work is being accomplished on schedule and at the cost expected. Monthly cost and schedule metrics, however, indicate that this performance has been declining since early 2013. The JWST project is maintaining oversight established as part of the replan, for example, by continuing quarterly NASA and contractor management meetings and instituting a cost and schedule tracking tool for internal efforts. The project, however, is not planning to perform an updated integrated cost and schedule risk analysis, which would provide management and stakeholders with information to continually gauge progress against the baseline estimates. The JWST project is executing to the cost commitment agreed to during the September 2011 rebaseline. Since that time, NASA's funding requests for JWST have been consistent with the budget profile of the new cost rebaseline. 
For fiscal year 2013, the funding the project received—almost $628 million—matched the agency's budget request. In addition, the project has been able to absorb cost increases on various subsystems through the use of its cost reserves. Project officials remain confident that they can meet their commitments and stay within an $8 billion development cost cap recommended by congressional conferees, if funding is provided as agreed during the replan. Performance data from contractors show that planned work was generally being performed within expected costs, but performance has declined over the past year. The project collects earned value management (EVM) cost data from several of its major contractors and subcontractors. EVM data for Northrop Grumman—the project's prime contractor, which is responsible for more than 40 percent of the remaining development costs—indicates that, cumulatively since May 2011, planned work has been performed at the expected cost. This measure, known as the cumulative cost performance index (CPI), provides an indication of how a contractor has performed over an extended period of time. The CPI indicates that until June 2013 the contractor performed slightly more work for the cost incurred than what was expected. Recent monthly performance, however, has begun to lower the cumulative index. From December 2012 until June 2013, monthly CPI data, which gives an indication of current performance, show that the contractor has been accomplishing less work than planned for the cost incurred. See figure 5. Although several subsystems are experiencing positive performance, cost overruns on spacecraft-related development activities are contributing to this recent trend. For example, Northrop Grumman has reported negative performance within the spacecraft systems engineering and the electrical power subsystems activities for a 6-month period as of the end of June 2013. 
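The relationship between the monthly and cumulative indexes can be shown in a short sketch. The dollar figures below are invented for illustration and are not Northrop Grumman's actual EVM data; the point is only that a few weak months drag the cumulative index down slowly.

```python
def cpi(earned_value, actual_cost):
    """Cost performance index: earned value / actual cost.
    CPI > 1.0 means work is costing less than planned; < 1.0 means overrun."""
    return earned_value / actual_cost

# Hypothetical monthly earned value (EV) and actual cost (AC), in $M
monthly_ev = [50, 48, 52, 47]
monthly_ac = [48, 50, 55, 52]

# Monthly CPI reacts quickly; cumulative CPI moves slowly because
# each new month is diluted by all prior performance
monthly_cpi = [round(cpi(ev, ac), 2) for ev, ac in zip(monthly_ev, monthly_ac)]
cumulative_cpi = round(cpi(sum(monthly_ev), sum(monthly_ac)), 2)
print(monthly_cpi)      # [1.04, 0.96, 0.95, 0.9]
print(cumulative_cpi)   # 0.96
```

This mirrors the pattern described above: one strong early month, then several months below 1.0 that gradually pull the cumulative index under the favorable threshold.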
We calculate that this contract, which is approximately two-thirds complete, could experience a slight cost overrun based on current data. Northrop Grumman is using cost management reserves to offset the decline in performance, but the JWST project reports that Northrop Grumman is consuming cost reserves at a rate faster than planned. Contractor EVM cost data for ITT/Exelis—which is providing services related to the OTE and OTIS integration and test efforts—also indicate that in recent months the contractor has been accomplishing less work than planned for the cost incurred. ITT/Exelis has experienced cost overruns in each month from March through June 2013, which has lowered the cumulative CPI to 0.98. Project officials told us that ITT/Exelis has sufficient cost reserves to offset the recent cost overruns and that a cumulative CPI of 0.98 is within the range of acceptable performance. Best practices indicate that a CPI of 1.0 or above is favorable. We found small cost overruns across many elements of the work being performed by ITT/Exelis, similar to the analysis performed by the project. Based on our analysis of EVM data through the end of July 2013, we estimate that this contract could experience a small cost overrun. As of July 2013, ITT/Exelis had completed a little more than one-third of the planned work for this contract and used more than 44 percent of available management reserves from October 2012 to July 2013. In addition to the work being performed by contractors, the JWST project also performs development work internally at NASA's Goddard Space Flight Center. For example, the project internally manages the ISIM development effort that is expected to cost over $1 billion, which includes the first of five major integration and test efforts. The current estimated cost at completion for ISIM as calculated by the project has risen more than $109 million—a 9.8 percent increase—since the 2011 rebaseline of the project. 
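Projections of this kind are commonly made with the standard independent estimate-at-completion formula (budget at completion divided by cumulative CPI). The sketch below is illustrative only: the $1,000 million budget figure is a hypothetical placeholder, though the 0.98 CPI mirrors the value reported for ITT/Exelis.

```python
def estimate_at_completion(bac, cumulative_cpi):
    """Independent EAC: if current cost efficiency persists,
    projected total cost = budget at completion / cumulative CPI."""
    return bac / cumulative_cpi

bac = 1000.0   # hypothetical budget at completion, $M
eac = estimate_at_completion(bac, 0.98)
overrun = eac - bac
print(round(eac, 1), round(overrun, 1))   # 1020.4 20.4
```

A CPI only slightly below 1.0 therefore translates into a projected overrun of only a few percent, consistent with characterizing the contract's likely overrun as small.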
The cost overrun is primarily because of late instrument deliveries and is being accommodated through the use of project reserves. The JWST project is executing to the baseline schedule commitment agreed to during the September 2011 rebaseline. The JWST project continues to report 14 months of schedule reserve to its October 2018 launch date, pending a review of the need to use schedule reserve based on the impacts of the government shutdown in October 2013. See figure 6. We found in 2012 that the 7 months of schedule reserve held by the OTE subsystem would likely be used during its integration and test, prior to delivery to OTIS. If the OTE integration and test effort uses schedule reserve beyond those 7 months, it will reduce the amount of schedule reserve available for the last three integration and test efforts. Northrop Grumman officials said that the OTE integration and test effort is very sequential and does not offer much flexibility to allow for changes to the process flow. The integration and test of OTE must be complete for the OTIS integration effort to begin on schedule. In December 2013, the project indicated that the 14 months of total schedule reserve held by the project was being assessed due to delivery problems with portions of the observatory's sunshield and the impact of the government shutdown. Because of instrument and hardware delays and non-availability of a test chamber, the project now reports 7 months of schedule reserve associated with the ISIM integration and test effort before it is needed for integration with the OTE subsystem to form OTIS. Previously, the project reported that ISIM had almost 8 months of schedule reserve, which did not account for the delayed start of the first scheduled cryo-vacuum test—in which a test chamber is used to simulate the near-absolute zero temperatures in space. 
The current 7 months of schedule reserve for the ISIM integration and test effort does not include the impact of any potential delays due to the government shutdown in October 2013, which was still being determined in mid-December 2013. The first cryo-vacuum test was considered a risk reduction test by the project because it did not include two of the project's four instruments and was to test procedures and the ground support equipment to be used in later cryo-vacuum tests of ISIM. During the replan, this test was scheduled to begin in February 2013, but was delayed until August 2013 because of several issues, including availability of the test chamber and delays in development and delivery of a radiator for the harness that holds electrical wiring. Project officials said they will adjust the ISIM schedule to minimize the schedule impact by performing some activities concurrently, delaying some activities until after the first cryo-vacuum test, and removing some activities. They added that a recently approved September 2013 revision to the ISIM schedule only reduced schedule reserve by 1 week and no additional risk will be incurred based on these changes to the ISIM schedule. The two subsequent cryo-vacuum tests, however, have slipped up to 2 months in the latest revision to the ISIM schedule, although project officials state that the April 2016 completion date for ISIM testing and delivery to the OTIS integration and test effort remains unchanged. According to the JWST program manager, however, the first cryo-vacuum test was in process when the government shutdown happened and, although many of the testing goals were accomplished through prioritization of test activities, the test was terminated once the ISIM staff resumed work and some activities were not accomplished. As a result, he said that the project would incur more risk in the second cryo-vacuum test that is currently scheduled to start in April 2014. 
In addition to maintaining up to 14 months of schedule reserve, the project is generally meeting the milestones it reports to Congress and other external entities. See table 1. These milestones include technical reviews prior to the spacecraft critical design review, hardware tests, and the delivery of key pieces of hardware. As shown in the table, the project has completed the majority of its milestones as planned and has deferred six milestones in the past 2 fiscal years. Among the deferred milestones are delays to completion of the first ISIM cryo-vacuum test and delivery of flight hardware for the MIRI instrument cryocooler. EVM schedule data for Northrop Grumman indicates that the cumulative planned work since the new schedule estimate was agreed upon is being performed as expected. This measure, known as the cumulative schedule performance index (SPI), shows consistent performance at the aggregate level for the past year. However, monthly SPI metrics indicate a slight decline in performance in 9 of the 12 months between August 2012 and July 2013. See figure 7. The data from Northrop Grumman in recent months indicates that work is slightly behind schedule for the spacecraft subsystem. The JWST project has maintained the oversight activities put in place following the replan and added additional oversight mechanisms. For example, some of the oversight activities implemented as part of the 2011 replan that are still ongoing include the following:
- The JWST Program Director is holding monthly meetings with the project,
- The JWST Program Director is holding quarterly meetings with Northrop Grumman senior management and the Goddard Space Flight Center Director, and
- The JWST Project Spacecraft Manager has relocated to provide an on-site presence at the Northrop Grumman facility.
The project also has implemented some new oversight mechanisms since the time of our last review in 2012, according to JWST officials. 
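The SPI is computed the same way as the CPI, but against planned rather than actual cost. The figures below are invented to illustrate how small monthly dips can leave the cumulative index essentially flat, which is the pattern described above for Northrop Grumman.

```python
def spi(earned_value, planned_value):
    """Schedule performance index: earned value / planned value.
    SPI < 1.0 means less work has been completed than was scheduled."""
    return earned_value / planned_value

# Hypothetical monthly earned value (EV) and planned value (PV), in $M
ev = [40, 40, 41, 40]
pv = [40, 40, 42, 41]

monthly_spi = [round(spi(e, p), 2) for e, p in zip(ev, pv)]
cumulative_spi = round(spi(sum(ev), sum(pv)), 2)
print(monthly_spi)      # [1.0, 1.0, 0.98, 0.98]
print(cumulative_spi)   # 0.99
```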
For example, the project is implementing a tool to continually update the cost estimate for the internal work on the ISIM development activities. In addition, the project is working with the Space Telescope Science Institute to design a tool, similar to EVM, to monitor progress on ground systems development. The project also has added a financial analyst at the Northrop Grumman facility to provide the spacecraft manager and the project ongoing and increased financial insight into the work being performed by Northrop Grumman and to analyze monthly data prior to the monthly project business meetings with the contractor. In response to our prior recommendation, the project has modified its schedule to add an independent review prior to the beginning of the OTIS and spacecraft integration and test efforts. Despite these improvements in oversight, JWST project officials said that they are not planning to perform an updated integrated cost/schedule risk analysis—or joint cost and schedule confidence level (JCL) analysis—as we recommended in 2012. GAO's cost estimating best practices call for a risk analysis and risk simulation exercise—like the JCL analysis—to be conducted periodically through the life of a program, as risks can materialize or change throughout the life of a project. Unless properly updated on a regular basis, the cost estimate cannot provide decision makers and stakeholders with accurate information to assess the current status of the project. As we recommended in 2012, updating the project's JCL would provide high-fidelity cost information for monitoring project progress. While NASA concurred with our recommendation, project officials have subsequently stated that they do not plan to conduct an updated JCL. A program official stated that the project performs monthly integrated programmatic and cost/schedule risk analyses using various tools and that the information that these tools provide is adequate for their needs. 
For example, the JWST project conducts ongoing risk identification, assigning probability and dollar values to the risks, tracks actual costs against planned costs to assess the viability of current estimates, uses earned value management, and performs schedule analyses. Moreover, while the JWST program manager acknowledged that NASA concurred with our recommendation, he said that the agency interpreted the recommendation to mean that these lower-level analyses would be sufficient in place of an updated JCL. NASA, however, has not addressed the shortcomings of the schedule that supports the baseline itself. For example, we found that the lack of detail in the summary schedule used for JWST's last JCL in May 2011 prevented us from sufficiently understanding how risks were incorporated, therefore calling into question the results of that analysis. Since the JCL was a key input to the decision process of approving the project's new cost and schedule baseline estimates, we maintain that the JWST project should perform an updated JCL analysis using a schedule that should now be much more refined and accurate, with sufficient detail to map risks to activities and costs, in addition to the other analyses they currently perform. Doing so could help increase the reliability of the cost estimate and the confidence level of the JCL. Furthermore, risk management is a continuous process that constantly monitors a project's health. The JWST project is still executing to a plan that was based on the JCL performed in May 2011. The risks the project is currently facing are different from those identified during the JCL process more than 2 years ago, and will likely continue to evolve as JWST is still many years from launch. The JWST project has made progress in addressing some technical risks; however, other technical challenges exist that have caused development delays and cost increases at the subsystem level. 
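One common form of the probability-and-dollar-value analysis the officials describe is an expected-exposure calculation over the risk register. The register entries below are hypothetical, not actual JWST risks; the sketch only illustrates the arithmetic.

```python
# Hypothetical risk register: (risk, probability, cost impact in $M)
risks = [
    ("component rework after test failure", 0.4, 12.0),
    ("late hardware delivery",              0.3,  6.0),
    ("test facility unavailable",           0.2,  3.0),
]

# Probability-weighted exposure is one way to size cost reserves
# against identified risks (it ignores correlation between risks)
exposure = sum(prob * impact for _, prob, impact in risks)
print(round(exposure, 1))   # 7.2
```

Note that expected exposure understates the cost of any single risk that actually materializes: a 40 percent chance of a $12 million impact contributes only $4.8 million to the total, which is why a realized risk can require far more than reserves sized this way.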
The project and its contractors have nearly addressed a problematic valve issue in the MIRI cryocooler that has been a concern for several years, the OTE and ISIM development efforts have made progress over the past year, and both the project and contractors have remedied the spacecraft mass issue that we reported on last year. The project has other technical issues, however, that still need to be resolved. For example, there is a separate and significant performance issue with the cryocooler and though project officials state that they understand the issue, the subcontractor is still working to validate the changes made to the cryocooler to address the issue. These issues with the cryocooler have led to an increase of about 120 percent in cryocooler contract costs and the execution of the remaining cryocooler effort will be challenging. In addition, the OTE and ISIM efforts are still addressing risks that threaten their schedules. Despite progress in some areas, the cryocooler development effort has been and remains a technical challenge for the project. The cryocooler subcontractor has addressed much of the valve leak issue that we reported on in 2012, and all but the last of the replacement valves, which were produced with new seal materials, have successfully completed testing. While resolution of this issue will be a positive step for the project, other, still unresolved issues with the cryocooler have arisen that have required additional cost and schedule resources to address. Specifically, a key component of the cryocooler underperformed prior tests of this technology by about 30 percent. In addition, both the Jet Propulsion Laboratory (JPL)—which awarded the cryocooler subcontract—and the subcontractor were focused on addressing the valve issue, which limited their attention to the cooling underperformance issue. In late 2012, the cryocooler subcontractor reported that it would be unable to meet the cryocooler schedule. 
The subcontractor is working toward a revised test schedule, agreed upon in April 2013, which delays acceptance testing and includes concurrent testing of hardware. In August 2013, the cryocooler subcontract was modified to reflect a 69 percent cost increase. Additionally, the number of subcontractor staff assigned to the cryocooler subcontract has increased from 40 to approximately 110, which accounts for a significant portion of the cost increase. This was the second time in less than 2 years that the cryocooler subcontract was modified. Cumulatively, the cryocooler subcontract value has increased by about 120 percent from March 2012. Various issues may have contributed to the current problems with the cryocooler. For example, according to project and JPL officials, they had not verified the cryocooler cost and schedule estimates provided by the subcontractor prior to the project establishing new baseline cost and schedule estimates in 2011. Doing so may have allowed them to ensure adequate resources were accounted for in the new baseline estimates. JPL officials stated that the subcontractor proposal was verified prior to the completion of the March 2012 cryocooler replan. The subcontractor, however, reported that the 2012 replan did not include cost or schedule allowance for rework should additional problems arise, which did happen. In addition, despite erratic and negative EVM data from the subcontractor immediately following the March 2012 cryocooler replan, an in-depth review was not initiated until 9 months later by the cryocooler subcontractor. JPL officials stated that, during this time, they were performing analysis of the EVM data and the technical progress of the subcontractor and provided the results of their analysis to the project. Finally, the project had not followed key best practices since early in development, which left it at an increased risk of cost and schedule delays. 
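As a back-of-envelope check on how the two contract modifications relate, successive percentage increases compound multiplicatively rather than adding. The implied size of the earlier modification below is our own arithmetic inference from the 69 percent and roughly 120 percent figures, not a number stated by NASA or the subcontractor.

```python
def compound_increase(*increases):
    """Cumulative growth from successive fractional increases:
    (1 + a) * (1 + b) * ... - 1, not simply a + b."""
    factor = 1.0
    for inc in increases:
        factor *= 1.0 + inc
    return factor - 1.0

# If the cumulative increase since March 2012 is about 120% and the
# August 2013 modification added 69%, the earlier modification
# must have added roughly 30%:
implied_first = (1.0 + 1.20) / (1.0 + 0.69) - 1.0
print(f"{implied_first:.2f}")                               # 0.30
print(f"{compound_increase(implied_first, 0.69):.2f}")      # 1.20
```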
For example, best practices call for testing of a model or prototype of a critical technology in its flight-like form, fit, and function and in a simulated realistic or high fidelity lab environment by its preliminary design review. While the subcontractor tested a demonstration model of the cryocooler in such an environment and the project assessed the technology as mature in 2008, a project official acknowledges that the demonstration model’s mechanical design was different than what would be used in space and, according to that official, those differences led to the loss of performance between the demonstration model and the current cryocooler. In addition, only 60 percent of the cryocooler’s expected design drawings were released as of the mission critical design review—well below the best practice standard of 90 percent drawings released by critical design review—indicating that the project moved forward without a stable cryocooler design as well as an immature cryocooler technology, which increases risk. The execution of the remaining cryocooler schedule will continue to be challenging as the performance issue is not resolved, the revised schedule is optimistic, the subcontractor has identified significant risks not incorporated in the rebaseline, and there are risks associated with the revised testing approach. The cryocooler subcontractor has developed a separate verification model, which is now being used to validate that the cryocooler redesign will address the underperformance. This step is important because, according to the cryocooler subcontractor program manager, the internal structures of the cryocooler component are intricate and once a unit is completed the internal structure cannot be modified. Thus, when issues arise, such as use of incorrect parts or unexpected underperformance, a new unit must be built rather than simply changing parts on the underperforming cryocooler component. 
Testing of the verification model, which will give an indication of whether the performance issue has been rectified and a new flight model can be built, was scheduled to be complete in October 2013, but has been delayed. The subcontractor project manager reports that issues were found with processes used to assemble the verification model that must be resolved before testing resumes, which is not expected until at least late December 2013. This delay may reduce the amount of schedule margin available to the overall cryocooler effort. The cryocooler schedule—agreed upon in April 2013—was optimistic, according to the cryocooler subcontractor program manager. Shortly after the new schedule was put in place, he told us that he had low confidence that the subcontractor would be able to meet this schedule based on the development issues mentioned above. In addition, the JPL scheduler for the cryocooler said that he had only moderate confidence of the subcontractor’s ability to meet this schedule. In line with their concerns, the cryocooler subcontractor recently depleted all of its schedule reserve for deliveries to JPL prior to the start of acceptance testing. The cryocooler subcontractor also identified other risks that could impact its execution of the subcontract, but that were not included as part of the rebaseline plan in the modified subcontract. The project retained financial responsibility for addressing those risks, should they arise, at the project level by identifying over $8 million in cost reserves in fiscal years 2014 and 2015. However, some of these risks could require significantly more than $8 million to address. For example, the cryocooler subcontractor program manager stated that some of these risks, if realized, could take a year to mitigate. 
As of September 2013, delivery dates agreed to in April 2013 for all of the major flight and spare cryocooler components had been delayed, all six weeks of schedule reserve held at the cryocooler subcontractor had been exhausted, and the start of acceptance testing at JPL had been delayed. Any further delays will have to be accommodated through the use of 12 weeks of schedule reserve held by JPL. The cryocooler subcontractor also recently began reporting EVM data based on the latest cost and schedule estimates and, in line with the delays mentioned above, these data already show that work is costing more and taking longer than planned. JPL’s schedule reserve also has to support any issues that arise during acceptance and end-to-end testing of the cryocooler hardware prior to delivery to the spacecraft integration and test effort. In an effort to reduce this risk, the project reordered the integration and test schedule. This removed some, but not all, of the cryocooler component testing schedule risk, which may limit the project’s ability to address issues that arise during component testing. Specifically, two major spare components of the cryocooler will still be in acceptance testing when spacecraft integration and test begins in April 2016, which is also a risk to the spacecraft integration and test schedule. For example, if a particular cryocooler component fails during one test and a spare component is still undergoing acceptance testing, then the test schedule may be delayed waiting for repairs to be made to the component or for the spare component to be available. Northrop Grumman has made progress on the OTE, but the project expects the contractor to use its current schedule reserve and the OTE is facing risks that may impact the schedule if they are realized. Progress has been made over the past year in fabricating the OTE support structure, which holds the mirrors and ISIM and connects all the pieces of the observatory.
Specifically, all of the support structure sections have been completed and fully integrated and the structure has entered cryovacuum testing. The project is tracking an issue with release mechanisms holding the spacecraft and the OTE together while stowed within the launch vehicle and used during the deployment of the telescope after launch. Currently the mechanisms are causing excessive shock vibration when released. According to a NASA official, the project and the contractor are evaluating potential solutions which include changes to the design of the release mechanism, using damping materials to lessen the impact to the spacecraft, and testing to see if the shock requirement can be relaxed. The project has delayed the release mechanism design review until January 2014—after the spacecraft critical design review—while it works to mitigate the issue with contractors. Project officials stated the results of this component level design review will be evaluated prior to a larger mission review to be held later in 2014. In December 2013, the project was also assessing the possibility that portions of the observatory’s sunshield may be delivered up to 3 months late, which could impact the amount of schedule reserve being held by the project. The project indicates that it is considering options by the contractor to recover some of that potential schedule delay. The project has made progress on various portions of the ISIM as well. For example, two of the four instruments have been integrated into the ISIM for testing and fabrication of replacement near infrared detectors used in three of the four instruments—which we reported in 2012 may need to be replaced—is ahead of schedule. Prior schedule conflicts with another NASA project, however, delayed the start of the ISIM integration and test effort and instrument and component delays are further threatening the ISIM integration and test schedule which may lead to additional cost increases. 
The project has already replanned the ISIM schedule flow due, in part, to delays with the Near-Infrared Camera (NIRCam) and Near-Infrared Spectrograph (NIRSpec) instruments. Specifically, the NIRSpec instrument and NIRCam’s optics were delivered more than a year behind schedule. NIRSpec completed environmental testing and was delivered to Goddard in late September 2013. An electronics component of the NIRCam instrument, however, failed functional testing following a vibration test possibly due to manufacturing defects. The contractor has developed an approach to screen similar components to verify whether those components have similar anomalies. If the components pass the screening process, then environmental testing will continue with a spare in place of the component that malfunctioned. If all of the components show similar anomalies, they will be restricted from vibration tests and used in other testing until replacement components are ready. This issue may impact the already delayed start of the second and third ISIM cryo-vacuum tests, which would further compress the ISIM integration and test schedule or require the project to use some of ISIM’s schedule reserve. Because the ISIM schedule has already been compressed, the project will have less flexibility should any issues or delays arise during this effort. The project is covering the current ISIM-related cost increase—9.8 percent—primarily with funding reserves. Extending the length of time needed to conduct the ISIM integration and test effort, should there be further delays, would require maintaining test personnel and facilities longer than planned, which may lead to further cost increases. Northrop Grumman has successfully addressed the spacecraft mass issue that we reported on in 2012 and project officials state that they are comfortable with the observatory mass margin as the project heads into multiple major integration and test efforts, despite the mass margin being lower than Goddard standards.
In December 2012, we reported that the spacecraft was more than 200 kilograms over its mass allocation. In November 2013, Northrop Grumman officials stated that the spacecraft was under its mass allocation at that time. Since December 2011, both the contractor and the project made mass reduction a priority and the contractor currently has margin available to address future issues that may require additional mass to solve. The project’s current overall mass margin is approximately 7.7 percent, which does not include 90 kilograms of additional mass allocation the project received in 2013 from the launch vehicle provider. This is lower than the Goddard standard of 15 percent mass margin at this phase of development. According to project officials, they applied the Goddard standard at the subsystem level rather than at the observatory level due to JWST’s complexity, which allowed them to maintain a lower overall observatory mass margin. They added that the observatory and its component elements have an acceptable amount of mass margin as the project enters its major integration and test efforts and, while they will maintain standard mass controls to avoid unnecessary growth, they do not expect mass margins to be a significant concern going forward. We plan to continue to monitor mass margin in future reviews as the project proceeds through integration and test efforts. Several current near-term funding constraints such as low cost reserves, a higher-than-expected rate of spending, and potential sequestration impacts are putting at risk NASA’s ability to meet its cost and schedule commitments for JWST. In September 2013, project officials reported that while they are making good technical progress, the level of cost reserves held by the project in fiscal year 2014 had become the top issue facing the project and may require them to defer future work. 
Although not currently identified as an issue by the project, a significant portion of fiscal year 2015 project-held cost reserves has also already been allocated. This does not take into account reserves held by the JWST program at NASA headquarters in fiscal years 2014 and 2015 that can be used to supplement reserves held by the project. However, fiscal year 2014 program reserves are minimal compared to future years. As of September 2013, the project has allocated approximately 60 and 42 percent of its reserves in fiscal years 2014 and 2015, respectively. See figure 8. The need to allocate a significant portion of cost reserves in fiscal years 2014 and 2015 has been driven primarily by the technical issues with the MIRI cryocooler. Specifically, the subcontract modification resulting from the cryocooler replan required the allocation of over $25 million of cost reserves in fiscal years 2014 and 2015. After allocation of these cost reserves, the project began tracking the risk of low fiscal year 2014 cost reserves. Project officials report that the project’s low reserve posture in fiscal year 2014 may require them to defer work to future years. Specifically, because the project continues to maintain 14 months of funded schedule reserve, it may begin using some of that schedule reserve to conduct work later or allow work to take longer than planned. There are risks associated with this approach, however. For example, prior to the project’s replan in 2011, low cost reserves and technical challenges forced project management to defer planned work into future years. This ultimately led to increased costs for the deferred work and a schedule that was unsustainable. Much of the remaining work on JWST involves the five major integration and test efforts—which began in fiscal year 2011—during which work is often sequential in nature and cost and schedule growth typically occurs.
Depleting schedule reserve now could impact project officials’ ability to address technical risks or challenges not currently identified or realized, but that will likely arise during this phase. Project officials said that they would like to strike a balance between using remaining cost reserves and having to utilize schedule margin to complete planned work and address currently unknown technical challenges, but their goal is to use as little schedule margin as possible in fiscal year 2014. Northrop Grumman has also identified issues with the adequacy of its cost management reserves in fiscal year 2014. The project shares this concern given that Northrop Grumman’s cost reserves are eroding faster than anticipated. As of October 2012, the contractor held more than $244 million in cost management reserves for the remainder of the contract, but has used almost 24 percent of those management reserves since then. The approximately $185 million in cost management reserves Northrop Grumman has available as of September 2013 represents the total amount of reserves available through the remainder of the contract—almost 6 years—and not how much is available for use specifically for fiscal year 2014. The contract modification for the 2011 replan was signed in December 2013 and, according to the Northrop Grumman program manager, the amount of management reserve available will likely increase by more than $45 million once budget distributions are completed by the end of January 2014. In June 2013, Northrop Grumman had identified up to $80 million in potential risks for fiscal year 2014. Project officials said that Northrop Grumman will sometimes fund new contract requirements for future fiscal years with current year cost reserves. These officials added that they are in the process of determining whether the rate Northrop Grumman is spending cost reserves is a result of additional requirements or because of performance issues.
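The reserve erosion described above can be checked with a short burn-rate calculation. The sketch below uses the rounded dollar figures reported in the text ($244 million held in October 2012, roughly $185 million left in September 2013); the elapsed-time figure and the runway projection are our own illustrative assumptions, not NASA or contractor analysis.

```python
# Sketch of the management reserve burn-rate arithmetic discussed above.
# Dollar figures are the rounded amounts from the text; the constant-rate
# runway projection is an illustration only.

start_reserves = 244.0    # $M held in October 2012
current_reserves = 185.0  # $M remaining as of September 2013
months_elapsed = 11       # October 2012 through September 2013

used = start_reserves - current_reserves
print(round(used / start_reserves * 100))        # 24 (percent consumed)

burn_per_month = used / months_elapsed
print(round(current_reserves / burn_per_month))  # 34 (months of runway at that rate)
```

At roughly $5.4 million per month, the remaining reserves would last about 34 months at a constant rate, well short of the almost 6 years remaining on the contract, which is the concern the project raised.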
According to JWST project analysts, Northrop Grumman cost management reserves also remain a challenge in fiscal year 2015 when compared to the potential threats. The JWST project manager said that the project could rephase some planned Northrop Grumman cost management reserves from future years to fiscal year 2014 instead, but that would require the project to use some of its fiscal year 2014 cost reserves, which as noted are already constrained. As noted earlier, the JWST Program at NASA headquarters maintains another set of cost reserves that could be used to help in situations such as this, but the bulk of these reserves will not be available until fiscal year 2015. The project’s rate of spending in fiscal year 2013 could also be a significant issue if it continues into fiscal year 2014 and officials have begun tracking the rate of spending as a risk. The project spent approximately $40 million more than planned in fiscal year 2013. According to program officials, the amount of this overage is becoming significant not because of a lack of funds in fiscal year 2013, but because the fiscal year 2014 budget and project cost reserves are constrained. Project officials said that they planned to carry over funding from fiscal year 2013 to support approximately 2½ months of work to help fund contracts and ensure continued operations during a potential continuing resolution or other periods of funding uncertainty. If the project were to receive its full funding allocation for fiscal year 2014 at the level planned, this 2013 money would supplement the money available to the project in 2014. But if the current rate of spending is sustained, the project would only carry over enough 2013 money to fund the project for about 7 to 8 weeks into fiscal year 2014. The lower amount of funding carried over will also cause the project to have less available to supplement shortfalls in future years. 
For example, the JWST program manager told us that Northrop Grumman has requested more funding in fiscal year 2014 than the amount planned. Program officials noted that if the project continues to spend in fiscal year 2014 at a rate experienced during the latter part of fiscal year 2013, it may not be able to carry any funds into fiscal year 2015 as planned. Project officials, however, indicate that they are confident that they will carry over funds into fiscal year 2015. Our review of the data found that the project’s increased spend rate in fiscal year 2013 is due mainly to additional resources necessary for the ISIM because of late hardware deliveries, the cryocooler effort, and the Northrop Grumman effort to prepare for the spacecraft critical design review in January 2014. NASA’s ability to remedy these issues will likely be significantly hindered by the potential impacts from sequestration and competing demands from other major projects. For example, while NASA officials report that the agency was able to absorb the sequestration-related reductions in fiscal year 2013 with relatively little impact on its major projects, including JWST, they indicate that the agency cannot sustain all of its long-term funding commitments at sequester levels in fiscal year 2014 and beyond. Importantly, the JWST project recently began tracking a risk for the budget uncertainty due to sequestration. The risk outlines that there is a potential cut to the JWST budget starting in fiscal year 2014, which could adversely affect the execution of the project’s current plan and potentially jeopardize the October 2018 launch date. The program office indicates that NASA headquarters directed JWST to plan for its fiscal year 2014 budget to be consistent with the replan. This direction by NASA could have an impact on other major NASA projects.
In interviews for several other major NASA projects, officials informed us that they have less than adequate funding in fiscal year 2014 and some have requested that the agency rephase funds from later years to fiscal year 2014 to address the issue. If additional funds are required and prioritized for JWST, there could be a potentially significant impact on these and other projects within the agency that are already reporting funding issues in fiscal year 2014. The reliability of the JWST integrated master schedule is questionable because some of the 23 subordinate schedules synthesized to create it are lacking in one or more characteristics of a reliable schedule. Schedule quality weaknesses in the JWST subsystem schedules transfer to the integrated master schedule. We found a similar result this year consistent with our analysis in 2012 in which weaknesses in the two subsystem schedules we analyzed undermined the reliability of the integrated master schedule. According to scheduling best practices, the success of a program depends in part on having an integrated and reliable master schedule that defines when work will occur, how long it will take, and how each activity is related to the others that come both before and after it. If the schedule is dynamic, planned activities within the schedule will be affected by changes that may occur during a program’s development. For example, if the date of one activity changes, the dates of its related activities will also change in response. The master schedule will be able to identify the consequences of changes and alert managers so they can determine the best response. The government project management office, in this case the JWST project office at Goddard Space Flight Center, is ultimately responsible for the integrated master schedule’s development and maintenance. 
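The dynamic schedule behavior described above, in which the dates of linked activities shift when a predecessor moves and a critical path and total float fall out of the network logic, can be illustrated with a minimal critical-path-method sketch. The four-activity network below is invented for illustration and is not drawn from any JWST schedule.

```python
# Minimal critical-path-method (CPM) sketch: a forward pass computes early
# dates, a backward pass computes late dates, and total float is the gap
# between them. Activities with zero total float form the critical path.

def cpm(durations, preds):
    """Return (total_float, critical_path_set) for a small activity network.

    durations: {activity: duration}; keys must be in topological order.
    preds: {activity: [predecessor activities]}
    """
    order = list(durations)
    early_finish = {}
    for a in order:                      # forward pass
        early_start = max((early_finish[p] for p in preds[a]), default=0)
        early_finish[a] = early_start + durations[a]
    project_end = max(early_finish.values())
    late_start = {}
    for a in reversed(order):            # backward pass
        succs = [s for s in order if a in preds[s]]
        late_finish = min((late_start[s] for s in succs), default=project_end)
        late_start[a] = late_finish - durations[a]
    total_float = {a: late_start[a] - (early_finish[a] - durations[a])
                   for a in order}
    return total_float, {a for a in order if total_float[a] == 0}

# Invented network: A feeds B and C, which both feed D (durations in weeks)
durations = {"A": 5, "B": 3, "C": 7, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
total_float, critical = cpm(durations, preds)
print(total_float)        # {'A': 0, 'B': 4, 'C': 0, 'D': 0}
print(sorted(critical))   # ['A', 'C', 'D']
```

Here activity B can slip 4 weeks without delaying the finish, while any slip on A, C, or D moves the project end date, which is why unrealistic float estimates, as discussed below for the OTE and cryocooler schedules, produce an invalid critical path.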
The quality and reliability of three selected subsystem schedules we examined for this review—ISIM, OTE, and cryocooler—were inconsistent in following the characteristics of high-quality, reliable schedules. Using the 10 best practices for schedules, we individually scored and evaluated the schedules for these subsystems. We then grouped the best practices into one of four characteristics: comprehensive, well-constructed, credible, and controlled. The individual best practice scores within each characteristic were then combined to determine the final score for each characteristic. See appendix III for more detailed information on each characteristic and its corresponding best practices. The ISIM and OTE schedules had more strengths than weaknesses, substantially meeting three of four characteristics of a reliable schedule. The cryocooler schedule demonstrated weaknesses in both of the characteristics we examined. We selected these three subordinate schedules because they represent the significant portion of ongoing work for the project and reflect work by the project, the prime contractor, and a subcontractor. Table 2 identifies the results of each of the selected JWST subordinate schedules and their corresponding best practice subscores. Of the four characteristics of a reliable schedule that we assessed for the ISIM schedule, we found that three substantially met the criteria—comprehensive, well-constructed, and controlled—while the credible characteristic was partially met. The strengths of the ISIM schedule were that it captured all activities in manageable durations with their proper sequence, identified the longest continuous sequence of activities in the schedule, known as its critical path, and estimated reasonable amounts of total float, defined as the time activities can slip before delaying key delivery dates. NASA also maintains a baseline schedule that is regularly analyzed and updated as progress is made.
However, the schedule lacked a schedule risk assessment—a best practice that gives decision makers confidence that the estimates are credible based on known risks and allows management to account for the cost of a schedule slip when developing the life-cycle cost estimate. Without a schedule risk assessment, decision makers may not obtain accurate cost impacts when schedule changes occur. Officials noted that while a schedule risk assessment was not performed on the ISIM schedule itself, the schedule was included as a part of the overall JWST JCL analysis, and subsequent cost and schedule estimate, conducted during the project replan in 2011. However, our analysis of the 2011 JCL indicated that the estimate’s accuracy, and therefore the confidence level assigned to the estimate, was reduced by the quality of the summary schedule used for the JCL because it did not provide enough detail to determine how risks were applied to critical project activities. Of the four characteristics of a reliable schedule that we assessed for the OTE schedule, we found that the comprehensive characteristic was fully met, the credible and controlled characteristics were substantially met, and the well-constructed characteristic was partially met. The strengths of the OTE schedule were that it captured all activities in manageable durations with their proper sequence, identified the resources needed for each activity, linked activities to the final deliverables the work in the schedule is intended to produce, and accurately reflected dates presented to management in high-level presentations. Northrop Grumman, the creator and manager of the schedule, also maintains a baseline schedule that is regularly analyzed and updated as progress is recorded by schedule experts.
However, while Northrop Grumman has identified a critical path, our analysis was not able to confirm that this path described activities in the schedule that were truly driving the key delivery date for the OTE, which is the delivery of the OTE to the OTIS testing and integration at Goddard Space Flight Center on April 28, 2016. Identifying a valid critical path is essential for management to identify and focus on activities that could have detrimental effects on key project milestones and deliverables if they slip. In addition, we found that one-third of the remaining activities and milestones had over 200 days of total float. This means that, according to the schedule, these activities could be delayed 9 working months without impacting the key delivery date. Realistic float values allow managers to see the impact of a delayed activity on future work. However, unrealistic estimates of float make it difficult to know the amount of time one event can slip without impacting the project finish date. In addition, incorrect float estimates will result in an invalid critical path. Northrop Grumman officials agreed with our assessment but noted the high values of total float are due to their planning process, which details the schedule only in 6-month increments. Activities beyond the detailed planning window of the schedule have high float, and those estimates of float will become more reasonable as the schedule is planned in detail. However, best practices state that all activities in the schedule, even far-term planning packages, should be logically linked in such a way as to portray a complete picture of the program’s available float and its critical path. Finally, a schedule risk assessment has not been conducted on the OTE schedule since 2011. Northrop Grumman officials stated that they are not contractually required to periodically conduct a schedule risk assessment.
However, as with the ISIM, without a schedule risk assessment, decision makers may not have accurate cost impacts when schedule changes occur. Of the two characteristics of a reliable schedule that were assessed for the cryocooler schedule, the well-constructed and credible characteristics were both partially met. The strengths of the cryocooler schedule were that it had a logical sequence of activities with few missing logic links, and few issues with incorrect logic that might impair the ability of the schedule to forecast dates dynamically. Despite these strengths, two of the ultimate goals of a reliable schedule—determining a valid critical path and realistic total float—were only partially achieved. Officials stated that the schedule is used to manage critical paths to six major hardware deliveries, or key delivery dates. However, we could not determine how the schedule is used to identify and present those paths to management. In addition, the use of date constraints in 19 activities within the schedule helps determine the remaining total float to some deliveries, but causes an overabundance of activities to appear as critical, which interferes with the identification of the true project-level critical path. We also found that while the schedule accurately reflected some of the delays the project is currently experiencing, it appears to be overly flexible in some cases, such as having activities with over 500 days—or over 2 working years—of total float. Incorrect float estimates may result in an invalid critical path and an inaccurate assessment of project completion dates. The schedule also lacks a complete and credible schedule risk analysis, without which managers cannot determine the likelihood of the project’s completion date, how much total schedule risk reserve funding is needed, risks most likely to delay the project, or how much reserve funding should be included for each individual risk.
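A schedule risk analysis of the kind discussed above is typically a Monte Carlo simulation over three-point duration estimates. The sketch below is a highly simplified, hypothetical illustration: it sums a few serial tasks rather than simulating a full networked schedule with risks and correlations, and the duration estimates are invented.

```python
# Hedged sketch of a Monte Carlo schedule risk analysis. Real analyses run
# on the full schedule network; this toy version sums serial tasks whose
# durations follow triangular (optimistic, most likely, pessimistic) estimates.
import random

def simulate_finish(tasks, trials=20000, seed=1):
    """Return a sorted sample of total durations for serial tasks."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.triangular(lo, hi, mode)
                          for lo, mode, hi in tasks))
    totals.sort()
    return totals

# Invented (optimistic, most likely, pessimistic) durations in weeks
tasks = [(8, 10, 16), (4, 6, 12), (10, 12, 20)]
totals = simulate_finish(tasks)
p50 = totals[len(totals) // 2]        # median completion estimate
p80 = totals[int(len(totals) * 0.8)]  # 80th-percentile estimate
print(round(p50, 1), round(p80, 1))
print(round(p80 - p50, 1), "weeks of schedule reserve implied by planning to P80")
```

The gap between the median and a higher-confidence percentile is one way such an analysis informs how much schedule contingency is needed, which is the information the report notes JPL could not obtain from the undocumented March 2013 analysis.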
Northrop Grumman officials, who manage the schedule and the project, stated that a schedule risk analysis was performed in March 2013, but the results were not used by JPL management, which oversees the contract. The results of the schedule risk analysis may help JPL determine the probability of meeting key dates or how much schedule contingency is needed. Officials provided us examples of the schedule risk analysis output, but we were not able to confirm their validity because documentation was not available on the data, risk, or methodologies. In addition to the lack of documentation, because we found the schedule to be only partially well-constructed, we cannot be sure that the results of the schedule risk analysis are valid. Given the weaknesses noted above, if the schedule risk analysis is to be credible, the program must have a quality schedule that reflects reliable logic and clearly identifies the critical path before a schedule risk analysis is conducted. If the schedule does not follow best practices, confidence in the schedule risk analysis results will be lacking. Without the schedule risk analysis, the project office cannot rely on the schedule to provide a high level of confidence in meeting the project’s completion date or identify reserve funding for unplanned problems that may occur. The JWST project has maintained its cost and schedule commitments since its 2011 replan, has continued to make good technical progress, and has implemented and enhanced efforts to improve oversight. Nevertheless, inherent risks continue to make execution of the JWST project challenging and near-term indicators show that the project is currently facing challenges that need to be addressed primarily by increased reserves and progress tracked using the proper tools.
Our report, however, indicates that the project may not have the appropriate resources and high fidelity information to ensure execution as planned and provide realistic information to decision makers and other stakeholders. For example, near-term cost reserves are constrained and the project is spending at a higher rate than planned. Without adequate cost reserves in the near-term and if its increased rate of spending continues, the project may need to defer planned work and delay the resolution of future and yet unknown threats. These actions could put the project on a course to repeat past missteps that led to congressional intervention and the institution of a cap on development costs. In addition, the effect sequestration would have on available funding for the project in fiscal year 2014 and beyond is unknown at this point, but could potentially compound this issue. As a result, NASA may need to make difficult decisions about funding JWST adequately at the expense of other, already cash-strapped projects. Importantly, JWST project officials may not have the necessary information to determine the impacts of any resource issues because the project currently lacks a reliable integrated master schedule due to weaknesses we found in several subschedules. Without a reliable schedule, project officials cannot accurately manage and forecast the impacts of changes to the schedule that will likely come about during the integration and testing periods. Despite these concerns, the JWST project has declined to take adequate steps to address our recommendation to perform an updated cost and schedule risk analysis—or JCL—that is based on current risks and a reliable schedule. 
Unless properly updated to include a reliable schedule that incorporates known risks, particularly if NASA is faced with additional resource constraints through the continuation of sequestration, the cost estimate will not provide decision makers with accurate information to assess the current status of the project. To help ensure that NASA officials are making decisions using up to date and reliable information about the JWST project, Congress should consider requiring the NASA Administrator to direct the JWST project to conduct an updated joint cost and schedule confidence level analysis that is based on a reliable schedule and current risks. We recommend that the NASA Administrator take the following two actions:

In order to ensure that the JWST project has sufficient available funding to complete its mission and meet its October 2018 launch date and reduce project risk, ensure the JWST project has adequate cost reserves to meet the development needs in each fiscal year, particularly in fiscal year 2014, and report to Congress on steps it is taking to do so, and

In order to help ensure that the JWST program and project management has reliable and accurate information that can convey and forecast the impact of potential issues and manage the impacts of changes to the integrated master schedule, perform a schedule risk analysis on OTE, ISIM, and cryocooler schedules, as well as any other subschedules for which a schedule risk analysis was not performed. In accordance with schedule best practices, the JWST project should ensure that the risk analyses are performed on reliable schedules.

NASA provided written comments on a draft of this report. These comments are reprinted in appendix IV. In responding to a draft of this report, NASA concurred with our two recommendations; however, in some cases it is either not clear what actions NASA plans to take or when they will complete the action to satisfy the intent of the recommendations.
NASA officials concurred with our recommendation to ensure the JWST project has adequate cost reserves to meet the development needs in each fiscal year, particularly in fiscal year 2014, and report to Congress on steps it is taking to do so. In their response, the Acting JWST Program Director cited NASA and the administration’s request to Congress to appropriate the full JWST replan level funding for fiscal year 2014, which includes the level of unallocated future expenses, or cost reserves, established in the replan. He also commented that NASA conducts monthly reviews to evaluate risks and associated impacts to funding in order to ensure that adequate cost reserves are available in each fiscal year. We acknowledge in our report that the JWST project has been fully funded at levels commensurate with the 2011 baseline through fiscal year 2013. However, cost reserves approved for the project during the 2011 replan were based on the risks known at that time. The events of fiscal year 2013 have weakened the project’s financial posture and flexibility the project has to address any potential technical challenges going forward into fiscal year 2014 and beyond. In addition, NASA’s response does not indicate how the agency plans to report to Congress the steps it is taking to ensure that the JWST project has adequate cost reserves to meet its October 2018 launch date. We maintain that NASA should provide more detail to Congress on its plans given the already constrained cost reserve posture the project has early in fiscal year 2014 and past issues with low levels of cost reserves that forced the project to defer work, which led to significant cost increases and schedule delays. 
NASA officials concurred with our recommendation to perform a schedule risk analysis on OTE, ISIM, and cryocooler schedules, as well as any other subschedules where a schedule risk analysis was not performed and that, in accordance with schedule best practices, the risk analyses are performed on reliable schedules. The Acting Program Director stated NASA will conduct probabilistic schedule risk analyses on the OTE, ISIM, and cryocooler schedules by the end of calendar year 2014 using NASA best practices. This is a positive step, given that our previous work has found that GAO and NASA best practices for scheduling are largely consistent. The Acting Program Director also stated that NASA will conduct the same analyses for other schedules lacking a risk analysis. However, no deadline was given for completing these analyses, nor was the number of schedules affected identified. Having reliable schedules sooner will provide management with more timely and accurate information on which to make decisions. If the schedule risk assessments are not completed until after 2014, the project will have less than 4 years until launch to utilize the information these risk analyses can provide. Given that we have found reliability issues with the project’s schedules for the second year, improving the current schedules to meet best practices is important to provide management with improved tools to better understand the schedule risks and manage the project. We are sending copies of the report to NASA’s Administrator and interested congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to assess (1) the extent to which the James Webb Space Telescope (JWST) project is meeting its cost and schedule commitments and maintaining oversight established as part of the project’s replan, (2) the current major technological challenges facing the JWST project, (3) the extent to which cost risks exist that may threaten the project’s ability to execute the project as planned, and (4) the extent to which the JWST project schedule is reliable based on best practices. In assessing earned value management (EVM) data from several contractors and subcontractors and the project’s schedule estimate, we performed various checks to determine that the data provided was reliable enough for our purposes. To assess the extent to which the JWST project is meeting its cost and schedule commitments and maintaining oversight, we reviewed project and contractor documentation, analyzed the progress made and any variances to milestones established during the project’s replan in 2011, and held interviews with project, contractor, and Defense Contract Management Agency officials. We reviewed project monthly status reviews, documentation on project risks, and budget documentation. We examined and analyzed EVM data from several contractors and subcontractors. The EVM data reviewed included monthly contractor performance reports and analysis performed by the JWST project on this information. For our analysis, we entered only high-level monthly contractor EVM data into a GAO-developed spreadsheet, which includes checks to ensure the EVM data provided was reliable enough for our purposes. We also reviewed the project’s analysis of the estimate at completion for internal work being performed on the Integrated Science Instrument Module. 
We interviewed program and project officials at NASA headquarters and Goddard Space Flight Center to obtain additional information on the status of the project with regard to progress toward baseline commitments. We periodically attended flight program reviews at NASA headquarters where the current status of the program was briefed to NASA headquarters officials and members of the Standing Review Board. We also interviewed JWST project and contractor officials from the Jet Propulsion Laboratory and Northrop Grumman Aerospace Systems to determine the extent to which oversight was being conducted. In addition, we interviewed officials from the Defense Contract Management Agency to obtain information on oversight activities delegated to it by the JWST project. To assess the technological challenges and risks facing the project, we reviewed project monthly status reviews, information from the project’s risk database, as well as briefings and schedule documentation provided by project and contractor officials. These documents included information on the project’s technological challenges and risks, mitigation plans, and timelines for addressing these risks and challenges. We also interviewed program and project officials for each major observatory system to clarify information and to obtain additional information on system and subsystem level risks and technological challenges for each subsystem. Further, we interviewed officials from the Jet Propulsion Laboratory and Northrop Grumman Aerospace Systems concerning risks and challenges on the subsystems, instruments, or components they were developing. We reviewed GAO’s prior work on NASA Large Scale Acquisitions; the Goddard Space Flight Center Rules for the Design, Development, Verification, and Operation of Flight Systems technical standards; and NASA’s Space Flight Program and Project Management Requirements and Systems Engineering Processes and Requirements policy documents. 
We compared Goddard standards with data reported by the project to assess the extent to which the JWST project followed NASA policies. To assess the extent to which cost risks exist that may threaten the project’s ability to execute the project as planned, we reviewed project and contractor documentation and held interviews with project and contractor officials. We reviewed project monthly status reviews and NASA headquarters flight program reviews, contractor information on the potential cost to address identified risks, and project analysis of budget- related risks to include the project’s cost reserve posture and the impact of sequestration. We interviewed program and project officials at NASA headquarters and Goddard Space Flight Center as well as officials from the Jet Propulsion Laboratory and Northrop Grumman Aerospace Systems to obtain information on risks to maintaining cost targets and plans to mitigate those risks. To assess the extent to which the JWST project schedule is reliable, we used GAO’s Schedule Assessment Guide to assess characteristics of three selected subordinate schedules—the Integrated Science Instrument Module (ISIM), Optical Telescope Element (OTE), and cryocooler—that are used as inputs to the integrated master schedule. We selected the three schedules above as they reflect a significant portion of the work being conducted within NASA (ISIM), at the contractor level (OTE), and the subcontractor level (cryocooler) during the course of our work. We also analyzed schedule metrics as a part of that analysis to highlight potential areas of strengths and weaknesses against each of our 4 characteristics of a reliable schedule. 
In order to assess each schedule against the 4 characteristics and their accompanying 10 best practices, we traced and verified underlying support and determined whether the program office or contractor provided sufficient evidence to satisfy the criterion and assigned a score depicting that the practices were not met, minimally met, partially met, substantially met, or fully met. By examining the schedules against our guidance, we conducted a reliability assessment on each of the schedules and incorporated our findings on reliability limitations in the analysis of each subordinate schedule. We also conducted interviews with project and contractor management and schedulers before our analysis was completed and analyzed project and contractor documentation concerning scheduling policies and practices. After conducting our initial analysis, we shared it with the relevant parties to provide an opportunity for them to comment and identify reasons for observed shortfalls in schedule management best practices. We took their comments and any additional information they provided and incorporated it into the assessments to finalize the scores for each characteristic and best practice. We were also able to use the results of the three subordinate schedules to provide insight into the health of the integrated master schedule since the same strengths and weaknesses of the subordinate schedules would transfer to the master schedule. We determined that the schedules were sufficiently reliable for our reporting purposes and our report notes the instances where reliability concerns affect the quality of the schedules. Our work was performed primarily at NASA headquarters in Washington, D.C. and Goddard Space Flight Center in Greenbelt, Maryland. We also visited the Jet Propulsion Laboratory in Pasadena, California; Northrop Grumman Aerospace Systems in Redondo Beach, California; and the Defense Contract Management Agency in Redondo Beach, California. 
We conducted this performance audit from February 2013 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Shelby S. Oakley, Assistant Director; Karen Richey, Assistant Director; Patrick Breiding; Richard A. Cederholm; Laura Greifner; Keith Hornbacher; David T. Hulett; Jason Lee; Sylvia Schatz; Ryan Stott; and Roxanna T. Sun made key contributions to this report.
JWST is one of NASA's most complex and costly science projects. Effective execution of the project is critical given the potential effect further cost increases could have on NASA's science portfolio. The project was rebaselined in 2011 with a 78 percent life-cycle cost estimate increase--now $8.8 billion--and a launch delay of 52 months--now October 2018. GAO has made a number of prior recommendations, including that the project perform an updated cost and schedule risk analysis to improve cost estimates. GAO was mandated to assess the program annually and report on its progress. This is the second such report. This report assesses the (1) extent to which the JWST project is meeting its cost and schedule commitments and maintaining oversight, (2) current major technological challenges facing the project, (3) extent to which cost risks exist that may threaten the project's ability to execute as planned, and (4) extent to which the JWST project schedule is reliable based on scheduling best practices. GAO reviewed relevant NASA and contractor documents, interviewed NASA and contractor officials, and compared the project schedule with best practices criteria. The James Webb Space Telescope (JWST) project is generally executing to its September 2011 revised cost and schedule baseline; however, several challenges remain that could affect continued progress. The National Aeronautics and Space Administration (NASA) has requested funding that is in line with the rebaseline and the project is maintaining 14 months of schedule reserve prior to its launch date. Performance data from the prime contractor indicate that generally work is being accomplished on schedule and at the cost expected; however, monthly performance declined in fiscal year 2013. Project officials have maintained and enhanced project oversight by, for example, continuing quarterly NASA and contractor management meetings and instituting a tool to update cost estimates for internal efforts. 
Program officials, however, are not planning to perform an updated integrated cost/schedule risk analysis, as GAO recommended in 2012, stating that the project performs monthly integrated risk analyses they believe are adequate. Updating the more comprehensive analysis with a more refined schedule and current risks, however, would provide management and stakeholders with better information to gauge progress. The JWST project has made progress addressing some technical challenges that GAO reported in 2012, such as inadequate spacecraft mass margin, but others have persisted, causing subsystem development delays and cost increases. For example, the development and delivery schedule of the cryocooler--which cools one instrument--was deemed unattainable by the subcontractor due to technical issues and its contract was modified in August 2013 for the second time in less than 2 years, leading to a cumulative 120 percent increase in contract costs. While recent modifications have been made, execution of the cryocooler remains a concern given that technical performance and schedule issues persist. Overall the project is maintaining a significant amount of cost reserves; however, low levels of near-term cost reserves could limit its ability to continue to meet future cost and schedule commitments. Development challenges have required the project to allocate a significant portion of cost reserves in fiscal year 2014. Adequate cost reserves for the prime contractor are also a concern in fiscal years 2014 and 2015 given the rate at which these cost reserves are being used. Limited reserves could require work to be extended or work to address project risks to be deferred--a contributing factor to the project's prior performance issues. Potential sequestration and funding challenges on other major NASA projects could limit the project's ability to address near-term challenges. 
GAO's analysis of three subsystem schedules determined that the reliability of the project's integrated master schedule--which is dependent on the reliability of JWST's subsystem schedules--is questionable. GAO's analysis found that the Optical Telescope Element (OTE) schedule was unreliable because it could not adequately identify a critical path--the earliest completion date or minimum duration it will take to complete all project activities, which informs officials of the effects that a slip of one activity may have on other activities. In addition, reliable schedule risk analyses of the OTE, the cryocooler, or the Integrated Science Instrument Module schedules were not performed. A schedule risk analysis is a best practice that gives confidence that estimates are credible based on known risks so the schedule can be relied upon to track progress. Congress should consider directing NASA to perform an updated integrated cost/schedule risk analysis. GAO recommends that NASA address issues related to low cost reserves and perform schedule risk analyses on the three subsystem schedules GAO reviewed. NASA concurred with GAO's recommendations.
The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (P.L. 104-193) replaced the entitlement program, Aid to Families with Dependent Children (AFDC), with Temporary Assistance for Needy Families (TANF). TANF provides $16.5 billion annually to the states in the form of block grants through 2002. Under TANF, recipients are required to work and can receive federal cash assistance for only a limited period of time. TANF’s requirements vary from state to state because the 1996 act gave the states more control over the design of their own programs. While TANF is generally administered at the state level, the Department of Health and Human Services (HHS) is the primary federal agency providing oversight of states’ welfare programs. Through its public housing and Section 8 programs, HUD provides housing assistance to about 4.3 million low-income households. In fiscal year 1997, HUD’s outlays for Section 8 subsidies and for public housing modernization, development, and operating subsidies amounted to about $22.6 billion. About $7.5 billion of this amount was allocated through the tenant-based Section 8 certificate and voucher programs, under which housing agencies provide rent subsidies to private landlords. About $7.9 billion went directly to private landlords as part of the project-based Section 8 program, and about $7.2 billion went for the modernization, development, and operation of Public and Indian Housing. Included in this latter amount is $2.6 billion in appropriations that was distributed through HUD’s formula-based performance funding system to state, county, and local housing agencies for the operation of public housing. In 1996, approximately a quarter of the households receiving HUD subsidies also received cash assistance. 
In general, families receiving housing assistance are required to pay 30 percent of their cash income (adjusted for certain items, such as child care and medical expenses) in rent, while HUD provides subsidies to housing agencies and private landlords to make up the difference between tenants’ rental payments and the cost of operating public housing units or the rents charged by the landlords. Because rental payments are linked to household income, rental revenues will fall if families receiving assistance are unable to replace lost welfare benefits with wage income, and additional HUD subsidies will then be needed. But if assisted families’ employment and earnings increase under welfare reform, then the amount of the required rental payments may rise, reducing the need for subsidies. For subsidy needs to decline, residents would have to earn more than they formerly received in cash assistance, and working residents would need to either remain in public and assisted housing after gaining employment or be replaced with employed residents. These conditions are less likely to hold in areas where cash benefit levels are high and nonsubsidized housing is available and affordable. HUD has established policies that can influence the impact of welfare reform on housing programs. For example, housing agencies and HUD can set minimum rents of up to $50 for tenants who live in public housing or have Section 8 certificates or vouchers, while owners of project-based Section 8 properties are required to charge minimum rents of $25. In addition, recent legislation has expanded housing agencies’ authority to exclude some wage earnings from rental payment calculations in an effort to retain working families in public housing units. Similarly, while housing agencies and subsidized private landlords were formerly required to give preference in admission to very poor families, they are now allowed to give some preference to working families with wage income. 
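The rent and subsidy arithmetic just described can be sketched in a few lines of Python. This is an illustrative simplification under stated assumptions, not HUD's actual rent rules (which involve adjusted income, deductions, and program-specific details); the function names and dollar figures are hypothetical:

```python
def tenant_rent(adjusted_monthly_income, minimum_rent=0):
    """Tenant's required monthly payment: 30 percent of adjusted income,
    subject to any applicable minimum rent (simplified)."""
    return round(max(0.30 * adjusted_monthly_income, minimum_rent), 2)

def hud_subsidy(unit_cost, adjusted_monthly_income, minimum_rent=0):
    """HUD's subsidy covers the gap between the unit's operating cost
    (or contract rent) and the tenant's payment."""
    return round(max(unit_cost - tenant_rent(adjusted_monthly_income, minimum_rent), 0), 2)

# If a household's adjusted income falls from $500 to $400 a month,
# its required rent falls by $30 and the subsidy needed rises by $30.
subsidy_before = hud_subsidy(unit_cost=600, adjusted_monthly_income=500)  # 450.0
subsidy_after = hud_subsidy(unit_cost=600, adjusted_monthly_income=400)   # 480.0

# A $50 minimum rent limits how far rent can fall when cash income reaches zero.
floor_rent = tenant_rent(0, minimum_rent=50)  # 50
```

The minimum-rent parameter reflects the policy noted above: because a tenant's payment cannot fall below the minimum when cash income is lost, minimum rents can offset part of any subsidy increase.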
In addition to these rent and admission policies, HUD provides programs, some of which originated in the mid-1980s, to deliver employment-related services to the tenants of public and assisted housing. These programs have provided job training, counseling, and placement services; child care; and transportation. We identified five studies estimating welfare reform’s financial impact on housing programs nationally, one estimating the impact for eight housing agencies, and another seven estimating the impact for a single housing agency. While some of the studies suggest that welfare reform will likely cause only modest changes in the amounts of the HUD subsidies needed, some of the studies indicate more substantial effects. For example, one national study indicated that HUD would need to increase its annual subsidies to housing agencies by almost 42 percent to offset expected decreases in public housing rents, while another study indicated that HUD could decrease its annual subsidies to a particular housing agency by almost 20 percent. Differences in the studies’ focus and assumptions help explain the widely varying estimates. While some researchers focused on a single feature of a state’s welfare reform plan, others examined the national impact of a broad range of state plans; while some studies used “worst-case” assumptions about the employment and earnings prospects of welfare recipients, others used “best-guess” assumptions. Moreover, because certain welfare and housing policies have changed since the estimates were developed, the economy has been stronger than anticipated, and the effects of welfare reform on welfare recipients’ behavior are difficult to predict, some of the authors of the studies we reviewed expressed uncertainty about their estimates. Five studies we identified estimated the financial impact of welfare reform nationally (see table 1). 
Three of these five studies suggest that welfare reform will have a relatively modest effect on the need for HUD subsidies, ranging from a 0.4-percent annual decrease to a 3.3-percent increase in the amount needed. The two remaining studies anticipate a greater effect. For example, the Council of Large Public Housing Authorities indicated, in the fall of 1996, that the annual amount needed for HUD subsidies could increase by 19 percent. Of the other eight studies we reviewed, seven estimated welfare reform’s impact on individual housing agencies in different parts of the country, and one, by HUD’s Office of Policy Development and Research, covered eight individual housing agencies. The estimates for these eight studies, which are summarized in table 2, varied widely, both from one housing agency to another and from one scenario to another for a single housing agency. In particular, under assumptions that the authors characterize as unlikely—that the state would adopt a harsh welfare reform plan and the housing authority would not provide employment assistance to affected residents—the Seattle housing authority’s findings indicate that the agency could need an annual increase of as much as 37 percent of its fiscal year 1997 HUD subsidy to offset welfare reform’s impact. Conversely, using optimistic assumptions, HUD predicted that rental revenues at the Dallas housing authority could rise by enough to warrant as much as a 20-percent reduction in the amount of the HUD subsidy needed. In addition to differences in geographic scope, the studies we reviewed differed in other key aspects of their focus, and these differences often dictated the assumptions used in the studies. Some of these studies took a worst-case approach to welfare reform, imposing very conservative assumptions about the employment and earnings prospects of welfare recipients with housing assistance. 
Generally, these analyses were designed to heighten the awareness of the welfare and housing assistance communities to the worst possible implications of some aspects of welfare reform and to prompt these communities to take appropriate action. For example, at the national level, the Council of Large Public Housing Authorities used a worst-case approach in the summer and early fall of 1996 to look at what would happen if all residents receiving cash assistance lost that assistance 5 years after the implementation of welfare reform (the federally mandated time limit) and if none of the affected families were able to replace any of these benefits with wage earnings. This analysis, which was designed to motivate the public housing community to take action, estimated that required rental payments could fall by 30 percent (the percentage of income that residents generally pay in rent) of the total amount of cash assistance lost. At the local level, the Minneapolis and St. Paul housing authorities designed studies to show the maximum potential effect of Minnesota’s decision to reduce by $100 the monthly cash benefits for TANF recipients with housing assistance. To estimate the maximum impact of the $100 reduction on the housing agency and HUD, these studies ignored any possibility that residents receiving TANF benefits might have additional earnings to offset some of the $100 loss in benefits. Thus, the monthly rental payments of subsidized households would be $30 ($100 times 30 percent) less than they otherwise would have been. Under this scenario, the rental payments of public housing residents and Section 8 certificate and voucher holders would decline annually by $817,000 for Minneapolis and over $1 million for St. Paul. Other studies, designed to forecast HUD’s actual budget needs, attempted to make best-guess estimates of the financial impact of welfare reform on HUD and housing agencies. 
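The gap between the worst-case and best-guess studies described here largely comes down to a single assumption: how much of the lost cash assistance affected households replace with earnings. A hypothetical sketch (the function name and parameters are illustrative, not drawn from the studies; the 30-percent rent share and $100 cut are from the source):

```python
def annual_rent_change(monthly_benefit_cut, households, replacement_rate=0.0, rent_share=0.30):
    """Annual change in required rental payments when monthly cash benefits fall.

    replacement_rate is the assumed share of lost benefits replaced by wage
    earnings: 0.0 reproduces worst-case analyses like the Minneapolis and
    St. Paul studies; higher values reproduce best-guess assumptions.
    """
    net_monthly_income_change = -monthly_benefit_cut * (1 - replacement_rate)
    return round(rent_share * net_monthly_income_change * 12 * households, 2)

# Worst case: a $100/month cut with no offsetting earnings lowers each
# household's required rent by $30/month, or $360/year.
per_household_worst = annual_rent_change(100, households=1)  # -360.0

# A best-guess assumption in the CBO style (two-thirds of lost benefits
# replaced by earnings) shrinks the decline to a third of the worst case.
per_household_best_guess = annual_rent_change(100, households=1, replacement_rate=2 / 3)  # -120.0
```

Scaling the per-household figure by a housing agency's caseload of affected households yields agency-level estimates of the kind the studies report.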
HUD’s national study and one by the Congressional Budget Office (CBO) used more elaborate methods to predict the employment and earnings prospects of welfare recipients. For example, relying on various studies of the earnings of former welfare recipients, CBO assumed that households with Section 8 assistance would be able to replace two-thirds of their lost welfare benefits with earnings when time limits take effect. These studies generally found that welfare reform would have a more modest impact than studies designed to look at the worst-case outcomes of imposing time limits. Still other studies, including the Johns Hopkins study, HUD’s multisite study, and the Los Angeles study, were designed to determine a range of likely effects of welfare reform on HUD and housing agencies and to identify the factors that might influence the extent of these effects. Most of these studies focused on the potential impact of a broad range of factors—including welfare policies (e.g., time limits and employment sanctions), housing policies (e.g., minimum rents and exclusions of earned income from rental payment calculations), and local and national economic conditions—to determine which factors would have the largest impact on the subsidies needed. The studies used varying assumptions and models to estimate welfare reform’s impact under different scenarios. For example, HUD’s multisite study and the Los Angeles study estimated the impact of welfare reform under both optimistic and conservative assumptions about the employment and earnings prospects of welfare recipients with housing assistance. Additionally, both HUD’s multisite study and the Johns Hopkins study varied their assumptions about welfare reform’s rules by examining the potential effects of different state welfare programs. Some of the studies discussed above focused on the impact of certain housing policies. 
For example, HUD’s multisite study found that because rental payments do not fall to zero when tenants lose their cash income but are required to pay minimum rents, the imposition of such minimum rents could offset much of the potential decline in rental revenues resulting from welfare reform. The Los Angeles study attempted to measure how much of the potential increases in rental payments the housing agencies would forgo because of policies excluding new income from rental payment calculations. Finally, studies designed by David Griffiths & Associates, Ltd. (DMG), focused on the impact of significant involvement by housing agency managers in helping tenants move into the labor market. DMG assumed, in its studies for both the District of Columbia and the Public Housing Authorities Directors Association, that a high level of involvement by housing agency managers would significantly improve the income prospects of welfare recipients. The varied assumptions underlying the studies we reviewed reflect researchers’ uncertainties about changes in welfare and housing policies over time, the future of the economy, and the behavioral responses of welfare recipients. The authors of several of the studies described their estimates as outdated because events (such as the final version of a state’s welfare reform law or the condition of the national economy) had not played out as the authors had anticipated when they conducted their studies. 
For example, the representative of DMG who developed the estimates for the Public Housing Authorities Directors Association and the District of Columbia told us that if he were developing the estimates today, he would revise the results of his pessimistic scenario significantly to take account of (1) modifications to the welfare reform law that have reduced the cuts in Supplemental Security Income he initially anticipated, (2) the significant emphasis HUD has placed on programs to help move people from welfare to work, and (3) the surprisingly strong economy. Similarly, the authors of HUD’s multisite study told us that their estimates for Los Angeles are probably too pessimistic because they assumed a more restrictive welfare program than the one California actually adopted in August 1997. Other authors were also concerned about the general difficulty of predicting the future behavior of TANF recipients. For example, officials at CBO stated that because of uncertainties about how welfare reform would be implemented and how recipients would respond, they recognized that their estimates could be substantially different from actual outcomes. And, as we reported in January 1998, HUD is no longer standing behind its initial assessment of welfare reform’s nationwide impact, in part, because of difficulties it identified in predicting how states will implement welfare reform plans and how welfare recipients will respond to welfare reform. The experts with whom we spoke generally agree that several methodological and data issues complicate efforts to forecast welfare reform’s financial impact on HUD’s housing subsidy programs. Some issues, such as differences in state welfare policies and plans for implementing welfare reform and uncertainty about the strength of the economy and the behavior of welfare recipients, make it difficult to predict the impact of welfare reform itself. 
Housing researchers generally agree, however, that estimating welfare reform’s financial impact on housing programs is more complex than estimating welfare reform’s impact overall because the characteristics of welfare recipients with housing assistance may be different from those of other welfare recipients, and housing agencies and landlords may adopt a broad range of housing philosophies and policies. Finally, the lack of consistent and reliable data further hampers researchers’ efforts to predict welfare reform’s financial impact on HUD’s housing programs with any certainty. Differences in state welfare policies have always been important in evaluating the federal welfare program. Under AFDC, states paid different levels of benefits to entitled recipients, and HHS researchers reported that recipients were more likely to leave the welfare rolls in states with lower benefits than in states with higher benefits. Because welfare reform gave the states greater discretion in setting welfare policy, state policies now differ across a multitude of dimensions. For example, the states can now determine who is eligible for cash assistance and for how long. In addition, they can set specific work requirements and establish sanctions for recipients who violate their state welfare policies. Differences in the implementation of welfare plans at the state and local levels may exacerbate interstate differences in the impact of welfare reform. In particular, the manner in which caseworkers relay information to and interact with recipients affects outcomes under welfare reform. For example, in evaluating welfare reform in several states, the Manpower Demonstration Research Corporation (MDRC) found that differences in what caseworkers told clients in Florida and Minnesota help to explain differences in the timing of caseload reductions in those states. 
MDRC found that in Florida, where recipients could receive cash benefits for a maximum of 2 years, recipients tended to exhaust their benefits before getting jobs and therefore caseloads did not decline quickly, while in Minnesota, where recipients could receive federal cash benefits for 5 years, caseloads dropped quickly. MDRC told us that this difference in behavior seemed to occur, at least in part, because Florida caseworkers encouraged recipients to remain on TANF and spend their 2 years investing in job skills, while Minnesota caseworkers advised their clients to become employed as soon as possible and save their limited benefits for possible future needs. The studies we identified varied widely in the assumptions they used to predict welfare recipients’ potential employment prospects and earnings. Experts with whom we spoke generally agree that welfare recipients’ employment prospects and earnings depend on the market for low-skilled workers. The demand for these workers generally depends on the overall health of the national and local economy, while the supply depends on recipients’ responses to the level of wages they could earn and the level of welfare benefits they could receive. In general, some issues make it difficult to predict the demand for low-skilled workers. First, because welfare reform has thus far occurred during a period of strong national job growth, researchers have little sense of how the demand for low-skilled labor will hold up during an economic slowdown. For example, experts have been unable to agree on how much of the recent decline in the number of families receiving cash assistance is the result of the very robust economy in recent years and how much is the result of welfare reform. In order to shed light on the degree to which economic conditions affect the impact of welfare reform, HUD researchers, in their multisite analysis, studied at least two cities with different economic conditions in each of three states. 
Welfare recipients in the same state were generally subject to the same welfare laws. While HUD found that welfare recipients in all three states had more success in entering labor markets in cities with more robust local economies, the difference was not consistent across the states. Second, some researchers have noted that while it may be possible to measure the number of low-skilled jobs available at a given point in time, this type of analysis will not necessarily reveal how many people can find employment over a period of time because of constant turnover in employment and other changes in labor market conditions over time. The supply of low-skilled workers will depend, in part, on how welfare recipients respond to changes in their state’s welfare program. While some researchers believe that past studies of the impact of changing wages and benefit levels on welfare recipients’ desire to work could help answer this question, other researchers believe that the 1996 welfare reforms constitute such a dramatic shift from earlier welfare policies that past behavior may not be a good predictor of future behavior. Thus, there is little consensus among experts about the future behavior of welfare recipients. Housing experts with whom we spoke generally agree that estimates of the effect of welfare reform on the general welfare population may not apply to the subset of welfare recipients in public and assisted housing. As we reported in June 1998, welfare recipients living in public housing are more likely to have been on welfare longer than those without housing assistance, and longer spells on welfare have been associated with less success in obtaining employment. 
In addition, experts suggest that welfare recipients with housing assistance may be less likely to go to work for several reasons: Welfare recipients with housing assistance will be able to retain a smaller portion of their new earnings because they will generally be required to pay 30 percent of those earnings in rent. Because housing assistance provides a “cushion,” welfare recipients with public or assisted housing may have less incentive to work than other welfare recipients with the same job prospects but no housing assistance. Recent evidence suggests that job growth is occurring in the suburbs while welfare recipients are likely to live in urban centers or rural areas. In particular, welfare recipients with project-based housing assistance (including both public housing and project-based Section 8 assistance) may face higher relocation costs than other welfare recipients because they may have to give up their housing assistance to move to locations with better job prospects. The combination of longer periods on welfare and less incentive to work may help explain why some researchers have found that welfare recipients with housing assistance are less successful in moving from government-sponsored job training programs into long-term private sector employment. A recent study by MDRC researchers in Atlanta showed that of the participants in certain federal job training programs, those with no housing assistance were most likely, those with certificates and vouchers slightly less likely, and those in public housing least likely, to find employment after completing their training. Because of recent housing policy changes and uncertainty about the degree to which housing agencies will adopt these changes, researchers will have difficulty separating the effects of welfare reform from those of changes in housing policy, just as they have had difficulty separating the effects of welfare reform from those of overall economic conditions. 
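The rent rule described above implies that an assisted tenant keeps only a portion of any new earnings, since rent rises with income. The following is a minimal sketch of that arithmetic; the flat 30 percent rate is taken from the text, but the dollar figures and function names are invented for illustration and omit HUD's full adjusted-income calculation.

```python
# Hypothetical illustration of the 30-percent-of-earnings rent rule.
# The rate comes from the report's description; everything else here
# is a simplifying assumption, not HUD's actual rent formula.

def monthly_rent(monthly_income: float, rate: float = 0.30) -> float:
    """Tenant rent contribution as a share of monthly income."""
    return monthly_income * rate

def net_gain_from_earnings(new_earnings: float, rate: float = 0.30) -> float:
    """Portion of new earnings a tenant keeps after the rent increase."""
    return new_earnings - monthly_rent(new_earnings, rate)

# A tenant whose earnings rise by $1,000 a month keeps $700 of it,
# because rent rises by $300 (30 percent of the new earnings).
print(net_gain_from_earnings(1000.0))  # 700.0
```

This is the "cushion" effect the experts describe: the effective marginal tax on new earnings is higher for assisted tenants than for otherwise similar welfare recipients without housing assistance.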
Many of the studies we reviewed, as well as experts we consulted, recognized the importance of recent changes in rent and admission policies and programs to move welfare recipients to work. For example, the director of the housing authority in Athens, Georgia, told us that the types of admission preferences, the effect of management’s involvement in helping tenants obtain employment, and the level of minimum rents could determine whether his agency’s rental revenues rise or fall with welfare reform. However, recent legislation may limit the degree to which housing agencies will be able to use minimum rents to offset potential declines in rental payments under welfare reform. Welfare and housing researchers have used a combination of government administrative data—data collected by federal, state, or local officials on a host of factors associated with the recipients of welfare and housing assistance—and survey data to study the behavior of those who receive welfare and housing assistance. Administrative databases generally provide information over time on the participants in a program, while survey data generally conform more closely to research objectives but cover a smaller number of households. Because the states have more flexibility to design their own systems under welfare reform, data elements in administrative and survey databases may be less consistently defined than in the past. Although state welfare agencies have reported administrative data under HHS’ emergency rules, which were phased in over a period of 9 months beginning in July 1997, some states have submitted data that are not fully consistent with HHS’ specifications. HHS will be issuing final TANF reporting rules that better define terms, but according to an HHS official, the states will continue to have significant flexibility in how they define their programs, making data assessment more difficult. 
Similarly, according to an official in the Census Bureau’s Housing and Household Economic Statistics Division, the increased interstate variation promoted by the 1996 welfare reforms has placed a significant burden on national organizations that collect survey data. For example, obtaining consistent data across states about the level of cash benefits may be difficult because the states have given their TANF benefits a variety of names. For example, Minnesota calls TANF the Minnesota Family Investment Plan, while California refers to TANF as CalWORKS. In addition, questioners and respondents may not know how to classify the increasingly common “one-time diversion” payments, which states use to provide one-time assistance to families in lieu of placing them on the welfare rolls. We and others have questioned the reliability of the existing national administrative and survey data on the residents of public and assisted housing. HUD collects administrative data on the residents of public and tenant-based assisted housing in its Multifamily Tenant Characteristics System database. Housing agencies are supposed to provide information for this database to HUD electronically in a specified format, but some agencies, especially the larger ones, do not report data for all of their residents, and the data contain errors as well. A HUD official told us that in recent years, HUD has concentrated on improving the response rate for these data, but the greater response has been accompanied by an increase in the number of data entry errors. HUD collects similar data on the residents of properties with project-based Section 8 assistance in the Tenant Rental Assistance Certification System database. According to HUD officials, this database suffers, on a smaller scale, from reporting problems and data errors such as those affecting the multifamily database. 
The reliability of survey data on housing assistance is also questionable because respondents to surveys with questions about this assistance often misreport their status. HUD has documented probable misreporting in the Current Population Survey and the American Housing Survey. For example, in a paper presented in May 1996, HUD economists reported that the majority of those receiving housing assistance who said they lived in public housing actually do not. Furthermore, they reported that the majority of the families receiving other housing assistance do not accurately identify the way they are assisted, and perhaps one-fifth of those who report receiving a housing subsidy do not actually receive one. In addition, the interim director of the Johns Hopkins University’s Center for Policy Studies has noted similar reporting discrepancies in survey responses and administrative data and has suggested methods for improving the reliability of the survey responses. Although the Census Bureau and others are developing and testing questions to improve survey responses on welfare benefits and housing subsidies and HUD has worked to improve its data as well, it is still too early to obtain adequate data to test assumptions about the outcomes of recent welfare and housing reforms. We provided a draft of this report to HUD for review and comment. HUD stated that the report is fair and straightforward and provided some technical corrections. HUD’s comments appear in appendix III. In addition, we provided excerpts of a draft of this report to the authors of each of the studies we reviewed. Several of the researchers and housing agencies provided us with comments that we incorporated as appropriate. To identify studies that estimated welfare reform’s financial impact on housing assistance programs, we contacted known experts and officials from a variety of organizations and government agencies. 
In particular, we spoke with experts in housing and welfare research, representatives of major trade associations and advocacy groups, officials at 30 of the largest local housing authorities and 10 of the largest state housing agencies, and officials from 10 private management companies of various sizes that are managing properties with Section 8 subsidies. Although we attempted to identify as many studies as possible, we recognize that the 13 studies we identified (see app. I) may not include all such studies that were performed. It should also be noted that it was beyond the scope of this review to assess the quality of the research underlying the individual estimates we reviewed. To consistently present the different studies’ estimates of dollar changes in rental revenues, costs, and subsidies, we presented each study’s findings as the percentage change in the subsidy relative to the total performance fund and/or housing assistance payments HUD says the agency received in 1997. We ignored the facts that HUD does not always provide 100 percent of the subsidy needed for public housing, as calculated under the performance funding system formula, and that HUD’s outlays may lag behind changes in rental revenues by 2 to 3 years. In addition, although the results presented here are based on the assumption that HUD will provide 100 percent of the needed subsidy, the studies we reviewed made different assumptions about the percentage of the needed subsidy that HUD would be likely to provide. These assumptions ranged from 85 percent to 100 percent. To the extent that the subsidy is funded at less than 100 percent, more of the cost of welfare reform will be borne by local housing agencies and less will be borne by HUD. See appendix I for a list of the individual studies we reviewed. 
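The normalization described above, expressing each study's estimated dollar change as a percentage of the subsidy the agency received in 1997, can be sketched as follows. The function name and the dollar figures are invented for illustration; GAO's actual computation also abstracted from partial funding and the 2-to-3-year lag in HUD outlays, as noted in the text.

```python
# Hypothetical sketch of GAO's normalization: each study's estimated
# dollar change in subsidy is expressed as a percentage of the agency's
# 1997 performance funding and/or housing assistance payments.
# The figures below are invented for illustration only.

def subsidy_change_pct(estimated_change: float, base_1997_subsidy: float) -> float:
    """Percentage change in subsidy relative to the 1997 baseline."""
    return 100.0 * estimated_change / base_1997_subsidy

# An estimated $2 million increase in needed subsidy, for an agency that
# received $50 million in 1997, is a 4 percent change.
print(subsidy_change_pct(2_000_000, 50_000_000))  # 4.0
```

Putting every study on this common percentage scale is what allows estimates from agencies of very different sizes to be compared side by side.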
To better understand the methodological and data issues that arise when estimating welfare reform’s financial impact on HUD’s housing programs, we also contacted known experts in welfare and housing who represented a broad range of views. We questioned them about the importance of specific methodological and data concerns using a semistructured questionnaire. We also gathered their research and analyzed the information collected from the interviews and research documents to develop common themes. Appendix II lists the experts with whom we spoke about methodological and data issues. We conducted our work from May 1998 through November 1998 in accordance with generally accepted government auditing standards. If you or your staff have any questions about this report, please contact me at (202) 512-7631. Major contributors to this report were Amy Abramowitz, Nancy Barry, DuEwa Kamara, and Lara Landeck. “Background Materials for Baseline Projections of Spending Under Current Law.” Congressional Budget Office, Washington, D.C.: Mar. 1998. “The Fiscal Impacts of Welfare Reform: An Early Assessment.” Council of Large Public Housing Authorities. Issue brief. Fall 1996. “Impact of Welfare Reform in Public Housing.” David M. Griffith & Associates, Ltd. Sponsored by the Public Housing Authorities Directors Association. Unpublished. Spring 1997. Newman, Sandra and Joseph Harkness. “The Effects of Welfare Reform on Housing: A National Analysis.” Presented at the Policy Research Roundtable on the Implications of Welfare Reform for Housing, sponsored by the Fannie Mae Foundation in collaboration with the Institute for Policy Studies at The Johns Hopkins University (work in progress). July 22, 1997. For updated information, see “The Effects of Welfare Reform on Housing: A National Analysis” in Newman, Sandra J. (ed.) The Home Front: Implications of Welfare Reform for Housing Policy, Washington, D.C.: The Urban Institute Press, forthcoming. 
“Technical Paper: Welfare Reform Budgeting.” U.S. Department of Housing and Urban Development (HUD). Washington, D.C.: Oct. 4, 1996. “Estimated PHA Income Loss Due to Proposed AFDC Cuts.” St. Paul Public Housing Agency, Unpublished. Feb. 28, 1997. “Impact of Welfare Reform on Program Costs.” New Jersey Department of Community Affairs’ Section 8 Housing Program. Unpublished. Jan. 1998. Nguyen, Mai, Charles Kastner, and Ashley Lommers-Johnson. “Welfare Reform: Status of Washington State’s Welfare Reform Plan; Effects on Residents, the Seattle Housing Authority, and Neighborhoods; and Prospects for Employment and Rent Income.” Presented at HUD headquarters, Washington, D.C. Dec. 17, 1996. “Potential Impact of CalWORKS.” Housing Authority of the City of Los Angeles. Unpublished. Nov. 1997. “Welfare Act Impact Analysis: District of Columbia Housing Authority (DCHA)—Final Report.” David M. Griffith & Associates, Ltd. Apr. 1997. “Welfare Reform Impact Study.” Minneapolis Public Housing Authority. Unpublished. Spring 1997. Welfare Reform Impacts on the Public Housing Program: A Preliminary Forecast. U.S. Department of Housing and Urban Development. Office of Policy Studies and Research. Rockville, MD., Mar. 1998. “Welfare Reform Program and Financial Analysis.” Miami-Dade Housing Agency. Unpublished. Oct. 1997. Paul Cullinan, Chief, Human Resources Cost Estimate Unit, Congressional Budget Office (CBO) Shelia Dacey, Analyst, CBO Debra Devine, Social Science Analyst, Office of Policy Development and Research, U. S. Department of Housing and Urban Development (HUD) Katherine L. Meredith, Program Examiner, Housing Branch, Executive Office of the President, Office of Management and Budget (OMB) Charles Nelson, Assistant Division Chief for Income and Poverty, Housing and Economic Household Statistics Division, Bureau of the Census, U. S. Department of Commerce Don Oellerich, Acting Deputy Chief Economist, Office of the Assistant Secretary of Planning and Evaluation, U. S. 
Department of Health and Human Services (HHS) Jim Brigle, Director of Government Affairs, Public Housing Authorities Directors Association George C. Caruso, Acting Executive Director, National Affordable Housing Management Association Connie Campos, Policy Analyst for Housing, National Association of Housing and Redevelopment Officials Major Galloway, Policy Analyst for Housing, National Association of Housing and Redevelopment Officials Debra Gross, Research Director, Council of Large Public Housing Authorities John Hiscox, Executive Director, Macon Housing Authority Walter Huelsman, Vice President and National Director of Housing Consulting, DMG-Maximus, Inc. 1. After additional discussions with HUD to clarify the information provided in comment 1, we included the data from HUD’s suggested paragraph with certain appropriate modifications. 2. We deleted the footnote as suggested. 3. We changed the reference as requested.
Pursuant to a congressional request, GAO provided information on welfare reform's financial impact on the Department of Housing and Urban Development's (HUD) budget, focusing on what: (1) studies have been done on welfare reform's financial impact on public and assisted housing; and (2) methodological and data issues, if any, arise when researchers estimate welfare reform's financial impact on low-income housing. GAO noted that: (1) officials at housing agencies and researchers at government agencies, universities, trade associations, and a consulting firm have estimated welfare reform's financial impact on some components of HUD's housing subsidy programs; (2) GAO identified 13 studies that estimated this impact; (3) these studies of welfare reform's financial impact on HUD's housing subsidy programs varied in their geographic scope, focus and assumptions, methods, and findings; (4) some studies also estimated welfare reform's impact under alternative scenarios and therefore developed a range of estimates of welfare reform's cost for HUD and housing agencies; (5) the estimates in the studies GAO reviewed generally varied with the issues on which they focused and the assumptions on which they were based; (6) some of the authors of the studies GAO reviewed told it that their estimates might not hold up over time because some federal and state welfare laws have changed since the estimates were first developed and the economy has been more robust than anticipated; (7) experts with whom GAO spoke generally agree that several issues complicate efforts to forecast welfare reform's financial impact on HUD's housing subsidy programs; (8) these issues include not only those encountered in predicting welfare reform's impact on the recipients and providers of public assistance, but also those specific to estimating welfare reform's financial impact on the residents of assisted housing, providers of subsidized housing, and HUD; (9) in general, wide variations in state welfare 
plans and their implementation complicate the estimation of welfare reform's impact; (10) the employment and wage prospects for welfare recipients depend, in part, on future local and national economic health and on recipients' behavior; (11) housing experts generally agree that estimating welfare reform's impact on housing programs is more complex than estimating welfare reform's impact overall because of possible differences in the behavior of welfare recipients with and without housing assistance, as well as variations in policies adopted by housing agencies and landlords; and (12) a lack of reliable data further hampers researchers' efforts to predict welfare reform's financial impact on HUD's housing programs.
Wildland fires are both natural and inevitable and play an important ecological role on the nation’s landscapes. These fires have long shaped the composition of forests and grasslands, periodically reduced vegetation densities, and stimulated seedling regeneration and growth in some species. Wildland fires can be ignited by lightning or by humans either accidentally or intentionally. As we have described in previous reports, however, various land use and management practices over the past century—including fire suppression, grazing, and timber harvesting—have reduced the normal frequency of fires in many forest and rangeland ecosystems. These practices contributed to abnormally dense, continuous accumulations of vegetation, which in turn can fuel uncharacteristically severe wildland fires in certain ecosystems. According to scientific reports, several other factors have contributed to overall changes to ecosystems and the landscapes on which they depend, altering natural fire regimes and contributing to an increased frequency or intensity of wildland fire in some areas. For example, the introduction and spread of highly flammable invasive nonnative grasses, such as cheatgrass, along with the expanded range of certain flammable native species, such as western juniper, in the Great Basin region of the western United States—including portions of California, Idaho, Nevada, Oregon, and Utah—have increased the frequency and intensity of fire in the sagebrush steppe ecosystem. Changing climate conditions, including drier conditions in certain parts of the country, have increased the length and severity of wildfire seasons, according to many scientists and researchers. For example, in the western United States, the average number of days in the fire season has increased from approximately 200 in 1980 to approximately 300 in 2013, according to the 2014 Quadrennial Fire Review.
In Texas and Oklahoma this increase was even greater, with the average fire season increasing from fewer than 100 days to more than 300 during this time. According to the U.S. Global Change Research Program’s 2014 National Climate Assessment, projected climate changes suggest that western forests in the United States will be increasingly affected by large and intense fires that occur more frequently. Figure 1 shows the wildfire hazard potential across the country as of 2014. In addition, development in the wildland-urban interface (WUI) has continued to increase over the last several decades, increasing wildland fire’s risk to life and property. According to the 2014 Quadrennial Fire Review, 60 percent of new homes built in the United States since 1990 were built in the WUI, and the WUI includes 46 million single-family homes and an estimated population of more than 120 million. In addition to increased residential development, other types of infrastructure are located in the WUI, including power lines, campgrounds and other recreational facilities, communication towers, oil and gas wells, and roads. Some states, such as New Mexico and Wyoming, have experienced significant increases in oil and gas development over the past decade, adding to the infrastructure agencies may need to protect. Under the National Forest Management Act and the Federal Land Policy and Management Act of 1976, respectively, the Forest Service and BLM manage their lands for multiple uses such as protection of fish and wildlife habitat, forage for livestock, recreation, timber harvesting, and energy production. FWS and NPS manage federal lands under legislation that primarily calls for conservation; management for activities such as harvesting timber for commercial use is generally precluded. BIA is responsible for the administration and management of lands held in trust by the United States for Indian tribes, individuals, and Alaska Natives. 
These five agencies manage about 700 million surface acres of land in the United States, including national forests and grasslands, national wildlife refuges, national parks, and Indian reservations. The Forest Service and BLM manage the majority of these lands. The Forest Service manages about 190 million acres; BLM manages about 250 million acres; and BIA, FWS, and NPS manage 55, 89, and 80 million acres, respectively. Figure 2 shows the lands managed by each of these five agencies. Severe wildland fires and the vegetation that fuels them may cross the administrative boundaries of the individual federal land management agencies or the boundaries between federal and nonfederal lands. State forestry agencies and other entities—including tribal, county, city, and rural fire departments—share responsibility for protecting homes and other private structures and have primary responsibility for managing wildland fires on nonfederal lands. Most of the increased development in the WUI occurs on nonfederal lands, and approximately 70,000 communities nationwide are considered to be at high risk from wildland fire. Some of these communities have attempted to reduce risk of wildland fire through programs aimed at improving fire risk awareness and promoting steps to reduce their risk, such as the Firewise Communities program. Wildland fire management consists of three primary components: preparedness, suppression, and fuel reduction. Preparedness. To prepare for a wildland fire season, the five land management agencies acquire firefighting assets—including firefighters, fire engines, aircraft, and other equipment—and station them either at individual federal land management units or at centralized dispatch locations in advance of expected wildland fire activity. The primary purpose of acquiring these assets is to respond to fires before they become large—a response referred to as initial attack. 
The agencies fund the assets used for initial attack primarily from their wildland fire preparedness accounts. Suppression. When a fire starts, interagency policy calls for the agencies to consider land management objectives—identified in land and fire management plans developed by each land management unit—and the structures and resources at risk when determining whether or how to suppress the fire. A wide spectrum of strategies is available to choose from, and the land manager at the affected local unit is responsible for determining which strategy to use—from conducting all-out suppression efforts to monitoring fires within predetermined areas in order to provide natural resource benefits. When a fire is reported, the agencies are to follow a principle of closest available resource, meaning that, regardless of jurisdiction, the closest available firefighting equipment and personnel respond. In instances when fires escape initial attack and grow large, the agencies respond using an interagency system that mobilizes additional firefighting assets from federal, state, and local agencies, as well as private contractors, regardless of which agency or agencies have jurisdiction over the burning lands. The agencies use an incident management system under which specialized teams are mobilized to respond to wildland fires, with the size and composition of the team determined by the complexity of the fire. Federal agencies typically fund the costs of these activities from their wildland fire suppression accounts. Fuel reduction. Fuel reduction refers to agencies’ efforts to reduce potentially hazardous vegetation that can fuel fires, such as brush and “ladder fuels” (i.e., small trees and other vegetation that can carry fire vertically to taller vegetation such as large trees), in an effort to reduce the potential for severe wildland fires, lessen the damage caused by fires, limit the spread of flammable invasive species, and restore and maintain healthy ecosystems. 
The agencies use multiple approaches for reducing this vegetation, including setting fires under controlled conditions (prescribed burns), mechanical thinning, herbicides, certain grazing methods, or combinations of these and other approaches. The agencies typically fund these activities from their fuel reduction accounts. Risk is an inherent element of wildland fire management. Federal agencies acknowledge this risk, and agency policies emphasize the importance of managing their programs accordingly. For example, Forest Service guidance states that “the wildland fire management environment is complex and possesses inherent hazards that can—even with reasonable mitigation—result in harm.” According to a 2013 Forest Service report on decision making for wildfires, risk management is to be applied at all levels of wildfire decision making, from the individual firefighter on the ground facing changing environmental conditions to national leaders of the fire management agencies weighing limited budgets against increasingly active fire seasons. For example, the report explains that, during individual wildland fires, risk can be defined as “a function of values, hazards, and probability.” Congress, the Office of Management and Budget, federal agency officials, and others have raised questions about the growing cost of federal wildland fire management. According to a 2015 report by Forest Service researchers, for example, the amount the Forest Service spends on wildland fire management has increased from 17 percent of the agency’s total funds in 1995 to 51 percent of funds in 2014. The report noted that this has come at the cost of other land management programs within the agency, such as vegetation and watershed management, some of which support activities intended to reduce future wildfire damage. 
From fiscal years 2004 through 2014, the Forest Service and Interior agencies obligated $14.9 billion for suppression, $13.4 billion for preparedness, and $5.7 billion for fuel reduction. Figure 3 shows the agencies’ total obligations for these three components of wildland fire management for fiscal years 2004 through 2014. After receiving its annual appropriation, the Forest Service allocates preparedness and fuel reduction funds to its nine regional offices, and those offices in turn allocate funds to individual field units (national forests and grasslands). Interior’s Office of Wildland Fire, upon receiving its annual appropriation, allocates preparedness and fuel reduction funds to BIA, BLM, FWS, and NPS. These agencies then allocate funds to their regional or state offices, which in turn allocate funds to individual field units (e.g. national parks or national wildlife refuges). The Forest Service and Interior agencies do not allocate suppression funding to their regions. These funds are managed at the national level. Federal wildland fire management policy has evolved over the past century in response to changing landscape conditions and greater recognition of fire’s role in maintaining resilient and healthy ecosystems. According to wildland fire historians, in the late 1800s and early 1900s, the nation experienced a series of large and devastating fires that burned millions of acres, including highly valued timber stands. In May 1908, federal legislation authorized the Forest Service to use any of its appropriations to fight fires. During the following decades, the Forest Service and Interior agencies generally took the view that fires were damaging and should be suppressed quickly, with policies and practices evolving gradually. For example, in 1935, the Forest Service issued the “10 a.m. policy,” which stated that whenever possible, every fire should be contained by 10 a.m. on the day after it was reported. 
In more remote areas, suppression policies had minimal effect until fire towers, lookout systems, and roads in the 1930s facilitated fire detection and firefighter deployment. The use of aircraft to drop fire retardants—that is, chemicals designed to slow fire growth—began in the 1950s, according to agency documents. Subsequent to the introduction of the 10 a.m. policy, some changes to agency policies lessened the emphasis on suppressing all fires, as some federal land managers took note of the unintended consequences of suppression and took steps to address those effects. In 1943, for example, the Chief of the Forest Service permitted national forests to use prescribed fire to reduce fuels on a case-by-case basis. In 1968, NPS revised its fire policy, shifting its approach from suppressing all fires to managing fire by using prescribed burning and allowing fires started by lightning to burn in an effort to accomplish approved management objectives. In 1978, the Forest Service revised its policy to allow naturally ignited fires to burn in some cases, and formally abandoned the 10 a.m. policy. Two particularly significant fire events—the Yellowstone Fires of 1988, in which approximately 1.3 million acres burned, and the South Canyon Fire of 1994, in which 14 firefighters lost their lives—led the agencies to fundamentally reassess their approach to wildland fire management and develop the Federal Wildland Fire Management Policy of 1995. Under the 1995 policy, the agencies continued to move away from their emphasis on suppressing every wildland fire, seeking instead to (1) make communities and resources less susceptible to being damaged by wildland fire and (2) respond to fires so as to protect communities and important resources at risk while considering both the cost and long-term effects of that response. The policy was reaffirmed and updated in 2001, and guidance for its implementation was issued in 2003 and 2009.
In 2000, after one of the worst wildland fire seasons in 50 years, the President asked the Secretaries of Agriculture and the Interior to submit a report on managing the impact of wildland fires on communities and the environment. The report, along with congressional approval of increased appropriations for wildland fire management for fiscal year 2001, as well as other related activities, formed the basis of what is known as the National Fire Plan. The National Fire Plan emphasized the importance of reducing the buildup of hazardous vegetation that fuels severe fires, stating that unless hazardous fuels are reduced, the number of severe wildland fires and the costs associated with suppressing them would continue to increase. In 2003, Congress passed the Healthy Forests Restoration Act, with the stated purpose of, among other things, reducing wildland fire risk to communities, municipal water supplies, and other at-risk federal land through a collaborative process of planning, setting priorities for, and implementing fuel reduction projects. Along with the development of policies governing their responses to fire, the agencies developed a basic operational framework within which they manage wildland fire incidents. For example, to respond to wildland fires affecting both federal and nonfederal jurisdictions, firefighting entities in the United States have, since the 1970s, used an interagency incident management system. This system provides an organizational structure that expands to meet a fire’s complexity and demands, and allows entities to share firefighting personnel, aircraft, and equipment. Incident commanders who manage the response to each wildland fire may order firefighting assets through a three-tiered system of local, regional, and national dispatch centers. Federal, tribal, state, and local entities and private contractors supply the firefighting personnel, aircraft, equipment, and supplies which are dispatched through these centers.
The agencies continue to use this framework as part of their approach to wildland fire management. Since 2009, the five federal agencies have made several changes in their approach to wildland fire management. The agencies have issued fire management guidance which, among other things, gave their managers greater flexibility in responding to wildland fires by providing for responses other than full suppression of fires. In collaboration with nonfederal partners such as tribal and state governments, they have also developed a strategy aimed at coordinating federal and nonfederal wildland fire management activities around common goals, such as managing landscapes for resilience to fire-related disturbances. In addition, Interior, and BLM in particular, have placed a greater emphasis on wildland fire management efforts in the sagebrush steppe ecosystem by issuing guidance and developing strategies aimed at improving the condition of this landscape. The agencies have also taken steps to change other aspects of wildland fire management, including changes related to improving fire management technology, line officer training, and firefighter safety. Agency officials told us the agencies are moving toward a more risk-based approach to wildland fire management. The extent to which the agencies’ actions have resulted in on-the-ground changes varied across agencies and regions, however, and officials identified factors, such as proximity to populated areas, that may limit their implementation of some of these actions. The agencies have increased their emphasis on using wildland fire to provide natural resource benefits rather than seeking to suppress all fires, in particular through issuing the 2009 Guidance for Implementation of Federal Wildland Fire Management Policy. 
Compared with interagency guidance issued in 2003, the 2009 guidance provided greater flexibility to managers in responding to wildland fire to achieve natural resource benefits for forests and grasslands, such as reducing vegetation densities and stimulating regeneration and growth in some species. The 2003 guidance stated that only one “management objective” could be applied to a single wildland fire—meaning that wildland fires could either be managed to meet suppression objectives or managed for continued burning to provide natural resource benefits, but not both. The 2003 guidance also restricted a manager’s ability to switch between full suppression and management for natural resource benefits, even when fire conditions changed. In contrast, under the 2009 interagency guidance, managers may manage individual fires for multiple objectives, and may change the management objectives on a fire as it spreads across the landscape. For example, managers may simultaneously attempt to suppress part of a fire that is threatening infrastructure or valuable resources while allowing other parts of the same fire to burn to achieve desired natural resource benefits. According to agency documents, the 2009 guidance was intended to reduce barriers to risk-informed decision making, allowing the response to be more commensurate with the risk posed by the fire, the resources to be protected, and the agencies’ land management objectives. However, agency officials varied in their opinions about the extent to which this guidance changed their management practices, with some telling us it marked a departure from their past practices, and others telling us it did not significantly change the way they managed wildland fire. Several headquarters and regional agency officials told us the guidance improved managers’ ability to address natural resource needs when managing a fire, rather than simply suppressing all fires.
For example, BIA officials told us that the flexibility provided through the guidance allowed managers on the San Carlos Apache Reservation in southeastern Arizona to use a variety of management strategies to manage the 2014 Skunk Fire. According to a BIA fire ecologist, managers were able to maximize firefighter safety while fostering desirable ecological benefits, including helping to restore the historical fire regime to the area. In addition, Forest Service officials from several regions, including the Rocky Mountain and Intermountain Regions, told us they have used the full range of management options in the guidance more frequently over the last 5 years, and they credited the 2009 guidance for giving them the ability to manage fires and their associated risks. For example, during the 2011 Duckett Fire on the Pike-San Isabel National Forests in Colorado, managers attempted to contain part of the fire to protect a subdivision while allowing the portion of the fire uphill from the subdivision to burn into wilderness. Officials told us that, prior to the 2009 guidance, they would likely have responded to this fire by attempting full suppression, which could have put firefighters at risk at the upper part of the fire because of the steep and rugged terrain. In contrast, other officials told us the effect of the guidance was minimal because certain factors—including proximity to populated areas, size of the land management unit, and concerns about resources necessary to monitor fires—limit their ability to manage wildland fire incidents for anything other than suppression. For example, Forest Service officials from the Eastern Region told us that they try to use fire to provide natural resource benefits where possible, but they have fewer opportunities for doing so because of the smaller size of Forest Service land units in this region, which makes it more likely the fires will cross into nonfederal land, and their proximity to many areas of WUI. 
Similarly, Forest Service officials from the Pacific Southwest Region told us they are limited in using the added flexibility provided through the 2009 interagency guidance in Southern California, in part because the forests there are so close to major cities. However, in other more remote areas of California, these officials said they have managed wildland fires concurrently for one or more objectives, and objectives can change as the fire spreads across the landscape. Officials from BLM’s Utah State Office also told us that their changed landscape is a limiting factor in responding to wildland fire. Specifically, cheatgrass, a nonnative, highly flammable grass, has replaced much of the native vegetation of the sagebrush steppe ecosystem that used to exist on the lands they manage in western Utah. As a result, introducing fire into this area could be detrimental rather than helpful because cheatgrass’s flammability makes fires difficult to control. Several officials also told us that managing wildland fires for objectives beyond full suppression, as provided for in the 2009 guidance, is highly dependent on circumstance. Officials told us that allowing fires to burn requires the agencies to devote assets to monitoring the fires to prevent them from escaping, which—especially for long-duration fires—can reduce the assets available to respond to other fires that may occur. For example, in 2012, in response to what it predicted to be an expensive and above-normal fire season, the Forest Service issued guidance to its regions limiting the use of any strategy other than full suppression (i.e., any strategy that involved allowing fires to burn for natural resource benefits) for the remainder of that year. The Forest Service noted that it was issuing this guidance because of concerns about committing the assets necessary to monitor long-duration fires that were allowed to burn in order to provide natural resource benefits. 
In 2015, during the Thunder Creek fire in North Cascades National Park, concerns about the resources needed to monitor the fire if it were allowed to burn to provide natural resource benefits led NPS managers instead to order full suppression efforts to help ensure that the resources would be available for other fires. In a press release about the fire, NPS noted that experts anticipated a very high potential for wildfire in 2015, leading to agency concerns that significant fire activity throughout the west could leave few available firefighting resources later in the season. Another change since 2009 was the completion in 2014 of the National Cohesive Wildland Fire Management Strategy (Cohesive Strategy), developed in collaboration with partners from multiple jurisdictions (i.e., tribal, state, and local governments, nongovernmental partners, and public stakeholders) and aimed at coordinating wildland fire management activities around common wildland fire management goals. The agencies have a long history of collaboration with nonfederal partners in various aspects of wildland fire management, including mobilizing firefighting resources during wildland fire incidents and conducting fuel reduction projects across jurisdictions. The Cohesive Strategy is intended to set broad, strategic, nationwide direction for such collaboration. 
Specifically, the Cohesive Strategy provides a nationwide framework designed to more fully integrate fire management efforts across jurisdictions, manage risks, and protect firefighters, property, and landscapes by setting “broad, strategic, and national-level direction as a foundation for implementing actions and activities across the nation.” The vision of the Cohesive Strategy is “to safely and effectively extinguish fire, when needed; use fire where allowable; manage our natural resources; and as a nation, live with wildland fire.” The Cohesive Strategy identified three goals: (1) landscapes across all jurisdictions are resilient to fire-related disturbances in accordance with management objectives; (2) human populations and infrastructure can withstand wildfire without loss of life or property; and (3) all jurisdictions participate in developing and implementing safe, effective, and efficient risk-based wildfire management decisions. According to a senior Forest Service official, the Wildland Fire Leadership Council is responsible for providing a national, intergovernmental platform for implementing the strategy. In September 2014, an interim National Cohesive Strategy Implementation Task Group completed an implementation framework that included potential roles, responsibilities, and membership for a “national strategic committee” that is intended to provide oversight and leadership on implementing the strategy. Agency officials differed in the extent to which they viewed the Cohesive Strategy as having a significant effect on their wildland fire management activities. On the one hand, several headquarters and regional agency officials told us the Cohesive Strategy has improved wildland fire management. 
For example, Forest Service officials from the Southern Region told us the Cohesive Strategy has reinforced existing work that better enabled them to collaborate on new projects, which they told us is important because nearly 85 percent of the land base in the region is privately owned, and little could be achieved without collaboration. Forest Service officials cited one instance in which they signed a regional level agreement that will cover several state chapters of The Nature Conservancy to exchange resources for fuel reduction treatment and to promote public understanding of its benefits—an action they said was supported by the Cohesive Strategy. Similarly, Forest Service officials from the Intermountain Region told us about several efforts that have been implemented across their region that they attribute to the Cohesive Strategy. For example, in 2014, the Forest Service, the state of Utah, and other stakeholders collaborated on the implementation of Utah’s Catastrophic Wildfire Reduction Strategy, which aims to identify where fuel treatment across the state would be most beneficial. In contrast, many officials told us they have collaborated with partners for years and did not find the additional direction provided through the Cohesive Strategy to be much different than how they already operated. For example, several regional BLM, FWS, and NPS officials told us they have long worked with nonfederal partners on issues related to wildland fire management and that the Cohesive Strategy did not change those relationships. However, implementation of collaborative actions stemming from the Cohesive Strategy may be limited by such factors as differences in laws and policies among federal, tribal, state, and local agencies. 
For example, while the 2009 federal interagency guidance provided federal managers with additional flexibility in managing a single fire for multiple purposes, laws and regulations at the state and local levels typically require full suppression of all fires, according to the 2014 Quadrennial Fire Review. For instance, according to California state law, state forest officials in California are “charged with the duty of preventing and extinguishing forest fires.” Since 2009, Interior and BLM have placed a greater emphasis on wildland fire management, restoration, and protection related to the sagebrush steppe ecosystem—particularly with respect to habitat for the greater sage-grouse. Several changes, including urbanization and increased infrastructure built in support of various activities (e.g., roads and power lines associated with oil, gas, or renewable energy projects), have altered the sagebrush steppe ecosystem in the Great Basin region of the western United States. In addition, the introduction and spread of highly flammable invasive nonnative grasses such as cheatgrass have altered this ecosystem by increasing the frequency and intensity of fire. As of July 2015, FWS was evaluating whether to list the greater sage-grouse, a species reliant on the sagebrush steppe ecosystem, as a threatened or endangered species under the Endangered Species Act. FWS has noted the importance of fire and fuel management activities in reducing the threat to sage-grouse habitat. Beginning in 2011, BLM issued guidance to its state offices emphasizing the importance of sage-grouse habitat in fire operations and the need for fuel reduction activities to address concerns about the habitat, more than half of which is located on BLM-managed lands. In 2014, the agency issued guidance reiterating this importance and stating that it would make changes in funding to allow field units to place greater focus on reducing fire’s threats in sage-grouse habitat areas.
In January 2015, the Secretary of the Interior issued a Secretarial Order to enhance policies and strategies “for preventing and suppressing rangeland fire and for restoring sagebrush landscapes impacted by fire across the West.” The order established the Rangeland Fire Task Force and directed it to, among other things, complete a report on activities to be implemented ahead of the 2016 Western fire season. Under the order, the task force also was to address longer term actions to implement the policy and strategy set forth by the order. In a report issued in May 2015, An Integrated Rangeland Fire Management Strategy, the task force called for prepositioning firefighting assets where priority sage-grouse habitat exists, including moving assets from other parts of the country as available. The goal is to improve preparedness and suppression capability during initial stages of a wildfire to increase the chances of keeping fires small and reduce the loss of sage-grouse habitat. The report also identified actions aimed at improving the targeting of fuel reduction activities, including identifying priority landscapes and fuel management priorities within those landscapes. These actions are to be completed by the end of September 2015 and continuously improved upon in subsequent years. According to BLM state officials, the increased emphasis on sage-grouse habitat will significantly change how they manage their fuel reduction programs. BLM officials from states that include sage-grouse habitat said they expect a large increase in fuel reduction treatment funding and increased project approvals. In contrast, BLM officials from states without this habitat told us they expect significant funding decreases, limiting their capacity to address other resource issues important for nonsagebrush ecosystems. 
Since 2009, the agencies also have taken steps to change other areas of wildland fire management, including technology for wildland fire planning and response, line-officer training, and firefighter safety. The agencies have applied new technologies to improve wildland fire management planning and response. Prominent among them is the Wildland Fire Decision Support System (WFDSS), a Web-based decision-support tool that assists fire managers and analysts in making strategic and tactical decisions for fire incidents. WFDSS replaced older tools, some of which had been used for more than 30 years and were not meeting current fire management needs, according to the system’s website. According to this site, WFDSS has several advantages over the older systems, such as enabling spatial data layering, increasing use of map displays, preloading information about field units’ management objectives, and allowing for use in both single and multiple fire situations. Officials from several agencies told us that using WFDSS improved their ability to manage fires by allowing information from fire management plans to be loaded into WFDSS and providing substantial real-time fire information on which to make decisions. For example, one Forest Service official told us that, at one point in a recent particularly active fire season in the Pacific Northwest Region, the system processed information on approximately 20 concurrent fires that managers could monitor in real time. As a result, they were able to make strategic and risk-informed decisions about the resource allocations needed for each fire, including decisions to let some fires burn to meet natural resource benefit objectives. According to Forest Service reviews of several fires that occurred in 2012, however, some managers said WFDSS did not provide effective decision support for firefighters because the system underestimated fire behavior or did not have current information.
According to officials from several agencies, another example of updated wildland fire technology has been the replacement of traditional paper-based fire management plans with electronic geospatial-based plans. Federal wildland fire management policy directs each agency to develop a fire management plan for all areas it manages with burnable vegetation. A fire management plan, among other things, identifies fire management goals for different parts of a field unit. According to an interagency document describing geospatial-based plans, agency officials expect such plans to increase efficiency because the plans can more easily be updated to account for changes in the landscape resulting from fires, fuel reduction treatments, and other management activities. In addition, the electronic format is designed to allow plans to more easily be shared across multiple users, including personnel responding to wildland fires. Agency officials mentioned other technological improvements, such as the development of an “Enterprise Geospatial Portal” providing wildland fire data in geospatial form using a Web-based platform, although many officials also told us that additional improvements are needed in wildland fire technology overall. In addition to specific technologies, in 2012 the Forest Service and Interior issued a report titled “Wildland Fire Information and Technology: Strategy, Governance, and Investments,” representing the agencies’ efforts to develop a common wildland fire information and technology vision and strategy. The agencies signed a Memorandum of Understanding later that same year intended to establish a common management approach for information and technology services.
Nevertheless, the 2014 Quadrennial Fire Review concluded that the wildland fire management community does not have an agenda for innovation and technology adoption or a list of priorities, stating that the wildland fire community “sometimes struggles to define common technology priorities and implement integrated, enterprise-level solutions” and noting that there are more than 400 information technology systems in use by the wildland fire community. The report provides recommendations on actions the agencies could consider for improvement; however, because it was issued in May 2015, it is too early to determine what, if any, actions the agencies have taken. In commenting on a draft of this report, Interior stated that the agencies are completing an investment strategy for wildland fire applications and supporting infrastructure, but did not provide an expected date for its completion. Officials from several agencies told us that, since 2009, the agencies have increased training efforts, particularly those aimed at improving line officers’ knowledge about, and response to, wildland fires. Line officers are land unit managers such as national forest supervisors, BLM district managers, and national park superintendents. During a wildland fire, staff from “incident management teams” with specific wildland firefighting and management training manage the response, and line officers associated with the land unit where the fire is occurring must approve major decisions that incident management teams make during the response. Officials at BLM’s Oregon/Washington State Office, for example, told us they provide line officers with day-long simulation exercises, as well as shadowing opportunities that give line officers experience on actual wildland fires. 
Beginning in 2007, the Forest Service initiated a Line Officer Certification Program and began a coaching and mentoring program to provide on-the-ground experience for preparing line officers to act as agency administrators during wildland fires or other critical incidents. This program is aimed at giving officials who do not have wildland fire experience the opportunity to work under the guidance of a coach with wildland fire experience. According to Forest Service documents, this program has evolved substantially, in part to address the increased demand for skills necessary to manage increasingly complex wildland fires. In May 2015, the Forest Service issued guidance for the program and called for each Forest Service regional office to administer it within the regions. Officials told us that, since 2009, the agencies have, in some cases, changed firefighting tactics to better protect firefighters, including making greater use of natural barriers to contain fire instead of attacking fires directly. The agencies have also issued additional guidance aimed at emphasizing the primacy of firefighter safety. In 2010, the agencies developed and issued the “Dutch Creek Protocol” (named after a wildland fire where a firefighter died), which provided a standard set of protocols for wildland firefighting teams to follow during an emergency medical response or when removing and transporting personnel from a location on a fire. Both the Forest Service and Interior have also issued agency direction stating that firefighter safety should be the priority of every fire manager. The agencies assess the effectiveness of their wildland fire management programs in several ways, including through performance measures, efforts to assess specific activities, and reviews of specific wildland fire incidents.
Both the Forest Service and Interior are developing new performance measures and evaluations, in part to help better assess the results of their current emphasis on risk-based management, according to agency officials. In addition, the agencies have undertaken multiple efforts, such as studies, to assess the effectiveness of activities including fuel reduction treatments and aerial firefighting. The agencies also conduct reviews of their responses to wildland fires. However, they have not consistently followed agency policy in doing so or used specific criteria for selecting the fires they have reviewed, limiting their ability to help ensure that their fire reviews provide useful information and meaningful results. Both the Forest Service and Interior use various performance measures, such as the number of WUI acres treated to reduce fuels and the percentage of wildland fires contained during initial attack, to assess their wildland fire management effectiveness. These measures are reported in, among other things, the agencies’ annual congressional budget justifications. Officials from both the Forest Service and Interior told us their performance measures need improvement to more appropriately reflect their approach to wildland fire management and, in June 2015, officials from both agencies told us that they were working to improve them. For example, several performance measures for both agencies use a “stratified cost index” to help analyze suppression costs on wildfires. The index is based on a model that compares the suppression costs of fires that have similar characteristics, such as fire size, fuel types, and proximity to communities, and identifies the percentage of fires with suppression costs that exceeded the index. 
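The comparison that underlies such an index can be illustrated with a small, entirely hypothetical example. The fire data, the strata of "similar" fires, and the 25 percent tolerance below are all invented for illustration; the agencies' actual index is built from a statistical regression model on fire characteristics, not simple within-group averages.

```python
# Illustrative sketch of a "stratified cost index"-style comparison.
# All fire data, strata, and the 25 percent tolerance are hypothetical;
# the agencies' actual index uses a regression model, not group means.
from statistics import mean

# (stratum of comparable fires by size/fuel type/proximity, cost in $ millions)
fires = [
    ("large_timber_near_wui", 32.0),
    ("large_timber_near_wui", 18.0),
    ("large_timber_near_wui", 21.0),
    ("small_grass_remote", 0.8),
    ("small_grass_remote", 2.4),
    ("small_grass_remote", 1.0),
]

def pct_exceeding_index(fires, tolerance=1.25):
    """Percent of fires whose suppression cost exceeds the mean cost of
    comparable fires (their stratum) by more than the given tolerance."""
    by_stratum = {}
    for stratum, cost in fires:
        by_stratum.setdefault(stratum, []).append(cost)
    index = {s: mean(costs) for s, costs in by_stratum.items()}
    exceeding = sum(1 for s, cost in fires if cost > tolerance * index[s])
    return 100.0 * exceeding / len(fires)
```

In this invented data set, the $32 million and $2.4 million fires each exceed 125 percent of their stratum's mean, so 2 of the 6 fires (about 33 percent) exceed the index; such outliers are the fires the comparison is meant to flag for scrutiny.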
We found in a June 2007 report, however, that the index was not entirely reliable and that using the index as the basis for comparison may not allow the agencies to accurately identify fires where more, or more-expensive, resources than needed were used. The agencies continue to use the index, but have acknowledged its shortcomings. The Forest Service reported in its fiscal year 2016 budget justification to Congress that improvements were forthcoming. In April 2015, Forest Service officials told us they have incorporated detailed geospatial information into the model on which the index is based to help yield more accurate predictions of suppression expenditures and have submitted the model for peer review. Once that is complete, the agencies plan to begin to implement the updated model, but officials did not provide a time frame for doing so. Both agencies have also made efforts to improve their performance measures to better reflect their emphasis on a risk-based approach to wildland fire management. In fiscal year 2014, Interior began using a new performance measure intended to better reflect a variety of strategies in addition to full suppression: “Percent of wildfires on DOI-managed landscapes where the initial strategy(ies) fully succeeded during the initial response phase.” The same year, the Forest Service began developing a performance measure intended to reflect that, in some cases, allowing naturally ignited fires to burn can provide natural resource benefits at a lower cost and lower risk to personnel than fully suppressing the fire as quickly as possible: “Percent of acres burned by natural ignition with resource benefits.” Forest Service officials told us they are working with field units to evaluate whether this measure will effectively assess their efforts to implement a risk-based approach to fire management and that they will adjust it as needed. The officials told us they plan to finalize the measure and use it in 2017.
Also, in fiscal year 2014, the Forest Service began developing a performance measure that would assess the risk that wildland fire presents to highly valued resources such as communities and watersheds. This measure is known as the “National Forest System wildfire risk index.” According to the agency’s fiscal year 2016 budget justification, it would create an index of relative fire risk based on the likelihood of a large fire affecting these highly valued resources. It may also incorporate factors measuring the relative importance of these resources and the expected effects that might occur from fire. The Forest Service plans to establish a national baseline measure for this index in 2015 and then periodically remeasure it, likely every 2 years, to determine if overall risk has been reduced, according to Forest Service officials. Changes that could affect the index include those resulting from fuel reduction treatments, wildland fire, forest management activities, vegetative growth, and increased WUI development, among others, according to the agency’s 2016 budget justification. As with the performance measure described above, agency officials told us they will evaluate whether the measure meets their needs before adopting it; if it meets their needs, they plan to finalize the measure and use it in 2017. The agencies have also undertaken multiple efforts to assess the effectiveness of particular activities, such as fuel reduction and aerial firefighting. Regarding fuel reduction activities, we found in September 2007 and September 2009 that demonstrating the effectiveness of fuel reduction treatments is inherently complex and that the agencies did not have sufficient information to evaluate fuel treatment effectiveness, such as the extent to which treatments changed fire behavior. 
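The budget justification does not specify a formula for the planned index. The sketch below only assumes the generic risk framing of likelihood multiplied by consequence, summed over highly valued resources; the resource names, weights, and values are all invented for illustration.

```python
# Hypothetical sketch of a relative wildfire risk index. The Forest Service's
# planned formula is not specified in the report; this assumes the generic
# framing risk = likelihood x importance x expected effect, summed over
# highly valued resources. All names and values here are invented.

def risk_index(resources):
    """Sum each resource's large-fire likelihood weighted by its relative
    importance and the expected fraction of its value lost to fire."""
    return sum(r["likelihood"] * r["importance"] * r["expected_loss"]
               for r in resources)

baseline = [
    {"name": "community", "likelihood": 0.10, "importance": 1.0, "expected_loss": 0.6},
    {"name": "watershed", "likelihood": 0.20, "importance": 0.7, "expected_loss": 0.3},
]

# A fuel treatment that halves large-fire likelihood would lower the index,
# the kind of change a periodic remeasurement is intended to detect.
after_treatment = [dict(r, likelihood=r["likelihood"] * 0.5) for r in baseline]
```

Under this framing, remeasuring the index after fuel treatments, wildland fire, vegetative growth, or new WUI development would show whether overall relative risk has moved up or down, which is the stated purpose of the biennial remeasurement.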
Without such information, we concluded that the agencies could not ensure that fuel reduction funds were directed to the areas where they can best minimize risk to communities and natural and cultural resources. Accordingly, we recommended that the agencies take actions to develop additional information on fuel treatment effectiveness. While the agencies took steps to address this recommendation, they are continuing efforts to improve their understanding of fuel treatment effectiveness. For example, the Forest Service and Interior agencies use a system called Fuel Treatment Effectiveness Monitoring to document and assess fuel reduction treatment effectiveness. The Forest Service began requiring such assessments in 2011 and Interior requested such assessments be completed starting in 2012. Under this approach, the agencies are to complete a monitoring report whenever a wildfire interacts with a fuel treatment and enter the information into the system. Officials told us that additional efforts are under way to help understand other aspects of fuel treatment effectiveness. For example, in February 2015, the Joint Fire Science Program completed its strategy to implement the 2014 Fuel Treatment Science Plan. It includes as one of its goals the “development of measures/metrics of effectiveness that incorporate ecological, social, resilience, and resource management objectives at the regional and national level.” The Forest Service and Interior are also implementing an effort known as the Aerial Firefighting Use and Effectiveness Study, begun in 2012 to address concerns about limited performance information regarding the use of firefighting aircraft. As part of this effort, the agencies are collecting information on how aerial retardant and suppressant delivery affects fire behavior and plan to use this and other collected information to track the performance of specific aircraft types, according to the study website. 
This will help the agencies identify ways to improve their current fleet of aircraft and inform future aerial firefighting operations and aviation strategic planning, according to the website. Agency officials told us the study is not a one-time activity, but is an ongoing effort to continually provide information to help improve their use of firefighting resources. The Forest Service and the Interior agencies have conducted reviews to assess their effectiveness in responding to wildland fires but have not consistently followed agency policy in doing so and have not always used specific criteria for selecting the fires they reviewed. Officials from both the Forest Service and Interior told us that current agency policy regarding fire reviews overly emphasizes the cost of wildland fire suppression rather than the effectiveness of their response to fire. However, the agencies have neither updated their policies to better reflect their emphasis on effectiveness nor established specific criteria for selecting fires for review and conducting the reviews. By developing such criteria, the agencies may enhance their ability to obtain useful, comparable information about their effectiveness in responding to wildland fires, which, in turn, may help them identify needed improvements in their wildland fire approach. Congressional reports and agency policy have generally called for the agencies to review their responses to wildland fires involving federal expenditures of $10 million or more. For fiscal years 2003 through 2010, congressional committee reports directed the Forest Service and Interior to conduct reviews of large fire incidents, generally for the purpose of understanding how to better contain suppression costs; beginning in fiscal year 2006, these reports included a cost threshold, specifying that such reviews be conducted for fires involving federal expenditures of $10 million or more.
The agencies, in turn, have each developed their own policies that generally direct them to review each fire that exceeds the $10 million threshold. The agencies, however, have not consistently conducted reviews of fire incidents meeting the $10 million threshold, in part because, according to officials, current agency policy that includes the $10 million threshold does not reflect the agencies’ focus on assessing the effectiveness of their response to fire. However, the agencies have not developed specific criteria for selecting fire incidents for review. Forest Service officials told us that, rather than selecting all fires with federal expenditures of $10 million or more, they changed their approach to selecting fires to review. These officials told us that focusing exclusively on suppression costs when selecting fires limits the agency in choosing those fires where it can obtain important information and best assess management actions and ensure they are appropriate, risk-based, and effective. Forest Service officials told us the agency judgmentally selects incidents to review based on a range of broad criteria, such as complexity and national significance, taking into account political, social, natural resource, or policy concerns. Using these broad selection criteria, the Forest Service reviewed 5 wildland fires that occurred in 2012 and 10 that occurred in 2013. However, with these broad criteria it is not clear why the Forest Service selected those particular fires and not others. For example, the 2013 Rim Fire, which cost over $100 million to suppress—by far the costliest fire to suppress that year—and burned over 250,000 acres of land, was not among the 2013 fires selected for review. Moreover, the reviews completed for each of those years did not use consistent or specific criteria for conducting the reviews. 
As of July 2015, the agency had not selected the fires it will review from the 2014 wildland fire season and, when asked, agency officials did not indicate a time frame for doing so. Forest Service officials told us they believe it is appropriate to judgmentally select fires to provide them flexibility in identifying which fires to review and which elements of the fire response to analyze. Nevertheless, Forest Service officials also acknowledged the need to develop more specific criteria for selecting fires to review and conducting the reviews and, in July 2015, told us they are working to update their criteria for doing so. They provided us a draft update of the Forest Service policy manual, but this draft did not contain specific criteria for selecting fires for review or conducting the reviews. Moreover, officials did not provide a time frame for completing their update. Within Interior, BLM officials told us BLM completed its last fire review based on significant cost (i.e., federal expenditures of $10 million or more) in 2013. These officials told us that BLM, similar to the Forest Service, plans to shift the emphasis of its fire reviews to evaluate management actions rather than focusing on cost, and that officials are working to determine criteria for selecting fires for review. Interior headquarters officials told us that FWS and NPS have continued to follow the direction provided through their policies regarding reviews of fires that met the $10 million threshold. Interior headquarters officials, however, acknowledged the need to improve Interior’s approach to selecting fires for review to focus more on information about decision making rather than fire costs. In July 2015, the officials told us they plan to develop criteria other than cost for use by all Interior agencies in selecting fires to review, and that they plan to develop standard criteria for implementing the reviews. 
They stated that they expect this department-wide effort to be completed by the end of calendar year 2015 but did not provide information about how they planned to develop such criteria or the factors they would consider. Agency reports have likewise cited the need to improve both the processes for selecting fires for review and the implementation of the reviews. A 2010 report, for example, noted the importance of improving the selection of fires to review and stated that the agencies would benefit from a more productive review strategy. The report said the agencies’ existing approach to conducting reviews tended to produce isolated efforts and unrelated recommendations rather than establishing a consistent foundation for continuous improvement. A 2013 report assessing the usefulness of the Forest Service’s five reviews of 2012 fires noted shortcomings in consistency across the reviews, including unclear criteria for selecting fires and conducting reviews, as well as limitations in the specificity of the resulting reports and recommendations. As noted, both agencies have acknowledged the need to improve their criteria for selecting fires to review and conducting the reviews. By developing specific criteria in agency policies for selecting fires for review and conducting the reviews, the agencies may enhance their ability to help ensure that their fire reviews provide useful information and meaningful results. This is consistent with our previous body of work on performance management, which has shown that it is important for agencies to collect performance information to inform key management decisions, such as how to identify problems and take corrective actions and how to identify and share effective approaches. By collecting such performance information, the agencies may be better positioned to identify needed improvements in their wildland fire approach and thereby use their limited resources more effectively. 
The Forest Service and Interior determine the distribution of fire management resources in part on the basis of historical amounts but are developing new methods intended to better reflect current conditions. For suppression, the Forest Service and Interior manage funding as needed for units to respond to individual wildland fires. For preparedness, the Forest Service and Interior distribute resources based, in part, on historical funding levels generated by an obsolete system. The agencies are working to replace the system and develop new tools to help them distribute resources to reflect current landscape conditions, values at risk, and the probability of wildland fire. For fuel reduction, until recently, the Forest Service and Interior both distributed funds using the same system. In 2014, the Forest Service began using a new system to help it distribute fuel reduction funding in ways that better reflect current conditions. Interior is working to develop a system that likewise reflects current conditions. The agencies manage funding for suppression at the national level as needed for field units to respond to individual wildland fires. The overall amount of suppression funding the agencies obligate is determined by the complexity and number of wildland fire responses over the course of the fiscal year and can vary considerably from year to year. For example, federal agencies obligated approximately $1.7 billion for suppression in fiscal year 2006, $809 million in fiscal year 2010, and $1.9 billion in fiscal year 2012. (See app. II for more detailed information about suppression obligations by the Forest Service and the Interior agencies for fiscal years 2004 through 2014.) Each year, the agencies estimate the expected level of funding for suppression activities using the average of the previous 10 years of suppression obligations.
The estimated amount, however, has often been less than the agencies’ actual suppression obligations, particularly for the Forest Service. In all but 2 years since 2000, Forest Service suppression obligations have exceeded the 10-year average that forms the basis of the agency’s annual appropriation. To pay for wildfire suppression activities when obligations are greater than the amount appropriated for suppression, the Forest Service and Interior may transfer funds from other programs within their respective agencies as permitted by law. As we found in a prior report, these transfers can affect the agencies’ ability to carry out other important land management functions that are key to meeting their missions, such as restoration of forest lands and other improvements. For example, according to a Forest Service report, funding transfers led to a canceled fuel reduction project on the Santa Fe National Forest and the deferral of critical habitat acquisition on the Cibola National Forest, both located in New Mexico. In their annual budget justifications for fiscal years 2015 and 2016, the agencies proposed an alternative mechanism to fund suppression activities. Under that proposal, the agencies would receive 70 percent of the standard 10-year average of suppression obligations as their appropriation for wildland fire suppression, which reflects the amount the agencies spend to suppress approximately 99 percent of wildland fires. If suppression obligations exceed this amount, additional funds would be made available from a disaster funding account. Forest Service and Interior officials told us this proposal would allow them to better account for the variable nature of wildland fire seasons and reduce or eliminate the need to transfer funds from other accounts to pay for suppression. In addition, legislation pending in Congress would change how certain wildland fire suppression operations are funded.
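The budgeting arithmetic described above (a rolling 10-year average of suppression obligations, and the proposed 70 percent appropriation with a disaster-fund backstop) can be sketched in a few lines. The obligation figures below are hypothetical, and the functions are an illustrative reading of the mechanism, not the agencies' actual budget formulas:

```python
# Illustrative sketch (not an official agency calculation) of the suppression
# funding mechanisms described above, using hypothetical obligation figures.

def ten_year_average(obligations):
    """Standard estimate: average of the previous 10 years of suppression obligations."""
    recent = obligations[-10:]
    return sum(recent) / len(recent)

def proposed_appropriation(obligations, share=0.70):
    """Proposed mechanism: appropriate 70 percent of the 10-year average;
    obligations above that amount would draw on a disaster funding account."""
    return share * ten_year_average(obligations)

# Hypothetical annual suppression obligations, in millions of dollars
history = [700, 850, 1700, 900, 1100, 1300, 809, 1400, 1900, 1500]

estimate = ten_year_average(history)           # current budgeting basis
appropriation = proposed_appropriation(history)

actual = 1600  # a hypothetical above-average fire season
from_disaster_account = max(0, actual - appropriation)
```

Under the proposal, the 70 percent figure is not arbitrary: it reflects the amount the agencies spend to suppress approximately 99 percent of wildland fires, with only the costliest seasons drawing on the disaster account.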
The Forest Service and Interior distribute preparedness funding to their regions and agencies, respectively, based in part on information generated from a system that is now obsolete. The agencies attempted to develop a new system to distribute preparedness funding, but ended that effort in 2014 and are now working to develop different tools and systems. In distributing preparedness funds to individual forests, some Forest Service regions have developed additional tools to help them distribute funds; similarly, three of the four Interior agencies have developed additional tools to help them distribute preparedness funds to their regions. Overall preparedness obligations in 2014 totaled about $1.0 billion for the Forest Service and about $274 million for the Interior agencies. (See app. II for detailed information on each of the agencies’ obligations for preparedness for fiscal years 2004 through 2014.) To determine the distribution of preparedness funds from Forest Service headquarters to its regions, and from Interior to the department’s four agencies with wildland fire management responsibilities, the Forest Service and Interior rely primarily on amounts that are based on results from a budgeting system known as the National Fire Management Analysis System (NFMAS). That system, however, was terminated in the early 2000s, according to agency officials. Relying on the results from the last year NFMAS was used, and making only incremental changes from year to year, the Forest Service and Interior have not made significant shifts in the funding distribution across their respective regions and agencies over time, and they have generally maintained the same number and configuration of firefighting assets (e.g., fire engines and crews) in the same geographic areas from year to year. 
Several agency officials, however, told us that these amounts no longer reflect current conditions, in part because of changes to the landscape resulting from increased human development, climate change, and changes to land management policies that consider natural resource values differently than they did when NFMAS was in use. Beginning in 2002, the agencies attempted to replace NFMAS with an interagency system designed to help them determine the optimal mix and location of firefighting assets and distribute funds accordingly. In developing this system, known as the Fire Program Analysis system, the agencies’ goal was to develop “a comprehensive interagency process for fire planning and budget analysis identifying cost-effective programs to achieve the full range of fire management goals and objectives.” According to agency documents, this effort proved problematic because of the difficulty in modeling various aspects of wildland fire management. In addition, agency officials told us it is difficult to design a system that could account for multiple agencies’ different needs and varying missions. After more than a decade of work, and investment that Forest Service officials estimated at approximately $50 million, the agencies terminated the system’s development in September 2014. At that time, they stated that it “only delivered inconsistent and unacceptable results.” Since the termination of the Fire Program Analysis system, the agencies have continued to rely on results based on the terminated NFMAS, but have begun working on new tools to help them distribute funding and assets based on current conditions and updated information. Forest Service headquarters officials told us the agency is developing a new tool called the Wildland Fire Investment Portfolio System. 
According to these officials, this proposed system is intended to model scenarios such as large shifts in firefighting assets, various potential dispatch procedures, and changes in fire behavior due to climate change, which will allow managers, both at the national and individual unit level, to conduct resource trade-off analyses and assess whether assets are being used effectively. Forest Service officials told us that the agency is in the early stages of developing this proposed system and anticipates using it for planning and analysis purposes in fiscal year 2016. Interior documents state that Interior is developing a system called the Risk-Based Wildland Fire Management model, which Interior will use to help support funding distribution decisions to the four Interior agencies for both preparedness and fuel reduction. The proposed system will assess the probability and likely intensity of wildland fire, values at risk, and the expected value of acres likely to burn. A key element of this system will be the development of strategic business plans by each of the four Interior agencies, detailing how each agency intends to distribute its preparedness and fuel reduction funding to reduce the risks from wildland fire on its lands. Interior officials said that, once the agencies provide these business plans, Interior will assess them in making funding distribution decisions among the agencies. According to several Interior agency officials, identifying priority values at risk across Interior’s four agencies may be challenging given the variation in agency missions and the types of lands they manage. For example, a threatened species located primarily on BLM lands may be among BLM’s highest priorities, but a forested area relied upon by an Indian tribe for its livelihood may be among BIA’s highest priorities.
Interior officials told us that they expect to identify the prioritized values and issue guidance on the proposed system by the end of calendar year 2015, and then use its results to inform their fiscal year 2016 funding distributions to the four agencies. Once the Forest Service distributes preparedness funding to regions, it gives regions discretion to determine how to subsequently distribute funding to individual national forests, as long as those determinations are consistent with policy and annual budget program direction. Forest Service headquarters officials told us they do not plan to direct regions to use any specific system to help inform distributions to national forests, so that regions can have flexibility in distributing their funds and take into account local conditions and priorities. According to agency officials, most regions distribute funding to individual national forests based on historical amounts resulting from NFMAS. However, two regions have changed the way they determine funding distribution to individual national forests to better reflect current landscape conditions. The Rocky Mountain Region uses a new system that ranks each of its forests according to a “risk priority score.” According to regional officials, use of the system has resulted in shifts in funding across forests in the region; for example, the officials told us they have provided additional resources to forests along Colorado’s Front Range because of increased development in the WUI. The Pacific Northwest Region also uses its own funding distribution tool, which considers elements such as fire occurrence and the number of available assets to develop a weighted value for each forest in the region. The region distributes the funding proportionally based on the values calculated for each forest. After obtaining preparedness funds from Interior, each agency—which, as noted, has its own land management responsibilities and mission—distributes these funds to its units.
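The two regional tools described above (the Rocky Mountain Region's risk priority score and the Pacific Northwest Region's weighted-value tool) reduce to the same pattern: score each forest on weighted risk factors, then distribute funds in proportion to the scores. A minimal sketch follows; the factor names, weights, and forest data are hypothetical assumptions for illustration, not the regions' actual formulas:

```python
# Hypothetical sketch of a score-then-distribute approach like the regional
# tools described above. The weights and factor values are illustrative
# assumptions, not the agencies' actual methodology.

# Each forest is scored on weighted risk factors (all normalized to 0-1).
WEIGHTS = {"fire_probability": 0.4, "resources_at_risk": 0.3,
           "potential_intensity": 0.2, "historical_occurrence": 0.1}

forests = {
    "Forest A": {"fire_probability": 0.8, "resources_at_risk": 0.9,
                 "potential_intensity": 0.6, "historical_occurrence": 0.7},
    "Forest B": {"fire_probability": 0.3, "resources_at_risk": 0.4,
                 "potential_intensity": 0.5, "historical_occurrence": 0.2},
}

def risk_priority_score(factors):
    """Weighted sum of a forest's normalized risk factors."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

def distribute(budget, units):
    """Distribute the budget proportionally to each unit's score."""
    scores = {name: risk_priority_score(f) for name, f in units.items()}
    total = sum(scores.values())
    return {name: budget * s / total for name, s in scores.items()}

allocation = distribute(10_000_000, forests)
```

Under a scheme like this, a forest whose risk factors rise (for example, from increased WUI development along Colorado's Front Range) automatically draws a larger share of the regional budget when the scores are recalculated.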
Three of these agencies—BLM, FWS, and NPS—use newer systems and current information, such as updated fuel characterization and fire occurrence data, to distribute funding to their regional offices. The fourth agency, BIA, generally uses historical-based amounts (i.e., NFMAS results), but has made some changes to reflect updated priorities. The regions subsequently distribute funding to individual land units, typically using the same systems. The four agencies’ approaches are described below. BLM. BLM officials told us that, since 2010, they have used results from the Fire Program Decision Support System to help determine funding distributions to state offices. The system analyzes BLM’s fire workload and complexity using four components (fire suppression workload, fuel types, human risk, and additional fire resources) and assigns scores to state offices accordingly. Based on the resulting analyses, BLM has shifted funding across state offices to help better reflect current conditions. BLM officials told us that most states use the new system to help inform the distribution of funding to their units. BLM is also developing an additional component of the Fire Program Decision Support System to help offices determine the appropriate number of firefighting assets needed in each area. Officials expect to apply the new component with their overall system in the fall of 2015. FWS. In 2014, FWS began distributing its preparedness funding to regions using the Preparedness Allocation Tool. Officials told us that the tool uses information such as historical wildland fire occurrence, proximity to WUI areas, and other information to inform preparedness funding distributions to regions. Agency officials told us that results from this tool did not generally identify the need for large funding shifts across units, but rather helped identify some smaller shifts to better reflect current landscape conditions.
Officials with one FWS region told us that the tool has helped the agency provide better assurance that funding amounts are risk-based and transparent. NPS. In 2013, primarily in response to reductions in its overall wildland fire management program funding, NPS began using a system called the Planning Data System to determine what level of firefighting workforce the agency could afford under different budget distribution scenarios. The system generates personnel requirements for each NPS unit by establishing a minimum number of people for any unit that meets certain criteria. Those results are rolled up to also provide regional workforce requirements. The results generated from this system showed that some NPS regions, as well as individual park units, had existing wildland fire organizations that they could no longer adequately support in light of reduced budgets. BIA. BIA relies primarily on historical funding amounts derived from a system similar to NFMAS. However, BIA officials told us they have made adjustments to the historical amounts using professional judgment. BIA officials told us that the regions also still primarily use historical-based amounts to distribute funding to their units. The officials told us they will wait until Interior finalizes its Risk-Based Wildland Fire Management model before they develop a new funding distribution tool. Beginning in 2009, the Forest Service and Interior both used systems collectively known as the Hazardous Fuels Prioritization and Allocation System (HFPAS) to distribute fuel reduction funds. Officials told us these systems, based on similar concepts and approaches, were developed by the agencies to provide an interagency process for distributing fuel reduction funding to the highest-priority projects. Starting in 2014, the Forest Service instead began using a new system, which, according to officials, allows the agency to more effectively distribute fuel reduction funds.
Interior continues to distribute fuel reduction funding to the four agencies based on funding amounts derived from HFPAS, but it plans to develop a new system for distributing funds to reflect more current conditions and risks. Overall fuel reduction obligations in 2014 totaled about $302 million for the Forest Service and about $147 million for the Interior agencies. (See app. II for detailed information on the agencies’ fuel reduction obligations for fiscal years 2004 through 2014.) Forest Service officials told us their new system identifies locations where the highest probability of wildland fire intersects with important resources, such as residential areas and watersheds critical to municipal water supplies. These officials told us the new system allows the agency to invest its fuel reduction funds in areas where there are both a high probability of wildland fires and important resources at risk. In contrast, according to officials, HFPAS in some cases prioritized funding for areas where important resources, such as extensive WUI, existed but where the potential for wildland fires was low. The new system has identified locations for funding adjustments to Forest Service regions. For example, in 2015 the agency’s Eastern and Southern Regions received a smaller proportion of fuel reduction funding than they had previously received, and some western regions saw increases, because results from the system showed that the western regions had more areas with both important resources and high wildland fire potential. The Forest Service directs its regions to distribute fuel reduction funding to national forests using methods consistent with national information, as well as with specific local data. A senior Forest Service official told us that, as a result, most regions distribute funding to individual national forests based on information generated using HFPAS, augmented with local data. One region has developed a more updated distribution approach. 
Specifically, in 2012, the Rocky Mountain Region, in conjunction with the Rocky Mountain Research Station and Forest Service headquarters, developed a fuel reduction funding distribution tool that generates a risk priority score for each forest in the region. The risk priority score is based on fire probability, resources at risk from fire, potential fire intensity, and historical fire occurrence. Each forest’s risk priority score is used to inform the region’s distribution of funding to the national forests. Interior currently distributes fuel reduction funding to its agencies based on the funding amounts derived from HFPAS results that were last generated in 2013. Interior officials also told us they plan to stop using HFPAS results and are planning to use the new system they are developing, the Risk-Based Wildland Fire Management model, to reflect current information on conditions and risks in distributing fuel reduction funds. Within Interior, officials from the four agencies told us they have developed, or are in the process of developing, funding distribution systems and tools while they wait for Interior to complete the Risk-Based Wildland Fire Management model. BLM, for example, uses a fuel reduction funding distribution tool that maps values at risk, including WUI, critical infrastructure, sagebrush habitat, and invasive species data. BLM combines this information with data on wildland fire probability to create a spatial illustration of the values at risk relative to potential fire occurrence. BLM then uses the results of this analysis to fund its state offices. BIA uses its own tool to distribute fuel reduction funding to its regions based on wildland fire potential data generated by the Forest Service. That information is then combined with fire occurrence history and workload capacity to generate a model that shows potential fire risk and capacity across BIA units. 
FWS officials told us they are developing a fuel reduction funding distribution tool, expected to be used for fiscal year 2016, which considers fire risks associated with each FWS unit. FWS officials told us this tool will identify risk reduction over longer periods of time, will contain an accountability function to monitor results, and will share many attributes with FWS’ Preparedness Allocation Tool. NPS officials told us the agency will continue to rely on historical amounts, based largely on HFPAS. Similar to the previous Interior distribution approach, NPS distributes funding for specific projects identified at the headquarters level. However, if a unit is not able to implement an identified project, the unit can substitute other projects, as necessary. Faced with the challenge of working to protect people and resources from the unwanted effects of wildland fire while also recognizing that fire is an inevitable part of the landscape, the federal wildland fire agencies have taken steps aimed at improving their approaches to wildland fire management. Their 2009 update to interagency guidance, for example, was designed to continue moving away from the agencies’ decades-long emphasis on suppressing all fires, by giving fire managers more flexibility in responding to fires. In addition, the agencies are working to develop more up-to-date systems for distributing wildland fire resources. A central test of such changes, however, is the extent to which they help ensure appropriate and effective agency responses to fires when they occur. The agencies have acknowledged the importance of reviewing their responses to individual wildland fires to understand their effectiveness and identify possible improvements. However, the agencies have not systematically followed agency policy regarding such fire reviews and, in the reviews they have conducted, they have not used specific criteria in selecting fires and conducting the reviews.
Officials from both the Forest Service and Interior told us cost alone should not be the basis for such reviews and have acknowledged the need to improve their criteria for selecting fires and conducting reviews. Draft guidance provided by the Forest Service did not contain specific criteria for such reviews, however, and Interior officials did not provide information about how they planned to develop criteria or the factors they would consider. By developing specific criteria for selecting fires to review and conducting the reviews, and making commensurate changes to agency policies to help ensure the criteria are consistently applied, the agencies may enhance their ability to ensure that their fire reviews provide useful information and meaningful results. This, in turn, could better position them to identify improvements in their approach to wildland fire management and thereby use their limited resources more effectively. To better ensure that the agencies have sufficient information to understand the effectiveness of their approach to wildland fires, and to better position them to develop appropriate and effective strategies for wildland fire management, we recommend that the Secretaries of Agriculture and the Interior direct the Chief of the Forest Service and the Director of the Office of Wildland Fire to take the following two actions: Develop specific criteria for selecting wildland fires for review and for conducting the reviews as part of their efforts to improve their approach to reviewing fires, and Once such criteria are established, revise agency policies to align with the specific criteria developed by the agencies. We provided a draft of this report for review and comment to the Departments of Agriculture and the Interior. The Forest Service (responding on behalf of the Department of Agriculture) and Interior generally agreed with our findings and recommendations, and their written comments are reproduced in appendixes IV and V respectively. 
Both agencies stated that they are developing criteria for selecting fires to review and conducting reviews. Both agencies also provided technical comments, which we incorporated into our report as appropriate. Interior also provided additional information about wildland fire technology, which we likewise incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Agriculture and the Interior, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix VI. This report examines (1) key changes the federal wildland fire agencies have made in their approach to wildland fire management since 2009, (2) how the agencies assess the effectiveness of their wildland fire management programs, and (3) how the agencies determine the distribution of their wildland fire management resources. To perform this work, we reviewed laws, policies, guidance, academic literature, and reviews related to federal wildland fire management. These included the 1995 Federal Wildland Fire Management Policy and subsequent implementation guidance, the Interagency Standards for Fire and Fire Aviation Operations, and the 2009 and 2014 Quadrennial Fire Reviews.
We also interviewed headquarters officials from each of the five federal land management agencies responsible for wildland fire management—the Forest Service in the Department of Agriculture and the Bureau of Indian Affairs (BIA), Bureau of Land Management (BLM), Fish and Wildlife Service (FWS), and National Park Service (NPS) in the Department of the Interior—as well as Interior’s Office of Wildland Fire. We also conducted semistructured interviews of regional officials in each of the agencies to obtain information about issues specific to particular regions and understand differences across regions. We interviewed wildland fire management program officials from each of the 9 Forest Service regional offices, 11 of BLM’s 12 state offices, and 2 regional offices each for BIA, FWS, and NPS. We focused these regional interviews primarily on the Forest Service and BLM because those agencies receive the greatest percentage of appropriated federal wildland fire funding. For BIA, FWS, and NPS, we selected the two regions from each agency that received the most funds in those agencies—BIA’s Northwest and Western Regions, FWS’s Southwest and Southeast Regions, and NPS’s Pacific West and Intermountain Regions. We conducted a total of 25 semistructured interviews of regional offices. During these semistructured interviews we asked about (1) significant changes to the agencies’ approach to wildland fire management, including regional efforts to implement the policy areas identified in the 2009 interagency Guidance for Implementation of Federal Wildland Fire Management Policy, (2) agency efforts to assess the effectiveness of their wildland fire management activities, and (3) agency processes for determining the distribution of fire management resources. We focused our review on three primary components of wildland fire management—suppression, preparedness, and fuel reduction—because they account for the highest spending amounts among wildland fire management activities. 
To address our first objective, we reviewed agency documents, such as policy and guidance, as well as other documents such as agency budget justifications, to identify changes the agencies have made to their approach to managing wildland fire since 2009, efforts the agencies have undertaken to address wildland fire management challenges, agency-identified improvements resulting from those changes, and challenges associated with implementing them. Our review focuses on changes since 2009 because we last completed a comprehensive review of wildland fire management in that year, and because the agencies’ last significant change to interagency wildland fire management guidance for implementing the Federal Wildland Fire Management Policy also occurred that year. To further our understanding of these issues, we also asked about these changes in our interviews with agency headquarters officials. In particular, we asked about the extent to which changes to the agencies’ wildland fire management approaches have occurred or are planned, the effects of these changes, and associated challenges. In addition, we relied on the semistructured interviews of regional officials described above to understand how the regions implemented national direction and policy. We analyzed the responses provided to us during the interviews to identify common themes about prominent changes since 2009, and challenges associated with implementing those changes. The information we report represents themes that occurred frequently in our interviews with both regional and headquarters officials. We did not report on changes described during our interviews that were not directly related to wildland fire management, such as changes to general workforce management policies. To address our second objective, we reviewed agency strategic plans and budget justifications describing performance measures, as well as other documents associated with agency efforts to assess their programs, including fire reviews. 
We also reviewed legislative and agency direction related to fire reviews, including agency policies and the Interagency Standards for Fire and Fire Aviation Operations, and reviewed reports resulting from fire reviews conducted by the agencies since 2009. We compared agency practices for conducting fire reviews to direction contained in relevant agency policy. We also interviewed headquarters officials to identify the agencies’ key performance measures and the extent to which those measures reflect changing approaches to wildland fire management. In our interviews with headquarters and regional officials, we also inquired about other mechanisms the agencies use to determine the effectiveness of their wildland fire management programs, as well as any changes they are making in this area. To obtain additional insight into the use of performance information on the part of federal agencies, we also reviewed our previous reports related to agencies’ use of performance information. To address our third objective, we reviewed relevant agency budget documentation, including annual budget justifications and documentation of agency obligations, as well as information about the tools and systems the agencies use to distribute funds and resources. We did not assess the design or use of any of the agencies’ tools or systems for distributing funds. We interviewed agency officials at the headquarters and regional levels to identify the processes they use for budget formulation and resource distribution. We asked about the extent to which these processes have changed in recent years at the headquarters and regional levels for each of the five agencies and the extent to which they have changed funding and resource amounts. 
We also obtained data from the Forest Service and from Interior’s Office of Wildland Fire on obligations for each of the three primary wildland fire management components—suppression, preparedness, and fuel reduction—from fiscal years 2004 through 2014, analyzing the data in both nominal (actual) and constant (adjusted for inflation) terms. Adjusting nominal dollars to constant dollars allows the comparison of purchasing power across fiscal years. To adjust for inflation, we used the gross domestic product price index with 2014 as the base year. We reviewed budget documents and obligation data provided by the agencies, interviewed agency officials knowledgeable about the data, and found the data sufficiently reliable for the purposes of this report. We conducted this performance audit from August 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides information on preparedness, fuel reduction, and suppression obligations by the Forest Service and the Department of the Interior’s four wildland fire agencies—the Bureau of Indian Affairs, Bureau of Land Management, Fish and Wildlife Service, and National Park Service—for fiscal years 2004 through 2014. Figures 4, 5, and 6 show overall agency obligations for preparedness, fuel reduction, and suppression for fiscal years 2004 through 2014. Individual agencies’ obligations for each of the three programs are described later in this appendix. Table 1 and figure 7 show annual Forest Service wildland fire management obligations for fiscal years 2004 through 2014. 
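The constant-dollar adjustment described above can be sketched in a short Python example. The GDP price index values below are illustrative placeholders, not the actual index figures used in the report; only the method itself, rebasing nominal dollars to a 2014 base year, comes from the text.

```python
# Sketch of converting nominal obligations to constant (base-year) dollars.
# The index values used in the example are hypothetical, for illustration only.

def to_constant_dollars(nominal, index_year, index_base):
    """Restate a nominal dollar amount in base-year (constant) dollars
    using a price index: constant = nominal * (base-year index / that year's index)."""
    return nominal * (index_base / index_year)

# Example: $760 million obligated in FY2004, restated in FY2014 dollars,
# assuming a hypothetical GDP price index of 88.0 in 2004 and 108.7 in 2014.
fy2004_nominal = 760.0  # millions of dollars
constant_2014 = to_constant_dollars(fy2004_nominal, index_year=88.0, index_base=108.7)
print(round(constant_2014, 1))  # earlier-year dollars grow when restated in a later base year
```

Because the base year is 2014, FY2014 amounts are unchanged by the adjustment, while earlier years are scaled up to reflect intervening inflation.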
Preparedness obligations increased from nearly $760 million in fiscal year 2004 to about $1.0 billion in fiscal year 2014, an average increase of 3.2 percent per year, or 1.2 percent after adjusting for inflation. Fuel reduction obligations increased from about $284 million in fiscal year 2004 to about $302 million in fiscal year 2014, an average annual increase of 0.6 percent, or a 1.4 percent decrease after adjusting for inflation. Suppression obligations fluctuated from year to year, with a high of about $1.4 billion in fiscal year 2012 and a low of about $525 million in fiscal year 2005. Table 2 and figure 8 show annual Bureau of Indian Affairs wildland fire management obligations for fiscal years 2004 through 2014. Preparedness obligations decreased from nearly $58 million in fiscal year 2004 to about $51 million in fiscal year 2014, an average annual decrease of 1.3 percent, or 3.2 percent after adjusting for inflation. Fuel reduction obligations decreased from about $39 million in fiscal year 2004 to about $30 million in fiscal year 2014, an average annual decrease of 2.6 percent, or 4.5 percent after adjusting for inflation. Suppression obligations fluctuated from year to year, with a high of about $105 million in fiscal year 2012 and a low of about $43 million in fiscal year 2010. Table 3 and figure 9 show annual Bureau of Land Management wildland fire management obligations from fiscal years 2004 through 2014. Preparedness obligations increased from nearly $152 million in fiscal year 2004 to about $160 million in fiscal year 2014, an average annual increase of 0.6 percent, or a 1.4 percent decrease after adjusting for inflation. Fuel reduction obligations decreased from about $98 million in fiscal year 2004 to about $75 million in fiscal year 2014, an average annual decrease of 2.6 percent, or 4.6 percent after adjusting for inflation. 
Suppression obligations fluctuated from year to year, with a high of about $299 million in fiscal year 2007 and a low of about $130 million in fiscal year 2009. Table 4 and figure 10 show annual Fish and Wildlife Service wildland fire management obligations for fiscal years 2004 through 2014. Preparedness obligations decreased from about $33 million in fiscal year 2004 to about $27 million in fiscal year 2014, an average annual decrease of 2.1 percent, or 4.1 percent after adjusting for inflation. Fuel reduction obligations decreased from about $24 million in fiscal year 2004 to about $21 million in fiscal year 2014, an average annual decrease of 1.5 percent, or 3.5 percent after adjusting for inflation. Suppression obligations fluctuated from year to year, with a high of about $41 million in fiscal year 2011 and a low of about $4 million in fiscal year 2010. Table 5 and figure 11 show annual National Park Service wildland fire management obligations for fiscal years 2004 through 2014. Obligations for preparedness increased from about $35 million in fiscal year 2004 to about $36 million in fiscal year 2014, an average annual increase of 0.5 percent, or a 1.5 percent decrease after adjusting for inflation. Fuel reduction obligations decreased from about $31 million in fiscal year 2004 to about $21 million in fiscal year 2014, an average annual decrease of 3.7 percent, or 5.6 percent after adjusting for inflation. Suppression obligations fluctuated from year to year, with a high of about $58 million in fiscal year 2006 and a low of about $22 million in fiscal year 2009. The Forest Service and the Department of the Interior use different approaches for paying the base salaries of their staff during wildland fire incidents. 
For periods when firefighters are dispatched to fight fires, the Forest Service generally pays its firefighters’ base salaries using suppression funds, whereas Interior pays its firefighters’ base salaries primarily using preparedness funds. Forest Service officials told us that under this approach, regional offices, which are responsible for hiring firefighters in advance of the fire season, routinely hire more firefighters than their preparedness budgets will support, assuming they can rely on suppression funds to pay the difference. Forest Service officials told us that their funding approach helps the agency maintain its firefighting capability over longer periods of time during a season and accurately track the overall costs of fires. Interior officials told us they choose to use preparedness funds to pay their firefighters’ base salaries during a wildland fire because it constitutes a good business practice. According to a Wildland Fire Leadership Council document, in 2003, the council agreed that the agencies would use a single, unified approach and pay firefighters’ base salary using Interior’s method of using preparedness funds. However, the council subsequently noted that in 2004 the Office of Management and Budget directed the Forest Service to continue using suppression funds to pay firefighters’ base salaries. The agencies have used separate approaches since 2004. In addition to the individual named above, Steve Gaty (Assistant Director), Ulana M. Bihun, Richard P. Johnson, Lesley Rinner, and Kyle M. Stetler made key contributions to this report. Important contributions were also made by Cheryl Arvidson, Mark Braza, William Carrigg, Carol Henn, Benjamin T. Licht, Armetha Liles, and Kiki Theodoropoulos. Wildland Fire Management: Improvements Needed in Information, Collaboration, and Planning to Enhance Federal Fire Aviation Program Success. GAO-13-684. Washington, D.C.: August 20, 2013. 
Station Fire: Forest Service’s Response Offers Potential Lessons for Future Wildland Fire Management. GAO-12-155. Washington, D.C.: December 16, 2011. Arizona Border Region: Federal Agencies Could Better Utilize Law Enforcement Resources in Support of Wildland Fire Management Activities. GAO-12-73. Washington, D.C.: November 8, 2011. Wildland Fire Management: Federal Agencies Have Taken Important Steps Forward, but Additional Action Is Needed to Address Remaining Challenges. GAO-09-906T. Washington, D.C.: July 21, 2009. Wildland Fire Management: Federal Agencies Have Taken Important Steps Forward, but Additional, Strategic Action Is Needed to Capitalize on Those Steps. GAO-09-877. Washington, D.C.: September 9, 2009. Wildland Fire Management: Actions by Federal Agencies and Congress Could Mitigate Rising Fire Costs and Their Effects on Other Agency Programs. GAO-09-444T. Washington, D.C.: April 1, 2009. Forest Service: Emerging Issues Highlight the Need to Address Persistent Management Challenges. GAO-09-443T. Washington, D.C.: March 11, 2009. Wildland Fire Management: Interagency Budget Tool Needs Further Development to Fully Meet Key Objectives. GAO-09-68. Washington, D.C.: November 24, 2008. Wildland Fire Management: Federal Agencies Lack Key Long- and Short-Term Management Strategies for Using Program Funds Effectively. GAO-08-433T. Washington, D.C.: February 12, 2008. Forest Service: Better Planning, Guidance, and Data Are Needed to Improve Management of the Competitive Sourcing Program. GAO-08-195. Washington, D.C.: January 22, 2008. Wildland Fire Management: Better Information and a Systematic Process Could Improve Agencies’ Approach to Allocating Fuel Reduction Funds and Selecting Projects. GAO-07-1168. Washington, D.C.: September 28, 2007. Natural Hazard Mitigation: Various Mitigation Efforts Exist, but Federal Efforts Do Not Provide a Comprehensive Strategic Framework. GAO-07-403. Washington, D.C.: August 22, 2007. 
Wildland Fire: Management Improvements Could Enhance Federal Agencies’ Efforts to Contain the Costs of Fighting Fires. GAO-07-922T. Washington, D.C.: June 26, 2007. Wildland Fire Management: A Cohesive Strategy and Clear Cost-Containment Goals Are Needed for Federal Agencies to Manage Wildland Fire Activities Effectively. GAO-07-1017T. Washington, D.C.: June 19, 2007. Wildland Fire Management: Lack of Clear Goals or a Strategy Hinders Federal Agencies’ Efforts to Contain the Costs of Fighting Fires. GAO-07-655. Washington, D.C.: June 1, 2007. Department of the Interior: Major Management Challenges. GAO-07-502T. Washington, D.C.: February 16, 2007. Wildland Fire Management: Lack of a Cohesive Strategy Hinders Agencies’ Cost-Containment Efforts. GAO-07-427T. Washington, D.C.: January 30, 2007. Biscuit Fire Recovery Project: Analysis of Project Development, Salvage Sales, and Other Activities. GAO-06-967. Washington, D.C.: September 18, 2006. Wildland Fire Rehabilitation and Restoration: Forest Service and BLM Could Benefit from Improved Information on Status of Needed Work. GAO-06-670. Washington, D.C.: June 30, 2006. Wildland Fire Suppression: Better Guidance Needed to Clarify Sharing of Costs between Federal and Nonfederal Entities. GAO-06-896T. Washington, D.C.: June 21, 2006. Wildland Fire Suppression: Lack of Clear Guidance Raises Concerns about Cost Sharing between Federal and Nonfederal Entities. GAO-06-570. Washington, D.C.: May 30, 2006. Wildland Fire Management: Update on Federal Agency Efforts to Develop a Cohesive Strategy to Address Wildland Fire Threats. GAO-06-671R. Washington, D.C.: May 1, 2006. Natural Resources: Woody Biomass Users’ Experiences Provide Insights for Ongoing Government Efforts to Promote Its Use. GAO-06-694T. Washington, D.C.: April 27, 2006. Natural Resources: Woody Biomass Users’ Experiences Offer Insights for Government Efforts Aimed at Promoting Its Use. GAO-06-336. Washington, D.C.: March 22, 2006. 
Wildland Fire Management: Timely Identification of Long-Term Options and Funding Needs Is Critical. GAO-05-923T. Washington, D.C.: July 14, 2005. Natural Resources: Federal Agencies Are Engaged in Numerous Woody Biomass Utilization Activities, but Significant Obstacles May Impede Their Efforts. GAO-05-741T. Washington, D.C.: May 24, 2005. Natural Resources: Federal Agencies Are Engaged in Various Efforts to Promote the Utilization of Woody Biomass, but Significant Obstacles to Its Use Remain. GAO-05-373. Washington, D.C.: May 13, 2005. Technology Assessment: Protecting Structures and Improving Communications during Wildland Fires. GAO-05-380. Washington, D.C.: April 26, 2005. Wildland Fire Management: Progress and Future Challenges, Protecting Structures, and Improving Communications. GAO-05-627T. Washington, D.C.: April 26, 2005. Wildland Fire Management: Forest Service and Interior Need to Specify Steps and a Schedule for Identifying Long-Term Options and Their Costs. GAO-05-353T. Washington, D.C.: February 17, 2005. Wildland Fire Management: Important Progress Has Been Made, but Challenges Remain to Completing a Cohesive Strategy. GAO-05-147. Washington, D.C.: January 14, 2005. Wildland Fires: Forest Service and BLM Need Better Information and a Systematic Approach for Assessing the Risks of Environmental Effects. GAO-04-705. Washington, D.C.: June 24, 2004. Federal Land Management: Additional Guidance on Community Involvement Could Enhance Effectiveness of Stewardship Contracting. GAO-04-652. Washington, D.C.: June 14, 2004. Wildfire Suppression: Funding Transfers Cause Project Cancellations and Delays, Strained Relationships, and Management Disruptions. GAO-04-612. Washington, D.C.: June 2, 2004. Biscuit Fire: Analysis of Fire Response, Resource Availability, and Personnel Certification Standards. GAO-04-426. Washington, D.C.: April 12, 2004. Forest Service: Information on Appeals and Litigation Involving Fuel Reduction Activities. GAO-04-52. 
Washington, D.C.: October 24, 2003. Geospatial Information: Technologies Hold Promise for Wildland Fire Management, but Challenges Remain. GAO-03-1047. Washington, D.C.: September 23, 2003. Geospatial Information: Technologies Hold Promise for Wildland Fire Management, but Challenges Remain. GAO-03-1114T. Washington, D.C.: August 28, 2003. Wildland Fire Management: Additional Actions Required to Better Identify and Prioritize Lands Needing Fuels Reduction. GAO-03-805. Washington, D.C.: August 15, 2003. Wildland Fires: Forest Service’s Removal of Timber Burned by Wildland Fires. GAO-03-808R. Washington, D.C.: July 10, 2003. Forest Service: Information on Decisions Involving Fuels Reduction Activities. GAO-03-689R. Washington, D.C.: May 14, 2003. Wildland Fires: Better Information Needed on Effectiveness of Emergency Stabilization and Rehabilitation Treatments. GAO-03-430. Washington, D.C.: April 4, 2003. Major Management Challenges and Program Risks: Department of the Interior. GAO-03-104. Washington, D.C.: January 1, 2003. Results-Oriented Management: Agency Crosscutting Actions and Plans in Border Control, Flood Mitigation and Insurance, Wetlands, and Wildland Fire Management. GAO-03-321. Washington, D.C.: December 20, 2002. Wildland Fire Management: Reducing the Threat of Wildland Fires Requires Sustained and Coordinated Effort. GAO-02-843T. Washington, D.C.: June 13, 2002. Wildland Fire Management: Improved Planning Will Help Agencies Better Identify Fire-Fighting Preparedness Needs. GAO-02-158. Washington, D.C.: March 29, 2002. Severe Wildland Fires: Leadership and Accountability Needed to Reduce Risks to Communities and Resources. GAO-02-259. Washington, D.C.: January 31, 2002. Forest Service: Appeals and Litigation of Fuel Reduction Projects. GAO-01-1114R. Washington, D.C.: August 31, 2001. The National Fire Plan: Federal Agencies Are Not Organized to Effectively and Efficiently Implement the Plan. GAO-01-1022T. Washington, D.C.: July 31, 2001. 
Reducing Wildfire Threats: Funds Should be Targeted to the Highest Risk Areas. GAO/T-RCED-00-296. Washington, D.C.: September 13, 2000. Fire Management: Lessons Learned From the Cerro Grande (Los Alamos) Fire. GAO/T-RCED-00-257. Washington, D.C.: August 14, 2000. Fire Management: Lessons Learned From the Cerro Grande (Los Alamos) Fire and Actions Needed to Reduce Fire Risks. GAO/T-RCED-00-273. Washington, D.C.: August 14, 2000.
Wildland fire plays an important ecological role in maintaining healthy ecosystems. Over the past century, however, various land management practices, including fire suppression, have disrupted the normal frequency of fires and have contributed to larger and more severe wildland fires. Wildland fires cost billions to fight each year, result in loss of life, and cause damage to homes and infrastructure. In fiscal years 2009 through 2014, the five federal wildland fire agencies obligated a total of $8.3 billion to suppress wildland fires. GAO was asked to review multiple aspects of federal wildland fire management across the five federal wildland fire management agencies. This report examines (1) key changes the federal wildland fire agencies have made in their approach to wildland fire management since 2009, (2) how the agencies assess the effectiveness of their wildland fire management programs, and (3) how the agencies determine the distribution of their wildland fire management resources. GAO reviewed laws, policies, and guidance related to wildland fire management; reviewed agency performance measures; analyzed obligation data for fiscal years 2004 through 2014; and interviewed officials from the five agencies, as well as Interior's Office of Wildland Fire. Since 2009, the five federal agencies responsible for wildland fire management—the Forest Service within the Department of Agriculture and the Bureau of Indian Affairs, Bureau of Land Management, Fish and Wildlife Service, and National Park Service in the Department of the Interior—have made several key changes in their approach to wildland fire management. One key change was the issuance of agency guidance in 2009 that provided managers with more flexibility in responding to wildland fires. This change allowed managers to consider different options for response given land management objectives and the risk posed by the fire. 
The agencies also worked with nonfederal partners to develop a strategy aimed at coordinating wildland fire management activities around common goals. The extent to which the agencies' steps have resulted in on-the-ground changes varied across agencies and regions, however, and officials identified factors, such as proximity to populated areas, that may limit their implementation of some changes. The agencies assess the effectiveness of their wildland fire management programs in several ways, including through performance measures and reviews of specific wildland fires. The agencies are developing new performance measures, in part to help better assess the results of their current emphasis on risk-based management, according to agency officials. However, the agencies have not consistently followed agency policy regarding fire reviews, which calls for reviews of all fires resulting in federal suppression expenditures of $10 million or more, nor have they used specific criteria for the reviews they have conducted. GAO has previously found that it is important for agencies to collect performance information to inform key management decisions and to identify problems and take corrective actions. Forest Service and Interior officials said focusing only on suppression costs does not allow them to identify the most useful fires for review, and they told GAO they are working to improve their criteria for selecting fires to review and conducting these reviews. Forest Service officials did not indicate a time frame for their efforts, and while they provided a draft update of their policy manual, it did not contain specific criteria. Interior officials told GAO they expect to develop criteria by the end of 2015, but did not provide information about how they planned to develop such criteria or the factors they would consider. 
By developing specific criteria for selecting fires to review and conducting reviews, and making commensurate changes to agency policies, the agencies may enhance their ability to help ensure that their fire reviews provide useful information about the effectiveness of their wildland fire activities. The Forest Service and Interior determine the distribution of fire management resources for three primary wildland fire activities of suppression, preparedness, and fuel reduction in part on the basis of historical funding amounts. For suppression, the Forest Service and Interior manage suppression funding as needed for responding to wildland fires, estimating required resources using the average of the previous 10 years of suppression obligations. For preparedness and fuel reduction, the Forest Service and Interior distribute resources based primarily on historical amounts. Both are working to distribute resources in ways that better reflect current conditions, including developing new systems that they stated they plan to begin using in fiscal year 2016. GAO recommends that the agencies develop specific criteria for selecting wildland fires for review and conducting the reviews, and revise agency policies accordingly. The agencies generally agreed with GAO's findings and recommendations.
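The 10-year averaging approach noted above for estimating required suppression resources can be sketched in a few lines of Python. The obligation figures below are invented for illustration; only the averaging method itself comes from the report.

```python
# Sketch of estimating suppression funding needs as the average of the
# previous 10 years of suppression obligations, as described above.
# The annual figures are hypothetical, in millions of dollars.

def ten_year_average(obligations):
    """Average of the most recent ten annual suppression obligations."""
    recent = obligations[-10:]
    return sum(recent) / len(recent)

# Eleven hypothetical fiscal years of suppression obligations; the estimate
# for the next year uses only the most recent ten.
history = [525, 610, 890, 720, 640, 810, 950, 1400, 1100, 990, 1210]
print(ten_year_average(history))  # → 932.0
```

One consequence of this method, visible in the example, is that the estimate lags actual conditions: a run of severe fire years raises the average only gradually.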
Since 1955, the executive branch has encouraged federal agencies to obtain commercially available goods and services from the private sector when the agency determines that it is cost-effective. In 1966, OMB formalized this policy in its Circular A-76 and, in 1979, issued a handbook with procedures for determining whether commercial activities should be performed in-house, by another federal agency, or by the private sector. Administrative and legislative constraints from the late 1980s through 1995 resulted in a lull in awarding contracts under A-76 competitions. In 1995, when congressional and administration initiatives placed greater emphasis on public-private competitions to achieve economies and efficiency of operations, DOD gave competitive sourcing renewed emphasis. In our past work, we have found that DOD achieved savings through competitive sourcing, although it is difficult to estimate precisely the amount of savings. By including competitive sourcing as one of five governmentwide initiatives announced in August 2001, the administration directed agencies to implement competitive sourcing programs to achieve increased savings and to improve performance. The administration continues to advocate the use of competitive sourcing, which is addressed in the President’s budget for fiscal year 2005. Competitive sourcing has met with considerable controversy in both the public and private sectors. Each sector expressed concern that, in general, the process was unfair and did not provide for holding the winner of the competition accountable for performance. In response to this controversy, in 2000, the Congress mandated a study of the government’s competitive sourcing process under A-76—a study conducted by the Commercial Activities Panel, chaired by the Comptroller General of the United States. 
The panel included representatives from OMB, DOD, the Office of Personnel Management, private industry, academia, a trade association, and unions. In April 2002, the panel released its report with recommendations that included 10 sourcing principles to provide a better foundation for competitive sourcing decisions in the federal government (see app. II). In particular, the panel stressed the importance of linking sourcing policy with agency missions, promoting sourcing decisions that provide value to the taxpayer regardless of the service provider selected, and ensuring greater accountability for performance. The panel also addressed an area of particular importance for all affected parties—how the government’s sourcing policies are implemented. In this regard, one of the sourcing principles was that the government should avoid arbitrary numerical or full-time equivalent (FTE) goals. This principle is based on the concept that success in government programs should be measured in terms of providing value to the taxpayer, not the size of the in-house or contractor workforce. The panel, in one of its 10 sourcing principles, also endorsed creating incentives and processes to foster high-performing, efficient, and effective organizations and continuous improvement throughout the federal government. On November 6, 2003, the Comptroller General hosted a forum to discuss what it means for a federal agency to be high-performing in an environment where results and outcomes are increasingly accomplished through partnerships that cut across different levels of government and different sectors of the economy. There was broad agreement among participants at the forum on the key characteristics and capabilities of high-performing organizations, which are organized around four broad themes. 
These four themes are (1) clear, well-articulated, and compelling missions; (2) strategic use of partnerships; (3) a focus on the needs of clients and customers; and (4) strategic management of people. The competitive sourcing process starts with agencies developing inventories of their commercial positions in accordance with the Federal Activities Inventory Reform (FAIR) Act of 1998. Additionally, OMB requires agencies to identify activities that are inherently governmental, as well as commercial positions that are exempt from competition because of legislative prohibitions, agency restructuring, or other reasons. Only activities classified as “commercial” and not otherwise exempt are potentially competable. In the 2002 FAIR Act inventories, the proportion of competable commercial, non-competable commercial, and inherently governmental FTE positions varied widely among the agencies we reviewed. Governmentwide, competable commercial positions in 2002 accounted for approximately 26 percent of the total federal workforce. Except for the Education Department’s 62 percent, the percentage of competable commercial positions in each of our selected agencies was less than 50 percent of the agency’s total FTEs (see app. III). After agencies identify competable commercial positions under the FAIR Act and OMB guidance, they select from these positions which ones to compete. Resulting public-private competitions are guided by OMB Circular A-76. In May 2003, OMB released a revised Circular A-76. Under this revised circular, agencies must use a standard competition process for functions with more than 65 FTEs. As part of the standard process, agencies identify the work to be performed in a performance work statement, establish a team to prepare an in-house proposal to perform the work based on a “most efficient organization” (MEO), and evaluate that proposal along with those submitted by private companies and/or public reimbursable sources. 
For activities with 65 or fewer FTEs, agencies may use either a streamlined or standard competition. Streamlined competitions require fewer steps than the standard process and enable agencies to complete a cost comparison more quickly. When the President announced competitive sourcing as one of five governmentwide management agenda items in August 2001, few agencies other than DOD had an established competitive sourcing infrastructure—a key component of OMB’s strategy for institutionalizing competitive sourcing. Few of the other departments and agencies that we reviewed had competitive sourcing experience. Since that time, all six civilian agencies we reviewed have established a basic competitive sourcing program infrastructure. Leadership involvement and an established infrastructure have enabled each agency that we reviewed to develop competitive sourcing plans and complete a number of initial competitions. Interagency forums for sharing information also have been established. Although they lack DOD’s A-76 experience, the civilian agencies we reviewed have made significant progress toward establishing a competitive sourcing infrastructure with such actions as establishing an office, hiring staff, obtaining contractor support, creating policies and procedures, and providing training to agency staff involved in the competitive sourcing process. Table 1 provides an overview of civilian agency infrastructure development. In addition, DOD, which has the most competitive sourcing experience in the federal government, has issued numerous policies, procedures, and guidance for implementing OMB’s Circular A-76. DOD also has established a management structure to oversee the department’s A-76 activities. In carrying out its competitive sourcing program, DOD uses both in-house personnel and contractors to provide assistance within the department in developing performance work statements and MEOs. 
In response to our previous recommendation, DOD also has established a Web site to share competitive sourcing knowledge and experience. This Web site is available governmentwide. The site contains resources such as A-76 policy and procedures, best practices, sample documents, bid protests, and links to other sites with information on Circular A-76. The civilian agencies we reviewed completed their initial rounds of competitive sourcing studies in fiscal years 2002 and 2003 (see app. IV). Based on data given to us by five of the six civilian departments, 602 studies were completed in fiscal year 2003. Of these 602 studies, 363 were streamlined competitions and 130 were direct conversions to performance by a contractor. In addition, DOD completed 126 studies, including 54 direct conversions and 7 streamlined competitions. Collectively, these studies involved over 17,000 FTEs, with almost 57 percent of the FTEs studied by DOD and the remaining 43 percent studied by the civilian agencies. According to agency data, in-house teams won many of the competitions, retaining almost 76 percent of the FTEs covered by the studies. (See app. V for details on the outcome of these studies.) Although agencies have been able to complete these studies while establishing their infrastructures, it is too early to assess the impact of the studies in terms of efficiencies or performance improvements achieved. A number of initiatives have been undertaken to share competitive sourcing information across agencies. In addition to DOD's Web site, at least two interagency forums have been established to facilitate interagency information sharing. For example, staff working in competitive sourcing offices in various agencies and subagencies meet monthly at the civilian agencies' competitive sourcing working group to exchange ideas and information.
The Federal Acquisition Council—composed of senior acquisition officials in the Executive Branch—also promotes acquisition-related aspects of the President’s Management Agenda by providing a forum for monitoring and improving the federal acquisition system. The Council has published a guide on frequently asked questions and a manager’s guide to competitive sourcing. In addition, OMB is developing a competitive sourcing data tracking system to provide consistent information and to facilitate the sharing of competitive sourcing information by allowing agencies to identify planned, ongoing, and completed competitions across the government. According to OMB officials, future refinements to the system may allow agencies to track and manage their own sourcing activities—a problem for most agencies—as well as provide OMB with consistent information. OMB plans to use the system to monitor agency implementation of the competitive sourcing initiative and generate more consistent and accurate statistics, including costs and related savings, for reporting to the Congress. Despite their progress in establishing a competitive sourcing infrastructure and conducting initial competitions in varying degrees, the agencies we reviewed continue to face significant challenges in four areas. First, agencies have been challenged to develop and use FAIR Act inventory data to identify and group positions for competition. Second, agencies are operating in a continually changing environment and under OMB guidance focused more on meeting milestones rather than achieving desired outcomes. Third, agencies have reported that they lack the staff needed to carry out the numerous additional tasks required under the new Circular A-76. Finally, agencies have reported that they lack the funding needed to cover the substantial costs associated with implementing their programs. The development of accurate FAIR Act inventories is the foundation for determining which functions agencies compete. 
Agencies reported difficulty in classifying positions as inherently governmental or commercial and in applying OMB-assigned codes to categorize activities, making it challenging for them to identify potential candidates for competitions. This has been a persistent problem as we have reported in the past. Despite changes made to OMB’s guidance for constructing FAIR Act inventories, the guidance has not alleviated the difficulties some agencies have had in developing and maintaining useful inventory data. Under the FAIR Act and OMB guidance, agencies annually review and classify positions as either inherently governmental or commercial. This classification process is done using an OMB-provided coding schedule containing nearly 700 functional codes in 23 major categories, such as health services, grants management, and installation services. Civilian agencies are having difficulty applying these functional codes, which were developed by DOD. While intended to promote consistency, the codes are not always applicable to civilian agencies, requiring some to create supplemental codes to match their missions. As we have previously reported, selecting and grouping functions and positions to compete can be difficult. For example, the Army has determined that many functions, such as making eyeglasses for troops located in a war zone, are core to its mission even though this function may not be classified as inherently governmental when performed in the United States. Also, some functions may involve both “commercial” and “inherently governmental” tasks. While agencies have had difficulty classifying mixed positions, OMB’s guidance allows agencies to take a variety of approaches to address this difficulty. 
For example, according to agency officials, the Internal Revenue Service classifies mixed positions on a case-by-case basis considering how critical the position is to its mission, not just the percentage of tasks related to that position that may be inherently governmental or commercial. The process also can be resource intensive. For example, according to agency officials, to determine whether positions should be classified as inherently governmental or commercial, the National Park Service—the largest bureau in the Department of the Interior—used an employee team of approximately 30 individuals that represented all occupational areas, as well as its human resources and acquisition staff. The team used the analysis, in conjunction with payroll system data showing employee time usage, to determine the number of commercial and inherently governmental FTEs. Accuracy of inventories depends on agency classification of positions, based on OMB guidance, as well as consistent OMB review of inventories. OMB has updated its FAIR Act inventory guidance annually to address issues identified by agencies (see app. VI), and it consults with agencies to resolve those issues. For example, in April 2001, OMB created a new requirement to report civilian positions designated as inherently governmental. OMB's guidance gives agencies considerable latitude in preparing their inventories to determine if an activity is commercial. OMB officials told us they have provided training on Circular A-76 procedures to OMB's budget examiners, who act as liaisons between OMB and each participating agency. The examiners address questions and provide guidance on an agency-by-agency basis. OMB does not have formal written guidance for reviewing FAIR Act data. Examiners provide verbal guidance on an on-going basis to agencies and discuss concerns agencies have with the FAIR Act and the related competitive sourcing program.
Once agencies submit their inventories, OMB officials review the inventories looking for “red flags”—that is, deviations from the norm, such as one agency listing a position as inherently governmental while others classify the same position as commercial—and then consult with agency officials as necessary on these deviations. However, a number of competitive sourcing officials at two interagency forums expressed concern about the process. For example, one official told us that an OMB program examiner said there were too many function codes in one agency’s inventory. Then, after the agency resubmitted its inventory, the same examiner said the inventory had too few codes. An official from another agency told us that its OMB examiners did not appear familiar with OMB’s own guidance for applying the function codes. Given the lack of formal written OMB guidance on reviewing the FAIR Act inventory data, there is little assurance that OMB’s review of inventories will be consistent across agencies. According to a number of agency officials, implementation of OMB guidance is further complicated due to time constraints. OMB inventory guidance is typically issued in the spring, and agency inventories are due to OMB by June 30. Officials contend that more time is needed to properly implement the guidance. In response, OMB officials pointed out that the basic guidance for developing inventories is set forth in Circular A-76 and agencies can undertake significant steps to prepare their inventories based on the Circular’s guidance. The ultimate goal of the competitive sourcing initiative is to improve government performance and efficiency. To date, however, OMB’s competitive sourcing guidance to federal agencies has focused more on targets and milestones for conducting competitions than on the outcomes the competitions are designed to produce: savings, innovation, and performance improvements. 
Although recent OMB guidance has stressed the need for agencies to be more strategic, the emphasis in the guidance is still more on process than results. The President's Management Agenda established expected results for the competitive sourcing initiative to encourage innovation, increase efficiency, and improve performance of agencies. The Commercial Activities Panel similarly stated that the success of government programs, such as competitive sourcing, should be measured by the results achieved in terms of providing value to the taxpayer. Since the inception of the competitive sourcing initiative in 2001, agencies have faced continual changes to OMB's targets and guidance for conducting public-private competitions. OMB initially set a target for agencies to compete or directly convert at least 5 percent of their full-time equivalent commercial positions by the end of fiscal year 2002, and an additional 10 percent by the end of fiscal year 2003. OMB also set a long-term target for agencies to compete at least 50 percent of commercial FTEs. OMB later moved to agency-specific plans that reflect each agency's own mission and workforce mix. OMB also developed a traffic light system (red, yellow, green) for evaluating the progress agencies are making in implementing these plans. Table 2 shows the chronology of these changes. As shown in table 2, in December 2003, OMB released a memorandum with guidance on developing competitive sourcing plans that would receive a "green" rating under its traffic light evaluation system (see app. VII). The guidance notes the need for a long-range vision, strategic action by agencies, and public-private competitions tailored to the agency's unique mission and goals.
The memorandum also advises agencies to include in their plans their general decision-making process for selecting activities to compete, identification of activities to be competed, potential constraints, and plans for handling activities suitable for competition that the agency does not intend to compete. Neither OMB's initial FTE-based goals nor its revised competitive sourcing goals and traffic light evaluation system calls for agencies to assess how their plans for competitive sourcing could achieve the broader improvements envisioned by the President's Management Agenda or the Commercial Activities Panel. In this regard, the Panel said that arbitrary competition goals should be avoided. In testimony before the Congress, the Comptroller General has stated that OMB's initial competition targets were inappropriate. Similarly, OMB's revised goals continue to emphasize process milestones, such as the number of competitions completed, more than enhancing value through performance improvements and efficiencies. For example, for an agency to receive a "green" rating on OMB's scorecard, it must have developed an OMB-approved green competition plan, have publicly announced standard competitions in accordance with the schedule in its green plan, and have completed 95 percent of streamlined competitions in 90 days. The emphasis throughout OMB's most recent guidance is similarly more on process than on results. Agencies have used a range of criteria to select positions for competition. For most agencies, selection criteria have been based on the size and composition of the workforce, such as attrition rates, skill needs, and difficulty in hiring, as well as the agency's capability to manage the competitions. Because these agencies have focused on meeting targets to announce and complete competitions, they have not assessed broader issues, such as weighing potential improvements against the costs and risks associated with performing the competitions.
Some agencies, however, used a broader set of factors such as the function's contribution to the mission, risks associated with the function being contracted out, and the potential return on investment. (See app. VIII for further discussion on the criteria these agencies have used to select positions for competition.) Officials in most of the agencies we reviewed expressed concern that they lack sufficient staff to perform the additional tasks included in the recently revised Circular A-76. To address this challenge, the Federal Acquisition Council is currently studying agency staffing and skill requirements. As we previously reported, agencies need to build and maintain capacity to manage competitions, build the in-house MEO, and oversee the implementation of competition decisions—skills that the Commercial Activities Panel recognized may require additional capacity. Adding to this complexity is agencies' need to consider their competitive sourcing staffing capacity in the context of their strategic human capital management, an area we have identified as high-risk governmentwide and one of the five President's Management Agenda governmentwide initiatives. For example, we recently reported that DOD's civilian human capital strategic plan does not address the respective roles of civilian and contractor personnel or how DOD plans to link its human capital initiatives with its sourcing plans, such as efforts to outsource non-core responsibilities. Finally, ensuring and maintaining employee morale is also a challenge for agencies. OMB's revised Circular A-76 emphasizes the following key competitive sourcing phases: preparing an inventory of the agency's activities, preliminary planning, announcing the competition, conducting the competition using either a streamlined or standard competition process, implementing the performance decision, and conducting post-competition accountability activities (see fig. 1). Each phase involves a number of tasks.
According to agency officials, many of these tasks require skills and human capital resources beyond those currently available. As we reported in December 2002, in the current environment, acquisition staff can no longer simply be purchasers or process managers. Rather, they need to be adept at analyzing business problems and helping to develop acquisition strategies. For example, human capital, job, and market analysis skills are needed to inventory agency activities; benchmarking and strategic and workforce planning skills are needed to conduct the preliminary planning; organizational analysis, contract management, and cost analysis skills are needed to conduct competitions; and financial management and oversight skills are needed in the implementation and post-competition phase. Some skills, such as labor relations and information technology, are required throughout the competitive sourcing process. Despite these additional personnel requirements, many department-level offices in the civilian agencies we reviewed have only one or two full-time staff to complete FAIR Act inventories, interpret new laws and regulations, and oversee agency selection of positions to compete and the competitions. Officials at the six civilian agencies we reviewed stated it would be helpful to have additional personnel well versed in the use of Circular A-76. Even DOD, the leader among federal agencies in competitive sourcing and A-76, may face human capital challenges in running its competition program. According to a cognizant Army competitive sourcing official who has analyzed this issue, the Army's implementation of the revised Circular A-76 will require approximately 100 to 150 additional personnel, including attorneys, human resources specialists, and contracting officials. A final determination on Army staffing requirements and capabilities has not been made.
As we reported in June 2003, building the capacity to conduct competitions as fairly, effectively, and efficiently as possible will likely be a challenge for all agencies, but particularly those that have not previously invested in competitive sourcing. The Commercial Activities Panel also recognized in its recommendations that accurate cost comparisons, accountability, and fairness would require high-level commitment from leadership; adequate, sustained attention and resources; and technical and other assistance in structuring the MEO, as well as centralized teams of trained personnel to conduct the cost comparisons. According to officials of the Federal Acquisition Council, its competitive sourcing working group is now inventorying agency resources, skill sets, and training needs required to address this challenge. At the same time, agencies we reviewed are challenged to maintain employee morale. While most agencies have established vehicles for communicating their competitive sourcing goals internally—such as work groups and Web sites—officials from OMB report that it is difficult to convince employees that the current competitive sourcing program is designed to create value and improve efficiency, not to reduce the size of the federal workforce—as was the case with past competitive sourcing efforts. Funding their competitive sourcing programs also has been cited as a challenge for agencies. Officials in some of the agencies we reviewed cited limited funding as a barrier to implementing their competitive sourcing programs. Such program costs can be significant—at both the department and agency levels. For example, USDA reported spending a total of $36.3 million in fiscal years 2002 and 2003 on its competitive sourcing program. The Forest Service, part of USDA, accounted for $18.7 million of USDA's $36.3 million on competitive sourcing.
In fiscal year 2003, NIH reported spending approximately $3.5 million on contract support for two competitions involving more than 1,400 positions. The National Park Service's financial needs prompted the agency to ask the Congress for permission to reprogram $1.1 million to help pay for its competitive sourcing program. Other agency officials stated that funding to finance their competitive sourcing initiatives was taken from other agency activities. As we have previously reported, DOD has also been challenged to ensure adequate funding for implementing competitive sourcing under Circular A-76. Finally, in August 2003, the Department of Veterans Affairs terminated all competitive sourcing studies after its General Counsel determined that the prohibition regarding funds from the three health care appropriation accounts under 38 U.S.C. 8110(a)(5) is applicable. According to officials from most of the agencies we reviewed, they have funded their competitive sourcing programs using existing funds. However, some officials told us that OMB recently instructed their agencies to include a line item in their fiscal year 2005 budget request for their competitive sourcing programs. Doing so should provide agencies with a more stable fiscal environment in which to plan and conduct competitions. Several agencies have developed strategic and transparent competitive sourcing approaches by integrating their strategic and human capital plans with their competitive sourcing plans—an approach encouraged by the Commercial Activities Panel. These approaches have gone beyond the requirement to identify positions for competition as called for in OMB's initial FTE targets. These approaches employ broader functional assessments of FAIR Act inventories and more comprehensive analysis of factors such as mission impact, potential savings, risks, current level of efficiency, market conditions, and current and projected workforce profiles.
Not only do these agencies' processes identify viable activities for competition, they also provide greater transparency in this critical part of the process. Some of these approaches are summarized below. Appendix VIII contains a more detailed discussion of these approaches. While it is too early to tell whether the various agencies' approaches will be effective, a key to success will be learning from them and adapting them to each agency's unique circumstances. OMB has recognized the challenges that agencies have faced in implementing their competitive sourcing programs and recently publicly endorsed agencies' use of a more strategic approach to competitive sourcing. For example, OMB supported the innovative approaches some agencies have taken to ensure sound planning and effective use of public-private competitions. OMB further stated that consulting with program, human resources, acquisition, budget, and legal professionals facilitates effective communication and a broad-based understanding of competitive sourcing actions within the agency. Officials from HHS' National Institutes of Health told us they used a steering committee of senior-level officials to determine the activities to be competed under its competitive sourcing program. This committee used a systematic approach that considered FAIR Act inventory data, the knowledge and experience of program managers, and a decision support software application to capture the judgments of managers familiar with the commercial activity under study. The software application used a set of evaluation questions that assessed a function's relationship to NIH's mission, human capital, and risk, and it recorded and scored managers' responses. Committee officials then reviewed the scores produced by the software, considering factors such as (1) the activity's impact on NIH's mission, (2) costs, (3) socioeconomic impacts, and (4) potential advantages to competing the activity.
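The weighted-scoring step that such a decision support application performs can be illustrated with a minimal sketch. The evaluation questions, weights, response scale, and activity names below are hypothetical illustrations, not NIH's actual criteria or data:

```python
# Hypothetical sketch of a decision-support scoring step: managers answer
# evaluation questions about a commercial activity on a 1-5 scale; the
# responses are weighted and summed so a steering committee can rank
# candidate activities for competition. All questions, weights, and
# activities here are illustrative assumptions.

QUESTIONS = {
    "mission_impact": 0.4,   # how separable is the activity from the mission?
    "human_capital":  0.3,   # workforce considerations if the activity changes hands
    "market_depth":   0.3,   # are there capable private-sector providers?
}

def score_activity(responses):
    """Weighted sum of manager responses, each on a 1-5 scale."""
    return sum(QUESTIONS[q] * responses[q] for q in QUESTIONS)

def rank_candidates(activities):
    """Return activities sorted from highest to lowest composite score."""
    return sorted(activities,
                  key=lambda a: score_activity(a["responses"]),
                  reverse=True)

candidates = [
    {"name": "grants processing", "responses":
        {"mission_impact": 2, "human_capital": 3, "market_depth": 5}},
    {"name": "lab support", "responses":
        {"mission_impact": 4, "human_capital": 4, "market_depth": 2}},
]

for activity in rank_candidates(candidates):
    print(activity["name"], round(score_activity(activity["responses"]), 2))
```

The software's scores are only one input: as the report notes, committee officials then review them alongside qualitative factors such as mission impact, costs, and socioeconomic effects before deciding which activities to compete.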
NIH officials also stated that once a decision has been made to compete an activity, consideration is given to re-engineering the applicable business process, whether the activity remains in-house or undergoes a public-private competition. Officials from the Internal Revenue Service, a bureau of the Department of the Treasury, told us they used business case analysis and an enterprisewide approach to determine if a commercial function has the potential to create significant business process improvements and a sizable return on investment. The business case analysis, which is completed in approximately 4 to 6 months, calculates the economic benefits of potential alternatives based on IRS responses to critical questions such as: Is the function core to the mission? What does the function cost? Is there potential to reduce cost and/or improve productivity by competing the function? How does the function fit into other current or planned strategic projects? An IRS competitive sourcing official cited several benefits from the business case approach used during the planning stage: up-front consideration of major decision variables such as economics, market research, and risk; involvement of top-level management and leadership; the ability to test candidate projects against strategic goals and performance improvement objectives; and low investment of resources to qualify or reject an activity as a competitive sourcing project. The Army's "core, non-core concept" for assessing functions employed a more strategic approach. Initially, the Army's approach for classifying positions for its inventory focused on determining whether functions were core or non-core to the agency's mission. However, the Army found that such a distinction did not, by itself, provide a good basis for a decision, and that other factors, such as risk and operational considerations, also must be considered.
A cognizant Army official told us that focusing on positions does not consider how well the function is being performed or who should perform the function—military, civilian, contractor, or some combination of these. In contrast, the Army learned that looking at broader functional areas, such as utilities and family housing, as opposed to positions, should allow it to better identify potential positions for competition. For example, functions such as childcare and equal employment opportunity operations, while not inherently governmental, are exempt from competitive sourcing because they are important for reasons such as military morale and quality of life. According to a DOD competitive sourcing official, the Army's approach is evolving and is unique within DOD. Officials at four civilian agencies in our review expressed concerns similar to those the Army official raised about developing their inventories. Officials told us that given the investment of time and resources required to develop an inventory, agencies should focus on mission-related functions rather than individual positions. The Department of Education's "One-ED" initiative also used strategic approaches in identifying candidates for competition. One-ED covers all elements of major departmental operations, and seeks management changes through integrated human capital reform, competitive sourcing, and organizational restructuring. As part of its broader approach, the department developed its FAIR Act inventory by analyzing key processes in the activities under consideration. It then used the results of this process to restructure positions as either commercial or inherently governmental and frame a broader analysis of the function's activities. The ultimate success of the administration's competitive sourcing initiative hinges on the extent to which agencies achieve the efficiencies, innovation, and improved performance envisioned by the President's Management Agenda.
Successful implementation of this initiative requires results-oriented goals and strategies; clear criteria and analysis to support agency decisions; and adequate resources. OMB, in its leadership role, has a difficult task in guiding this initiative and must balance the need for transparency and consistency with the flexibility agencies need in implementing significant changes to operations. While OMB is addressing the funding and human capital challenges that agencies face, it needs to ensure that the FAIR Act inventory and goal-setting process is more strategic and helpful to agencies in carrying out their competitive sourcing responsibilities. Recognizing that agency missions, organizational structures, and workforce composition vary widely, the Commercial Activities Panel provided a framework of sourcing principles that provide an implementation roadmap for this initiative. However, OMB’s current emphasis on meeting implementation milestones and targets does not fully align with these principles or ensure achievement of the ultimate goal of increasing efficiency and improving the performance of commercial activities. OMB needs to work with agencies to ensure their long-range plans are strategically focused. A more strategic approach focused on achieving improvement outcomes would help focus agency efforts and better achieve the results envisioned at the outset of the competitive sourcing initiative. 
To complement efforts already underway that address funding and human capital challenges and to help agencies realize the potential benefits of competitive sourcing and ensure greater transparency and accountability, we recommend that the Director of OMB take the following three actions: ensure greater consistency in the classification of positions as commercial or inherently governmental when positions contain a mix of commercial and inherently governmental tasks by reviewing current guidance and developing additional guidelines, as necessary, for agencies and OMB examiners; work with agencies to ensure they are more strategic in their sourcing decisions and are identifying broader functional areas and/or enterprisewide activities, as appropriate, for possible public-private competition; and require agencies to develop competition plans that focus on achieving measurable efficiency and performance improvement outcomes. We provided a draft of this report to OMB and the seven agencies for their review and comment. OMB provided oral comments concurring with our three recommendations, but disagreed with our conclusion that OMB’s recent guidance on competitive sourcing emphasized process more than results. Based on our review of the factors OMB considers in its review of agency plans, we continue to believe that factors such as the agency’s ability to conduct competitions are emphasized more than results such as expected savings and the potential for improved performance as called for in the President’s Management Agenda. On the first recommendation, OMB officials concurred that there needs to be consistency in the classification of positions and stated that OMB will review its current guidance in light of the findings in this report to determine how best to help agencies that have had difficulties in classifying their activities. OMB officials stated that they would consider additional guidelines as necessary. 
OMB officials, while agreeing with the second and third recommendations, emphasized that long-range “green” plans are intended to ensure that agencies think strategically in choosing activities for review and routinely take into account the type of factors that will ensure successful application of competition. OMB reiterated that before an agency may receive a green score on the President’s Management Agenda scorecard, the agency must have an approved green competition plan. OMB stated that its evaluation of plans will not be one-dimensional, but instead will account for each agency’s unique mission and workforce needs and demonstrated ability to conduct reviews in a reasonable and responsible manner. OMB will also review agency plans to understand how the agency has selected activities and their potential for savings and performance improvements. However, while OMB’s guidance mentions the importance of improving the cost effectiveness and quality of commercial operations, we note that the guidance does not cite the potential for savings or improved performance as factors OMB will look for when reviewing agency green plans. The Department of Agriculture and the Department of the Interior concurred with our report. The Department of the Treasury stated that the report’s recommendations were timely. The Department of Education and DOD did not have any comments. The Department of the Interior, HHS, OMB and VA provided technical comments, which were incorporated as appropriate. We are sending copies of this report to other interested congressional committees; the Director, Office of Management and Budget; the Administrator, Office of Federal Procurement Policy; and the Secretaries of Agriculture, Defense, Education, Health and Human Services, the Interior, the Treasury, and Veterans Affairs. We also will provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you have any questions about this report, please contact me at (202) 512-4841 or John K. Needham at (202) 512-5274. Other major contributors to this report were Robert L. Ackley, Christina M. Cromley, Thomas A. Flaherty, Rosa M. Johnson, Nancy T. Lively, William M. McPhail, Karen M. Sloan, Marilyn K. Wasleski, and Anthony J. Wysocki. To describe the progress DOD and the civilian agencies have made in establishing the competitive sourcing program in response to the President’s Management Agenda, we interviewed officials at the Department of Agriculture; DOD; and the Departments of Education, Health and Human Services, the Interior, the Treasury, and Veterans Affairs. We selected the agencies based on the number of commercial positions in their 2001 FAIR Act inventories. The agencies selected represent 84 percent of the 2002 FAIR Act inventory of commercial positions among the 26 executive branch agencies implementing the President’s Management Agenda. We selected the Department of Education because OMB highlighted its unique approach to implementing the competitive sourcing initiative. We obtained and reviewed pertinent documents from the seven government agencies. We also met with members of the Civilian Agency Competitive Sourcing Working Group, executive members of the Federal Acquisition Council and its Working Group on Competitive Sourcing, and attended several competitive sourcing conferences and workshops. We reviewed statutes and circulars governing this program and reports on competitive sourcing. We also reviewed reports on related subjects such as human capital, costs, and savings that were issued by academic and independent research organizations. To identify what, if any, challenges exist for the agencies in implementing the competitive sourcing initiative, we interviewed senior-level officials at the seven competitive sourcing programs. 
In identifying the challenges agencies face, we also reviewed OMB and agency guidance as well as the criteria and data used to develop inventories and select the activities to study and compete. We discussed management expertise, training requirements, planned contract support and contract oversight, and the timeline and budget impact of achieving fiscal year 2003 goals, as well as intra-agency interactions, including those with budget and human resources offices. To identify the strategies agencies are using to select activities for competition, we discussed in detail the alternatives and strategies agencies used to take a more strategic approach and obtained contractor documents, where available. These studies, conducted in support of a “compete/no compete” decision, gave us insight regarding decision criteria, competitive sourcing strategies, and costs involved. We did not evaluate savings from completed competitions during this review because the program is new and such data are limited. The FAIR Act inventory data used in this report have been reviewed by OMB, reported to Congress, and made available to the public and cover the years 2000, 2001, and 2002. We did not independently verify this information. OMB-reviewed data for 2003 were not available for all agencies at the time of our review. We performed our review between April and December 2003 in accordance with generally accepted government auditing standards. In 2000, Congress enacted legislation creating the Commercial Activities Panel and mandating a study of the government’s competitive sourcing process. The Commercial Activities Panel’s mission was to devise a set of recommendations that would improve the government’s sourcing framework and processes so that they would reflect a balance among taxpayer interests, government needs, employee rights, and contractor concerns. In April 2002, the panel released its report with recommendations that included 10 sourcing principles to guide federal sourcing policy.
The panel believed that federal sourcing policy should: support agency missions, goals, and objectives; be consistent with human capital practices designed to attract, motivate, retain, and reward a high-performing federal workforce; recognize that inherently governmental functions and certain others should be performed by federal workers; create incentives and processes that foster high-performing, efficient, and effective organizations throughout the federal government; be based on a clear, transparent, and consistently applied process; avoid arbitrary FTE or other arbitrary numerical goals; establish a process that, for activities that may be competitively sourced, would permit public and private sources to participate in competitions for work currently performed in-house and work currently contracted to the private sector as well as new work; ensure that competitions are conducted fairly, effectively, and efficiently; ensure that competitions involve a process that considers both quality and cost factors; and provide for accountability in all sourcing decisions.

Appendix III: 2002 FAIR Act Inventories

According to DOD, these FAIR Act inventory numbers do not include military personnel, foreign nationals, depot-level maintenance and repair commercial activities, the DOD Inspector General, civilian performance of any commercial activities that have already been contracted out, or the DOD intelligence community. (Table: Positions Studied (FTEs); Results of Completed Studies (FTEs)) Interior provided only aggregated data for 2002 and 2003. Over this 2-year period, 2,483 FTEs were studied. Of those FTEs, 968 remained in-house and 1,515 were contracted out. Data represent bureaus remaining after transfers to the Department of Homeland Security. Actions on 3,449 FTEs are underway; some are in the planning stage, while others await senior management approval before results are announced. Management Services.
This study began in 1999, competition was announced in 2001, and the contract was awarded in August 2003. In addition, VA did not initiate any studies in 2002. This activity had 270 FTEs at the time the study was announced in 1999. The Most Efficient Organization provided for 120 FTEs if the work was retained in-house. VA awarded the contract to the private sector in 2003. The first submission of inventory data was 1999. Directed agencies to also submit a separate report listing their inherently governmental positions. Directed agencies to provide a single inventory submission that reflects both the agency’s inherently governmental FTE positions and its commercial FTE positions. Once reviewed by OMB, agencies must provide a listing of their commercial FTE positions to the Congress and the public. Instructed agencies that they should anticipate the possibility that after their list of inherently governmental positions has been reviewed, it too may be released to the public. Directed agencies to submit their FAIR Act inventory in two parts—(1) a list of commercial activities performed by FTE civilian personnel and (2) a list of inherently governmental activities performed by FTE civilian personnel. After OMB reviews these lists, both will be released to the Congress and the public. Instructed agencies in developing their 2003 inventories to justify in writing all commercial positions that they consider as not being appropriate for private sector performance.
Provided guidance for preparing inventories; directs agencies to annually submit inventories of (1) their commercial activities performed by government personnel, (2) inherently governmental activities performed by government personnel, and (3) a summary report that identifies aggregate commercial and inherently governmental inventory data. (Contained in revised Circular A-76) Instructed agencies to justify in writing all inherently governmental positions and all commercial positions classified as not appropriate for private sector performance. (Contained in revised Circular A-76)

Appendix VII: OMB Scorecard Criteria for the Competitive Sourcing Initiative

Yellow status criteria: an OMB-approved “yellow” competition plan to compete commercial activities available for competition; completed one standard competition or publicly announced standard competitions that exceed the number of positions identified for competition in the agency’s yellow competition plan; since January 2001, completed at least 10 competitions (no minimum number of positions required per competition); in the past two quarters, completed 75% of streamlined competitions in a 90-day timeframe; and in the past two quarters, canceled less than 20% of publicly announced standard and streamlined competitions.

Green status criteria: an OMB-approved “green” competition plan to compete commercial activities available for competition; publicly announced standard competitions in accordance with the schedule outlined in the agency’s “green” competition plan; in the past year, completed 90% of all standard competitions in a 12-month timeframe; in the past year, completed 95% of all streamlined competitions in a 90-day timeframe; in the past year, canceled fewer than 10% of publicly announced standard and streamlined competitions; and OMB-approved justifications for all categories of commercial activities exempt from competition.
Several agencies used approaches that considered and balanced multiple agency interests within the competitive sourcing environment. The following discussion provides a more detailed description of these approaches. NIH has developed a more strategic competitive sourcing approach that includes the use of software and the integration of the agency’s human capital and strategic plans. According to NIH officials, in 2002, NIH appointed a Commercial Activities Steering Committee, composed of 14 senior-level officials, to work with NIH’s 27 centers to determine the activities to be competed under its competitive sourcing program. The committee used FAIR Act inventory data, knowledge and experience, and a decision support software application that provides objective and analytical results. The software enabled managers to respond to NIH-developed questions related to mission effectiveness, human capital, and demand and risk (for example, the stability of demand for the function and the exposure of sensitive information if the function were outsourced). The software assigns weights to each response—using NIH-developed values—and generates scores for each activity under study. Committee officials then review the scores, considering factors such as (1) the activity’s impact on NIH’s mission, (2) costs, (3) socioeconomic impacts, and (4) potential advantages to competing the activity. NIH officials also stated that once a decision has been made to compete an activity, consideration might be given to re-engineering the applicable business process, whether it remains in-house or undergoes a public-private competition. Once the Steering Committee has made its competitive sourcing decision, the Commercial Activities Review Team, with contractor assistance, implements the committee’s decisions.
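The weighted-scoring step described above can be sketched in a few lines. The question names, response scale, and weights below are illustrative assumptions; the report does not disclose NIH's actual questions or values.

```python
# Hypothetical sketch of a decision-support scoring step like the one
# NIH's software performs. Question names and weights are illustrative
# assumptions, not NIH's actual values.
def activity_score(responses, weights):
    """Combine manager responses to each question into a single
    weighted score for the activity under study."""
    return sum(weights[question] * responses[question] for question in weights)

# Example: three assumed questions scored on a 0-10 scale.
responses = {"mission_impact": 7, "demand_stability": 4, "outsourcing_risk": 6}
weights = {"mission_impact": 0.5, "demand_stability": 0.3, "outsourcing_risk": 0.2}
score = activity_score(responses, weights)  # approximately 5.9
```

Committee officials would then review such scores alongside qualitative factors, as the report describes.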
Further, in an effort to add rigor to its competitive sourcing process, NIH in a recent competition used a contractor to mitigate potential risks. NIH convened a panel of nine experts from the Georgia Institute of Technology to analyze and evaluate a request for proposal and its related performance work statement concerning real estate property management services at six installations—the estimated value of which exceeds $100 million each year. In light of the risks it could encounter if the contract were deficient from a scope, technical, business, and/or legal standpoint, NIH asked the panel to review the request for proposal developed in-house and determine whether the contract documents were properly conceived, logically organized, clearly written, and sufficiently complete and accurate. As a result of its analysis, the panel identified several areas where the request for proposal and performance work statement subjected NIH to risks. NIH officials reviewed the risks and made appropriate changes to these documents. Finally, NIH officials sought advice from and coordinated with HHS’ Office of Strategic Management and Planning and Human Capital Office to link their competitive sourcing program to HHS’ strategic and human capital plans. According to an IRS official, IRS, a bureau within the Department of the Treasury, developed a strategic approach to competitive sourcing, using a business case analysis methodology employed by leading industry firms to determine whether commercial functions within a business division have the potential to create significant business process improvements along with a sizeable return on investment. Based on the results of the business case analyses, the Strategy and Resources Committee, headed by the Deputy Commissioner of Operations and Support, decides whether to compete (through public-private competition) or not compete the functions.
According to IRS officials, this process enhances the opportunities to make smart business decisions that are aligned with and supportive of the IRS Strategic Business Plan. IRS has focused its competitive sourcing efforts primarily on more strategic and enterprise-wide competitions because it has determined that this approach makes more economic sense than identifying candidates in smaller groups. The official stated that IRS’s initial step for identifying the functions that will be considered for a business case analysis is its review of the FAIR Act inventory, which has been merged with the IRS personnel staffing database in a software application. This application, unique among the agencies that we reviewed, crosswalks the FAIR Act inventory data with personnel staffing data to provide a comprehensive analysis of the various commercial function groupings across the IRS. After identifying these groupings, the bureau’s subject matter experts and high-level managers, along with hired contractors, conduct business case analyses of these positions. As we reported, the business case analyses, which are completed in approximately 4 to 6 months, calculate the economic benefits of potential alternatives based on IRS responses to a number of critical questions: Is the function core to the mission? How much does the function cost? Is there potential to reduce cost and/or improve productivity by competing the function? How does the function fit into other current or planned strategic projects? Based on the responses to these questions and analyses of current operations, market research, and a Most Efficient Organization (MEO) design, IRS calculates and considers the economic benefits of each potential alternative and the upfront and recurring investments required to achieve and maintain efficiencies.
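A business case of the kind described above ultimately compares expected benefits against upfront and recurring investments. The following is a minimal sketch under assumed figures; it is not IRS's actual model, which the report does not publish.

```python
def business_case_roi(annual_savings, upfront_investment,
                      annual_recurring_cost, horizon_years):
    """Net benefit over the analysis horizon, relative to the upfront
    investment. A rough stand-in for the economic test in a business
    case; all figures supplied by the caller are hypothetical."""
    net_benefit = (annual_savings - annual_recurring_cost) * horizon_years \
        - upfront_investment
    return net_benefit / upfront_investment

# Assumed illustration (millions of dollars): $5M/year savings, $2M upfront,
# $1M/year recurring, 5-year horizon -> net benefit of $18M, ROI of 9.0.
roi = business_case_roi(5, 2, 1, 5)
```

In practice, as the report notes, IRS also weighs non-economic factors such as strategic alignment and investment risk before deciding whether to compete a function.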
IRS then makes a decision to compete or not compete based on weighted values assigned to IRS strategic business alignment, investment risks, return on investment, FAIR Act goal alignment, and alignment with President’s Management Agenda goals. A key success factor in this approach is expert validation of the assumptions used in the business case as well as the inclusion of significant direct and indirect costs associated with the function. According to an IRS official, if competing a function makes the best business sense, IRS appoints a team leader who selects a team and obtains contractor support to plan and develop the performance work statement. Throughout the entire business case analysis and competitive sourcing lifecycle, the IRS Office of Competitive Sourcing is engaged and provides support to the various teams. Officials from IRS’ competitive sourcing program cited many benefits from the business case approach used during the preliminary planning stage: up-front consideration of major decision variables such as economics, market research, and risk; involvement of top-level management and leadership at the very early stages of the process; an opportunity to test candidate projects against strategic goals and performance improvement objectives; and a low investment requirement to qualify or reject an activity as a competitive sourcing project. According to an IRS official, while the time and cost to make a decision to compete or not to compete may seem excessive, once IRS conducts a public-private competition, it has confidence in the business case’s projected return on investment and an understanding of how conducting a particular set of business functions fits into the IRS strategic plan for business improvements and human capital goals. The Army’s experience in using a strategic approach to classify positions offers lessons for other agencies in identifying positions for competitive sourcing studies.
The Army’s effort to determine whether functions were core or non-core to the agency’s mission found that such a distinction did not, by itself, adequately inform sourcing decisions. For example, the Army’s core competency review showed that designating a function as “core” does not necessarily mean that in-house employees should perform the function or necessarily preclude competitive sourcing of the function. As we reported, Army officials found that other factors, such as risk and operational considerations, must also be considered. The Army’s effort assumed that all commercial positions were non-core to its mission and thus potential candidates for performance by the private sector or other government agencies. However, Army officials recognized that, in many instances, these “non-core” functions would require additional analysis to determine potential risks if the function were contracted. The Army considered four categories of risk: force management, operational, future challenges, and institutional. For example, Army officials determined that many medical functions, which are not classified as inherently governmental, could be considered core in some circumstances. Although medical functions typically do not require unique military knowledge or skills, medical activities in theater need to be performed by in-house personnel because contracting for medical support in host nations could present significant risk to U.S. armed forces. Consequently, the Army has determined that the in-theater medical mission is a critical element of the Army’s ability to accomplish its core competencies. Other medical functions could be considered both core and non-core. For example, optical fabrication—the ability to produce replacement spectacles and protective mask inserts—is considered a core competency in support of the operational forces close to the point of need in the area of engagement. However, the same function performed in the United States is not core.
The Army also determined that its casualty and mortuary affairs function is neither a core nor an inherently governmental function. However, national policy dictates that Army officials notify families of a casualty in person. In June 2002, the Department of Education launched, with OMB approval, an ambitious management reform known as the “One-ED” concept. One-ED seeks to transform departmental operations through the integration of human capital reform, competitive sourcing, and organizational restructuring. As part of its One-ED approach, the Department developed its FAIR Act inventory by first analyzing key processes. It then used the results of this process to reclassify positions as either commercial or inherently governmental. As a result of this process, Education’s reported inventory data have changed significantly in the past few years, and according to senior officials, the data are now more accurate and concise. One-ED reviews cover selected elements of major departmental operations and are being implemented in four phases over a period of three years. In each phase, the Department (1) identifies specific business functions for review, (2) conducts a business case analysis of each function, and (3) decides whether to re-engineer the function or compete it with the private sector. Phase I, which concluded in mid-2003, focused on agency-wide support functions, such as human resources, payment processing, and legal review. As a result, five agency-wide support functions will be competed with the private sector and four will be re-engineered and retained in-house. In making this decision, nine teams—comprised of approximately sixty employees knowledgeable about the function being studied and assisted by contractor personnel trained in developing business case analyses—reviewed the functions and reported their findings to senior management.
These teams considered such factors as the skill sets and competencies required to perform the functions being reviewed, the potential risks associated with outsourcing the positions, and the relationship of the business function to the Department’s strategic planning. An Executive Management Team—chaired by the Deputy Secretary and staffed by senior Department officials—made the final determination using the information developed by the teams as well as other data. The Department initiated four standard competitions and one streamlined competition in fiscal year 2003. In addition, the Department is in the process of implementing proposals related to those business functions that were identified for in-house re-engineering. These projects were not completed at the time of our review. The Department’s Office of Inspector General will report on its assessment of the implementation of the One-ED initiative in early 2004.
In August 2001, the administration announced competitive sourcing as one of five initiatives in the President's Management Agenda. Under competitive sourcing, federal agencies open their commercial activities to competition among public and private sector sources. While competitive sourcing is expected to encourage innovation and improve efficiency and performance, it represents a major management change for most agencies. This report describes the progress selected agencies have made in establishing a competitive sourcing program, identifies major challenges these agencies are facing, and discusses strategies they are using to select activities for competition. Since the President announced competitive sourcing as a governmentwide initiative, the six civilian agencies GAO reviewed created a basic infrastructure for their competitive sourcing programs, including establishing offices, appointing officials, hiring staff and consultants, issuing guidance, and conducting training. With infrastructures in place and leadership involvement, each agency has developed competitive sourcing plans and conducted some competitions. The Department of Defense (DOD) has had an extensive competitive sourcing program since the mid-1990s. Interagency forums for sharing competitive sourcing information also have been established. While such activities are underway, each agency GAO reviewed, including DOD, cited several significant challenges in achieving its competitive sourcing goals. Key among these is maintaining workforce inventories that distinguish inherently governmental positions from commercial positions--a prerequisite to identifying potential positions to compete. Agencies also have been challenged to develop competitive sourcing approaches that would improve efficiency, in part because agencies have focused more on following OMB guidance on the number of positions to compete--not on achieving savings and improving performance. 
Ensuring adequate personnel with the skills needed to run a competitive sourcing program also challenged agencies. Many civilian department-level offices have only one or two full-time staff to interpret new laws, implement new OMB guidance, maintain inventories of competable positions and activities, and oversee agency competitions. The Federal Acquisition Council is currently identifying agency staffing needs to address this challenge. Finally, some of the civilian agencies we reviewed reported funding challenges in implementing their competitive sourcing programs. OMB told agencies to include a line item for competitive sourcing activities in their fiscal year 2005 budget requests. Several agencies integrated their strategic, human capital, and competitive sourcing plans--an approach encouraged by the Commercial Activities Panel, which was convened to conduct a congressionally mandated study of the competitive sourcing process. For example, the Internal Revenue Service (IRS) used business case analyses to assess the economic benefits of various sourcing alternatives. An IRS official said this approach required minimal investment to determine an activity's suitability for competitive sourcing. The National Institutes of Health, the Army, and the Department of Education also took a strategic approach to competitive sourcing. OMB's task in balancing the need for transparency and consistency with the flexibility agencies need is not an easy one. While OMB is addressing funding and human capital challenges, it needs to do more to assure that the agencies' inventories of commercial positions and goal-setting processes are more strategic and helpful to agencies in achieving savings and improving performance.
Established in 1965, HUD is the principal federal agency responsible for programs in four areas—housing assistance, community development, housing finance, and regulatory issues related to areas such as lead-based paint abatement and fair housing. To carry out its many responsibilities, HUD was staffed by 9,885 employees as of January 1998. Housing Assistance: HUD provides (1) public housing assistance through allocations to public housing authorities and (2) private-market housing assistance through rental subsidies for properties, referred to as project-based assistance, or for tenants, known as tenant-based assistance. In contrast to entitlement programs, which provide benefits to all who qualify, the benefits of HUD’s housing assistance programs are limited by budgetary constraints to only about one-fourth of those who are eligible. Community Development: Primarily through grants to states, large metropolitan areas called entitlement areas, small cities, towns, and counties, HUD provides funds for local economic development, housing development, and assistance to the homeless. The funding for some programs, such as those for the homeless, may also be distributed directly to nonprofit groups and organizations. Housing Finance: The Federal Housing Administration (FHA) insures lenders—including mortgage banks, commercial banks, savings banks, and savings and loan associations—against losses on mortgages for single-family properties, multifamily properties, and other facilities. The Government National Mortgage Association, a government-owned corporation within HUD, guarantees investors the timely payment of principal and interest on securities issued by lenders of FHA-insured and VA- and Rural Housing Service-guaranteed loans.
Regulatory Issues: HUD is responsible for regulating interstate land sales, home mortgage settlement services, manufactured housing, lead-based paint abatement, and home mortgage disclosures. HUD also supports fair housing programs and is partially responsible for enforcing federal fair housing laws. HUD’s programs are supported through annual appropriations (discretionary budget authority) that are subject to discretionary spending limits under the Budget Enforcement Act, as amended. For fiscal year 1999, HUD requested about $25 billion in discretionary budget authority, which, in combination with available budget authority from prior years, will help support about $33.2 billion in outlays. This request represents a 4-percent increase in budget authority and a negligible increase in estimated outlays over fiscal year 1998. As we reported in February 1998, accurate budget estimates are essential for federal agencies to meet their fiscal responsibilities because such estimates facilitate sound policy decisions and effective funding trade-offs. Unfortunately, for years HUD had difficulty submitting accurate budget estimates. Recognizing the need to improve its budget process with better oversight and documentation, HUD has developed and begun implementing corrective actions. HUD recently placed all departmental budget operations under the Office of the Chief Financial Officer (CFO) to ensure that budgeting is integrated with financial management oversight. In the past, HUD’s budget operations have been fragmented and disjointed, preventing clear accountability and necessary coordination. This problem was the result of the CFO’s inability to link budgeting with strategic planning and financial management, according to HUD’s Management Reform Plan. As another improvement, HUD is hiring a chief financial officer for all program divisions to mirror the operations of the Department’s Office of the CFO. 
Previously, the program division’s budget director and comptroller reported to a deputy assistant secretary. Under the new structure, the division’s budget director and comptroller will report to the division’s CFO, who will coordinate with the agency’s CFO and the division’s program staff to ensure adequate oversight. In addition to organizational changes, the Office of the CFO plans to develop budget estimating policies and procedures that build in enough time for adequate coordination, oversight, and communication. However, HUD did not implement many of the changes in time to affect the Department’s fiscal year 1999 budget estimate. According to HUD’s Director of the Office of Budget, time constraints prevented his office from performing analytical reviews of program office submissions. He said that his Office was limited to reviewing the fiscal year 1999 budget estimates for numerical accuracy and that this Office could not always question the estimates’ reasonableness or underlying basis. He believes that the planned improvements should be operational in time for HUD’s fiscal year 2000 budget submission. HUD has significantly improved its budgeting for tenant-based contract renewals by omitting duplicative contingency allowances and accounting for excess budget authority in the Section 8 certificate and voucher programs. However, HUD’s request for $4.7 billion to renew Section 8 tenant-based contracts could still be reduced by the amount of excess budget authority in HUD’s moderate rehabilitation program, or $439 million. In addition, because excess budget authority exists in the moderate rehabilitation program, HUD may not need the $70 million it has requested for moderate rehabilitation amendments. In contrast to the Department’s Section 8 tenant-based contract renewal request for fiscal year 1998, HUD’s request for fiscal year 1999 does not appear to contain any duplicative contingency factors. 
Instead, HUD’s improved budget estimate for fiscal year 1999 is based on actual expenditure data adjusted for inflation. In addition, HUD used $3.7 billion of excess budget authority recaptured from the Section 8 tenant-based program to offset the cost of contract renewals in fiscal year 1999. Although HUD has improved its budget-estimating process, we believe that the Department is still overestimating its need for contract renewal funding because $439 million in excess budget authority in the moderate rehabilitation program could be used to renew expiring contracts in lieu of requesting new budget authority. As shown in table 1, HUD determined in January 1998 that the gross excess budget authority in the Section 8 moderate rehabilitation program was about $814 million. Of that amount, the Department estimates that it will need $191 million to meet funding shortfalls in the program and $184 million to cover contingencies, such as decreases in tenants’ incomes or unexpected rent increases. (HUD believes, however, that it needs statutory authority from the Congress to use excess budget authority to cover some of these funding shortfalls.) The remaining $439 million is the budget authority that HUD considers to be in excess of the Section 8 program’s needs. According to HUD officials, the Department has not decided yet whether to recapture this budget authority or how much it would recapture. Although HUD did not complete its analysis of excess budget authority in the moderate rehabilitation program in time to include this amount in its initial budget submission for fiscal year 1999, we believe sufficient time remains before conference for HUD to revise its fiscal year 1999 request to reflect the $439 million in excess budget authority available to reduce the cost of renewing contracts. HUD’s January 1998 analysis also shows that a request for $70 million to amend Section 8 moderate rehabilitation contracts may not be needed.
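The excess-authority arithmetic in HUD's January 1998 analysis reduces to a simple subtraction. A minimal sketch, using the figures above (in millions of dollars); the function name is ours, for illustration only:

```python
def net_excess_authority(gross_excess, shortfall_reserve, contingency_reserve):
    """Budget authority in excess of program needs: gross excess less
    amounts reserved for funding shortfalls and for contingencies."""
    return gross_excess - shortfall_reserve - contingency_reserve

# HUD's January 1998 figures: $814M gross excess, $191M reserved for
# shortfalls, $184M for contingencies, leaving $439M in excess authority.
excess = net_excess_authority(814, 191, 184)  # 439
```

That $439 million is the amount we believe could offset the fiscal year 1999 contract renewal request.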
Generally, amending contracts refers to changing specific housing assistance contracts to add more funding. According to HUD officials, the $70 million was included in the budget as a placeholder until the Department completed its analysis of excess budget authority in the moderate rehabilitation program. As noted above, HUD’s analysis shows that sufficient excess funding exists in the program to make the request for $70 million unnecessary. If the Congress grants HUD the authority it believes it needs to use excess budget authority to cover funding shortfalls in the program, Department officials told us that they would not need this $70 million in amendment funding. According to HUD, the total amount of Section 8 project-based amendment funding needed for fiscal year 1999 is $1.7 billion. These total amendment needs are then reduced by a $463 million offset coming from estimated “recapture amounts”—that is, balances remaining on expiring contracts that may be recaptured and used to fund contract amendments. As a result, HUD’s fiscal year 1999 budget requests $1.3 billion for Section 8 project-based amendments. We do not believe that HUD’s request for Section 8 project-based amendment needs is adequately supported. Furthermore, HUD’s budget request for Section 8 project-based amendment funding substantially exceeds the amounts that HUD’s analyses indicated are needed. The support HUD provided to us for its amendment budget request was an analysis dated April 1997. The analysis was formulated using a methodology that HUD refers to as “leveling,” under which funding shortfalls are spread over the remaining term of the contract rather than beginning in the year the contract is projected to run out of money. 
For example, for a contract costing $1 million a year with 10 years remaining and $9 million available, the $1 million shortfall would be spread in $100,000 increments over the next 10 years, rather than being identified as a shortfall of $1 million in the 10th year. HUD officials told us the goal of the leveling methodology is to enable the Department to request a consistent annual amount to fund amendments and to avoid requesting large amounts in later years. The April 1997 analysis, derived from the Budget Forecast System, which the Department uses to estimate its Section 8 amendment needs for budgeting purposes, indicates a total amendment need of $1.2 billion in fiscal year 1999, or about $500 million less than the total amount identified in HUD’s budget request for fiscal year 1999. HUD officials said that this additional $500 million reflects a policy decision by the Office of Management and Budget and HUD to augment the request because of the long-term funding need for amendments. In addition, an analysis of Section 8 project-based amendment needs that we obtained from HUD in February 1998 shows that substantially higher amounts of recapture funds are projected to become available in the next several years than those that are reflected in the budget. HUD prepared this analysis at our request to address problems that we identified in HUD’s previous analyses of its Section 8 amendment needs. Among other things, we found the previous analyses did not include Section 8 project-based funding that HUD received in its fiscal year 1997 appropriation and erroneously excluded about 1,800 Section 8 contracts. The February 1998 analysis indicates that $2.6 billion in recaptures are projected to become available in fiscal year 1998, compared with the $463 million recapture amount used to offset the 1999 budget request. We are currently reviewing this analysis as part of the work we have underway examining HUD’s unexpended Section 8 project-based balances. 
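The leveling arithmetic described above can be sketched in a short Python fragment. This is an illustrative sketch of the methodology as GAO describes it; the function name and interface are hypothetical and are not part of HUD's actual Budget Forecast System:

```python
def leveling_schedule(annual_cost, years_remaining, funds_available):
    """Spread a contract's projected funding shortfall evenly over its
    remaining term, rather than booking it all in the year the contract
    runs out of money. Returns per-year amendment amounts.

    Illustrative only; not HUD's actual Budget Forecast System logic.
    """
    # Shortfall = total cost over the remaining term minus funds on hand
    shortfall = annual_cost * years_remaining - funds_available
    if shortfall <= 0:
        # Contract is fully funded; no amendment request needed
        return [0.0] * years_remaining
    # Level the shortfall into equal annual increments
    return [shortfall / years_remaining] * years_remaining

# GAO's example: a $1 million/year contract with 10 years remaining and
# $9 million available has a $1 million shortfall, leveled to $100,000/year
schedule = leveling_schedule(1_000_000, 10, 9_000_000)
print(schedule)  # [100000.0, 100000.0, ..., 100000.0] (10 entries)
```

Leveling thus smooths the Department's annual amendment requests, avoiding a single large request in the contract's final year.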
We plan to issue our report in July 1998. For fiscal year 1999, HUD is requesting $958 million to fund its ongoing programs for the homeless and $192 million for 34,000 new Section 8 vouchers for homeless individuals or families. Although congressional concern exists about the proportion of homeless funding spent on supportive services as compared to the amount spent on direct housing assistance, HUD’s request for 34,000 new vouchers for the homeless would increase housing assistance for the homeless. However, HUD has not developed the eligibility standards or other planning criteria for these vouchers that would facilitate program delivery. Congressional concern has been expressed about the proportion of HUD’s homeless funding that is used for supportive services compared to housing assistance. Moreover, a House bill introduced in 1997 proposed placing a cap on the percentage of total funding that grantees can use for services for the homeless. In fiscal year 1996, the latest year for which HUD has detailed information on the allocation of its homeless assistance funds, 51 percent of the competitive funding HUD awarded to grantees was spent on supportive services as opposed to direct housing assistance. Table 2 shows the breakdown between services and housing for three of HUD’s competitive homeless assistance programs. HUD officials explained that they award the grants on the basis of the level of demand from grant applicants, and grantees’ requests for services are high compared to requests for housing. They also speculated that it is difficult for organizations to obtain needed services through other agencies, which is why they may be using HUD resources to fill the gap. HUD officials further commented that funding this need is consistent with the agency’s Continuum of Care approach that seeks to end homelessness by bringing together all parts of the community to provide a coordinated system of care for homeless men, women, and children. 
In commenting on a draft of this testimony, the Department said that in all instances it encourages housing as the end result. With its fiscal year 1999 budget request for $192 million to fund 34,000 new Section 8 vouchers for homeless individuals or families, HUD proposes to increase the amount of funding for direct housing assistance. These vouchers will be used to assist families that have achieved a sufficient level of independence to move to permanent housing that is linked to services. The vouchers are intended for homeless individuals and families who would otherwise have the greatest difficulty securing permanent housing resources, as determined through the approved Continuum of Care approach. However, unlike the Department’s fiscal year 1999 budget request for 50,000 new welfare-to-work vouchers, HUD’s request for new vouchers for the homeless does not describe the criteria that would be used to distribute the vouchers. For example, under HUD’s welfare-to-work voucher proposal, any housing agency requesting permission to distribute vouchers must (1) prepare a plan that includes the criteria to be used to select the recipients and (2) describe the proposed strategy for counseling tenants, providing assistance in seeking housing, and reaching out to landlords. Furthermore, the agency must determine that obtaining tenant-based housing assistance is critical for the applicant to obtain or retain employment and that the applicant is not already receiving tenant-based assistance. If HUD developed similar requirements for the recipients of vouchers for the homeless, the program’s implementation could likely begin shortly after the funding is received, strengthening the program’s efficiency. We believe that the lack of planning raises concerns about how quickly and effectively this program can be implemented. 
HUD, on the other hand, stated that it has a structure in place through its Continuum of Care grant process as well as public housing authorities that are experienced in administering the Section 8 voucher program. Nevertheless, further details by HUD on how such a program will work would be useful in any debate on expanding the housing assistance provided to the homeless. HUD’s fiscal year 1999 budget proposal includes $100 million for a new Community Development Block Grant (CDBG) set-aside—the Regional Connections Initiative (RCI). RCI is intended to help states and localities develop and implement strategic plans that address key regional issues facing the nation’s metropolitan and rural communities. HUD is planning to award grants under the program to states and localities on a competitive basis. HUD’s interest in developing a program designed to encourage and facilitate efforts to address regional issues seems justified in light of the Department’s mission. However, given that RCI is a new initiative, HUD’s budget justification does not provide enough detail to determine whether $100 million is a reasonable funding level. Moreover, the key study (still in draft form) underlying this new initiative does not recommend a significant federal effort to address regional problems because little consensus exists at the local and state levels for such an effort. In addition, the study concluded that in the future, emerging regional efforts could raise questions about the appropriate federal role. According to HUD officials, the RCI funding level was a judgment call and was considered a manageable set-aside under the CDBG program. In addition, HUD officials believe that the $100 million requested for RCI will be awarded in fiscal year 1999. 
However, several tasks need to be accomplished before these funds are committed, including selecting an RCI advisory board of community development experts, writing program regulations, developing a notice of funding availability, allowing applicants time to prepare their proposals, reviewing submitted applications, and deciding which applicants will receive RCI funds. To accomplish these tasks, HUD expects to use expertise from outside the Department to help design and review the RCI grant program in time to allow funds to be awarded in fiscal year 1999. Because of the tasks and coordination necessary, however, we question whether such an ambitious schedule is workable for this new initiative. As welfare reform is implemented throughout the nation, it could have implications for HUD’s future year budgets. HUD estimates that one-third of the households receiving rental assistance from HUD depend on cash welfare assistance for some or all of their income. Under welfare reform, cash assistance programs became time-limited, work-dependent, and generally less available. Because residents pay a portion of their income for rent, any reduction in cash assistance without a commensurate increase in wage income would result in reduced rental payments from tenants. Managers at most of the 18 housing agencies we visited while conducting our ongoing work expressed concern about barriers their residents face in finding employment within their states’ time limits. Under existing program regulations, reductions in tenant rental payments would increase the size of the payments that HUD makes to housing agencies and private landlords on behalf of low-income tenants to make up the difference between the tenants’ rental payments and the housing units’ operating cost or rent. While welfare reform may have a significant impact on HUD’s future year budgets, measuring the potential impact may not be possible. 
One reason is that the impact of welfare reform will vary from state to state and from year to year because states have differing welfare reform provisions, making the development of national estimates of the impact of welfare reform on HUD nearly impossible. In Massachusetts, for example, recipients will begin to hit the state’s time limits for cash assistance in December 1998, while in Minnesota, recipients will not reach the time limits until July 2002. A second reason is that conclusions drawn about welfare reform’s impact on recipients generally may not apply to those who also receive housing assistance because evidence suggests that welfare recipients receiving housing assistance may have greater difficulty finding and retaining employment than other welfare recipients. Furthermore, HUD does not collect the detailed data on recipients’ education, work, and welfare histories needed to assess likely outcomes for its tenants. Finally, while the general health of the economy is a major factor in the recent decline in welfare caseloads, the future course of the economy cannot be predicted with any certainty. We believe that HUD is generally moving toward more supportable budget estimates. For example, HUD recently prepared a new analysis of the Section 8 moderate rehabilitation program showing that sufficient excess budget authority exists to cover both program shortfalls and unexpected costs and still have $439 million remaining in excess budget authority. This means that a separate funding request for amendments may not be necessary. In the Section 8 project-based program, however, HUD’s budget estimate is not consistent with its analysis of amendment needs. As HUD continues to refine its analyses in these areas, the Department will have the opportunity to amend its budget estimate before the Congress votes on HUD’s appropriation bill in the fall. 
In addition, we found that for some new initiatives—such as vouchers for the homeless and the Regional Connections Initiative—to be effective in fiscal year 1999, HUD may need to complete appropriate and perhaps ambitious planning. The Congress may wish to consider reducing HUD’s request for Section 8 contract renewals to account for the $439 million in excess budget authority in the Section 8 moderate rehabilitation program. Because HUD has set aside excess budget authority in the moderate rehabilitation program, the Congress may also wish to consider not funding HUD’s request for $70 million to amend moderate rehabilitation contracts. Finally, in reviewing HUD’s $192 million request for vouchers for the homeless and its $100 million request for the Regional Connections Initiative, the Congress may wish to seek assurances from HUD that these programs will be ready to effectively commit funds. We provided a draft of this statement to HUD for its review and comment. The Department provided comments on several issues, including its request for new Section 8 vouchers for the homeless and its Regional Connections Initiative. In response to our concerns about the planning accomplished for these two programs, HUD said that it expects housing authorities to compete for and administer the vouchers for the homeless. For the Regional Connections Initiative, HUD said that it recognizes that a limited number of localities and states are ready and willing to participate in this effort, but that the $100 million proposed funding will still accommodate a meaningful initiative. We made appropriate changes in the statement to reflect HUD’s concern; however, we continue to believe that the quality of planning for these new efforts will be critical to their effectiveness in fiscal year 1999. 
The Chairman and Ranking Minority Member of the Subcommittee on VA, HUD, and Independent Agencies, Senate Committee on Appropriations, requested that we assess the reasonableness of selected aspects of HUD’s fiscal year 1999 budget request. To accomplish this task, we reviewed HUD’s February 1998 Congressional Justifications for 1999 Estimates. We also interviewed appropriate officials in HUD’s Offices of the Chief Financial Officer, Public and Indian Housing, Housing, and Community Planning and Development to obtain more information on planned uses for funding requested. When available, we reviewed this additional information. Finally, we based portions of this statement on our recently issued report on HUD’s financial management of its Section 8 tenant-based program as well as on our current work focusing on HUD’s financial management of the Section 8 moderate rehabilitation and project-based programs and the impact of welfare reform on public and assisted housing. We conducted our work in February and March 1998 in accordance with generally accepted government auditing standards.
GAO discussed the Department of Housing and Urban Development's (HUD) fiscal year (FY) 1999 budget request, focusing on: (1) actions HUD has taken or plans to take to improve its budget estimates; (2) the reasonableness of HUD's estimate for Section 8 tenant-based assistance; (3) HUD's justification for its Section 8 project-based amendment request; (4) HUD's request for funding to assist the homeless; (5) HUD's request for $100 million to fund its new Regional Connections Initiative; and (6) the future budgetary implication of welfare reform. GAO noted that: (1) HUD recognizes the need to improve its budget estimating process with better oversight and documentation and has started to improve its process by modifying its organizational structure to increase oversight among the staff responsible for formulating budget estimates; (2) however, many of HUD's planned improvements were not implemented in time to affect HUD's FY 1999 budget estimate but, according to HUD officials, will be in place to enhance the FY 2000 process; (3) HUD's request for $4.7 billion to renew Section 8 tenant-based assisted housing contracts for FY 1999 could be reduced by $439 million; (4) this is the amount of excess budget authority in the Section 8 moderate rehabilitation program that could be used in place of new budget authority to renew expiring housing assistance contracts; (5) in addition, because this excess budget authority exists, HUD may not need the $70 million it has requested for Section 8 moderate rehabilitation amendment funding; (6) HUD's budget request for $1.3 billion in Section 8 project-based amendment funding--funds needed to cover shortfalls in long-term Section 8 contracts--substantially exceeds the amounts that HUD's analyses indicated are needed; (7) to help address the needs of the nation's homeless, HUD has requested 34,000 new Section 8 vouchers; (8) although these new vouchers will increase the amount of direct housing assistance for the homeless, HUD has not 
developed the eligibility standards or other planning criteria for these new vouchers that would facilitate implementing the program; (9) HUD's budget request for $100 million for the Regional Connections Initiative (RCI), a new set-aside within the Community Development Block Grant program to address key regional issues, does not provide enough detail to indicate whether this is a reasonable funding level for the program; (10) the additional support that HUD provided, however, does not recommend a significant federal effort to address regional problems; (11) nevertheless, HUD officials believe the funding level is a manageable set aside; (12) because of the work required to initiate a new program like this, GAO questions whether the funds can be awarded in FY 1999; (13) welfare reform may have a substantial future impact on HUD's spending for assisted housing for low-income households; and (14) however, estimating the impact may not be possible because the states' differing welfare reform provisions will create varied state-by-state and year-by-year impacts.
Intellectual property is a category of intangible rights that protect commercially valuable products of the human intellect, such as inventions; literary and artistic works; and symbols, names, images, and designs used in commerce. U.S. protection of intellectual property has a long history: Article 1 of the U.S. Constitution grants the Congress the power “to promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” Copyrights, patents, and trademarks are the most common forms of protective rights for intellectual property. Protection is granted by guaranteeing proprietors limited exclusive rights to whatever economic reward the market may provide for their creations and products. Ensuring the protection of IPR encourages the introduction of innovative products and creative works to the public. Intellectual property is an important component of the U.S. economy, and the United States is an acknowledged global leader in the creation of intellectual property. According to USTR, “Americans are the world’s leading innovators, and our ideas and intellectual property are a key ingredient to our competitiveness and prosperity.” However, industries estimate annual losses stemming from violations of intellectual property rights overseas are substantial. Further, counterfeiting of products such as pharmaceuticals and food items fuels public health and safety concerns. USTR’s Special 301 annual reports on the adequacy and effectiveness of intellectual property protection around the world demonstrate that, from a U.S. perspective, intellectual property protection is weak in developed as well as developing countries and that the willingness of countries to address intellectual property issues varies greatly. U.S. 
laws have been passed that address the need for strong intellectual property protection overseas and provide remedies to be applied against countries that do not provide adequate or effective protection. For example, the Omnibus Trade and Competitiveness Act of 1988 allows the U.S. government to impose trade sanctions against such countries. Eight federal agencies, the FBI, and the USPTO undertake the primary U.S. government activities to protect and enforce U.S. intellectual property rights overseas. These agencies are the Departments of Commerce, State, Justice, and Homeland Security; USTR; the Copyright Office; USAID; and USITC. The U.S. government also participates in international organizations that address intellectual property issues, such as the World Trade Organization (WTO), the World Intellectual Property Organization (WIPO), and the World Customs Organization (WCO). The efforts of multiple U.S. agencies to protect U.S. intellectual property overseas fall into three general categories—policy initiatives, training and technical assistance, and U.S. law enforcement actions. USTR leads most U.S. policy activities, in particular the Special 301 review of intellectual property protection abroad. Most agencies involved in efforts to protect U.S. IPR overseas conduct training and technical assistance activities. However, the number of agencies involved in U.S. law enforcement actions is more limited, and the nature of these activities differs from other U.S. government actions related to intellectual property protection. U.S. policy initiatives to increase intellectual property protection around the world are primarily led by USTR, in coordination with the Departments of State and Commerce, USPTO, and the Copyright Office, among other agencies. 
These efforts are wide ranging and include the annual Special 301 review of intellectual property protection abroad, use of trade preference programs for developing countries, negotiation of agreements that address intellectual property, and several other activities. A centerpiece of policy activities is the annual Special 301 process. “Special 301” refers to certain provisions of the Trade Act of 1974, as amended, that require USTR to annually identify foreign countries that deny adequate and effective protection of intellectual property rights or fair and equitable market access for U.S. persons who rely on intellectual property protection. USTR identifies these countries with substantial assistance from industry and U.S. agencies and publishes the results of its reviews in an annual report. Once a pool of such countries has been determined, the USTR, in coordination with numerous agencies, is required to decide which, if any, of these countries should be designated as a Priority Foreign Country (PFC). If a trading partner is identified as a PFC, USTR must decide within 30 days whether to initiate an investigation of those acts, policies, and practices that were the basis for identifying the country as a PFC. Such an investigation can lead to actions such as negotiating separate intellectual property understandings or agreements between the United States and the PFC or implementing trade sanctions by the U.S. government against the PFC if no satisfactory outcome is reached. In its annual Special 301 report, USTR also lists countries with notable but less serious intellectual property protection problems as, in order of decreasing severity, “Priority Watch List” countries and “Watch List” countries. Unlike PFCs, countries cited on these lists are not subject to automatic consideration for investigation. Between 1994 and 2004, the U.S. 
government designated three countries as PFCs—China, Paraguay, and Ukraine—as a result of intellectual property reviews (see table 1). China was initially designated as a PFC in 1994 owing to acute copyright piracy, trademark infringements, and poor enforcement. Paraguay was designated as a PFC in 1998 owing to high levels of piracy and counterfeiting resulting from an absence of effective enforcement, its status as a major point of transshipment for pirated or counterfeit products to other South American countries, and its inadequate IPR laws. The U.S. government negotiated separate bilateral intellectual property agreements with both countries to address these problems. These agreements are subject to annual monitoring, with progress cited in each year’s Special 301 report. Ukraine, where optical media piracy was prevalent, was designated a PFC in 2001. No mutual solution was found, and in January 2002, the U.S. government imposed trade sanctions in the form of prohibitive tariffs (100 percent) aimed at stopping $75 million worth of certain imports from Ukraine over time. These sanctions negatively affected Ukraine’s exports to the United States. U.S. data show that overall imports from Ukraine experienced a dramatic 70 percent decline from 2000 to 2003. U.S. trade data also show that U.S. imports of the items facing punitive tariffs (with one exception) declined by $57 million from 2000 to 2003. Since 2001, Ukraine has remained the sole PFC and the sanctions have remained in place. In early 2002, according to Department of State officials, Ukraine passed an optical disc licensing law—a key U.S. factor in originally designating Ukraine as a PFC. Further, the Ukrainian government reportedly closed plants that were pirating optical media products. However, the U.S. government remains concerned that the optical disc law is inadequate. Although it designated only three countries as PFCs between 1994 and 2004, the U.S. 
government has cited numerous countries—approximately 15 per year recently—on its Special 301 Priority Watch List. Of particular note, the European Union has been placed on this list every year since 1994, while India and Argentina have been on the list for 10 and 9 years, respectively, during that period. By virtue of membership in the WTO, the United States and other countries commit themselves not to take WTO-inconsistent unilateral action against possible trade violations involving IPR protections covered by the WTO but to instead seek recourse under the WTO’s dispute settlement system and its rules and procedures. This may impact any U.S. government decision regarding whether to retaliate against WTO members unilaterally with sanctions under the Special 301 process when those countries’ IPR problems are viewed as serious. U.S. IPR policy efforts also include use of the Generalized System of Preferences (GSP) and other trade preference programs administered by USTR. The GSP is a unilateral program intended to promote development through trade, rather than through traditional aid programs, by eliminating tariffs on certain imports from eligible developing countries. The GSP was originally authorized by the Trade Act of 1974; when it was reauthorized by the Trade and Tariff Act of 1984, new “country practice” eligibility criteria were added, including a requirement that beneficiary countries provide adequate and effective IPR protection. Petitions to withdraw GSP benefits from countries that do not meet this criterion can be filed as part of an annual GSP review and are typically filed by industry interests. Petitions are considered through an interagency process led by USTR, with input from the Departments of State and Commerce, among others. In administering the GSP program, USTR has led reviews of the IPR regimes of numerous countries and has removed benefits from some beneficiary countries because of IPR problems. 
Ukraine lost its GSP benefits in August 2001 (approximately 6 months before the imposition of sanctions that stemmed from Ukraine’s designation as a PFC under the Special 301 process) because of inadequate protection for optical media, and these benefits have not been reinstated. Adequate and effective IPR protection is required by other trade preference programs, including the Andean Trade Preference Act (ATPA), which provides benefits for Bolivia, Colombia, Ecuador, and Peru; the African Growth and Opportunity Act (AGOA); and the Caribbean Basin Initiative (CBI). USTR reviews IPR protection provided under these trade preference programs, and, according to USTR officials, GSP, which includes numerous developing countries, has been used more actively (in terms of reviews and actual removal of benefits) than ATPA, CBI, and AGOA. In fact, according to USTR officials, benefits have never been removed under ATPA or AGOA owing to IPR concerns. However, USTR officials emphasized that these programs and their provisions for intellectual property protection have been used effectively nevertheless. For example, one USTR official noted that in response to U.S. government concerns regarding whether Colombia was meeting ATPA eligibility criteria, the Colombian government implemented measures to, among other things, ensure the legitimate use and licensing of software by government agencies. USTR also pointed out that in Mauritius, an unresolved trademark counterfeiting concern for U.S. industry was specifically raised with the government of Mauritius as a follow-up to the annual review of the country’s eligibility for preferences under AGOA. Following bilateral discussions, this counterfeiting concern was addressed and resolved. Since 1990, the U.S. government has negotiated 25 IPR-specific agreements or understandings with foreign governments. 
USTR noted that USPTO and other agencies are responsible for leading negotiating efforts for such agreements (and the Copyright Office participates in negotiations as an adviser). According to USTR officials, IPR-specific agreements are sometimes negotiated in response to particular problems in certain countries and are monitored when a relevant issue arises. USTR has also negotiated an additional 23 bilateral trade agreements—primarily with countries of the former Soviet Union or Eastern Europe—that contain IPR provisions (see app. II for a listing of these agreements). In addition, the U.S. government, primarily USTR and USPTO (with input from the Copyright Office) participated actively in negotiating the WTO’s Agreement on Trade-Related Aspects of Intellectual Property (TRIPS), which came into force in 1995 and broadly governs the multilateral protection of IPR. TRIPS established new or improved standards of protection in various areas of intellectual property and provides for enforcement measures. Most of the U.S. government’s IPR-specific bilateral agreements and understandings were signed prior to the implementation of TRIPS or before the other country involved in each agreement joined, or acceded to, the WTO and was thus bound by TRIPS commitments. As a result, according to a USTR official, some U.S. bilateral agreements have become less relevant since TRIPS was implemented. One of USTR’s priorities in recent years has been negotiating free trade agreements (FTAs). Since 2000, USTR has completed negotiations for FTAs with Australia, Bahrain, Central America, Chile, Jordan, Morocco, and Singapore. 
According to officials at USTR, these agreements offer protection beyond that required in TRIPS, including, for example, adherence to new WIPO Internet treaties, a longer minimum time period for copyright protection, additional penalties for circumventing technological measures controlling access to copyrighted materials, transparent procedures for protection of trademarks, stronger protection for well-known marks, patent protection for plants and animals, protection against arbitrary revocation of patents, new provisions dealing with domain name disputes, and increased enforcement measures. A formal private sector advisory committee on IPR issues has provided feedback to the U.S. government on free trade agreement negotiations, including reports on the impact of free trade agreements on IPR industries in the United States. The U.S. government is actively involved in the activities of the WTO, WIPO, and WCO that address IPR issues. The U.S. government participates in the WTO primarily through the efforts of the USTR offices in Washington, D.C., and Geneva and participates in WIPO activities through the Department of State's Mission to the United Nations in Geneva and through the Copyright Office and the USPTO. The Department of Homeland Security (DHS) works with the WCO on border enforcement issues. The WTO, an international organization with 147 member states, is involved with IPR primarily through its administration of TRIPS. In addition to bringing formal TRIPS disputes to the WTO (discussed in the following section on strengthened foreign IPR laws), the U.S. government participates in the WTO's TRIPS Council. The council, which comprises all WTO members, is responsible for monitoring the operation of the TRIPS agreement and can be used by members as a forum for mutual consultation about TRIPS implementation. Recently the council has addressed issues such as TRIPS and public health. A WTO IPR official stated that the U.S.
government is the most active “pro-IPR” delegate during council activities. The U.S. government is also a major contributor to reviews of WTO members’ overall country trade policies; these reviews are intended to facilitate the smooth functioning of the multilateral trading system by enhancing the transparency of members’ trade policies. All WTO member countries are reviewed, and the frequency of each country’s review varies according to its share of world trade. According to a USTR official in Geneva, IPR is often a central topic of discussion during the trade policy reviews, and the U.S. government poses questions regarding a country’s compliance with TRIPS when relevant. The United States also provides input as countries take steps to accede to the WTO, and, according to the USTR official, IPR is always a primary issue during this process. As of June 2004, 26 countries were working toward WTO accession. The Department of State, the Copyright Office, and USPTO actively participate in the activities of WIPO, a specialized United Nations agency with 180 member states that promotes the use and protection of intellectual property. Of particular note, WIPO is responsible for the creation of two “Internet treaties” that entered into force in 2002. In addition, WIPO administers the 1970 Patent Cooperation Treaty (PCT), which makes it possible to seek patent protection for an invention simultaneously in each of a large number of countries by filing an “international” patent application. According to a WIPO Vice Director General, the State Department’s U.S. Mission in Geneva and USPTO work closely with WIPO, and the U.S. government has actively participated in WIPO activities and monitored the use of WIPO’s budget. The Copyright Office also participates in various activities of the WIPO General Assembly and WIPO committees and groups, including the WIPO Standing Committee on Copyright and Related Rights. 
USPTO has participated in WIPO efforts such as the negotiation of the Internet treaties (the Copyright Office was also involved in this effort) and also conducts joint USPTO- WIPO training events. In addition, DHS works with the WCO regarding IPR protection. DHS participates in the WCO’s IPR Strategic Group, which was developed as a joint venture with international business sponsors to help member customs administrations to improve the efficiency and effectiveness of their IPR border enforcement programs. The IPR Strategic Group meets quarterly to coordinate its activities, discuss current issues on IPR border enforcement, and advise member customs administrations regarding implementation of border measures under TRIPS. Further, a DHS official emphasized that DHS has been involved in drafting WCO model IPR legislation and strategic plans geared towards global IPR protection and otherwise helping foreign countries develop the tools necessary for effective border enforcement programs. In countries where IPR problems persist, U.S. government officials maintain a regular dialogue with foreign government representatives. In addition to the bilateral discussions that are held as a result of the Special 301 process and other specific initiatives, U.S. officials address IPR as part of regular bilateral relations. We also noted that U.S. government officials at U.S. embassies overseas take the initiative, in coordination with U.S. agencies in Washington, D.C., to pursue IPR with foreign officials. For example, according to officials at the U.S. Embassy in Moscow, the economic section holds interagency IPR coordination meetings and has met regularly with the Russian ministry responsible for IPR issues to discuss U.S. concerns. In Ukraine, State Department officials told us that they communicate regularly with the Ukraine government as part of a dialogue regarding the actions needed for the removal of Special 301 sanctions. U.S. 
embassies also undertake various public awareness activities and campaigns aimed at increasing support for intellectual property in the general public as well as among specific populations, such as law enforcement personnel, in foreign countries. Further, staff from the Departments of State and Commerce at U.S. embassies interact with U.S. companies overseas and work to assist them with commercial problems, including IPR concerns, and have at times raised specific industry concerns with foreign officials. Finally, a Justice official told us that during the past 2 years, Justice attorneys engaged high-level law enforcement officials in China, Brazil, and Poland in an effort to bolster coordination on cross-border IPR cases. Diplomatic efforts addressing IPR have also included actions by senior U.S. government officials. For example, a senior official at the Commerce Department met in 2004 with the Brazilian minister responsible for industrial property issues, such as patents and trademarks, to discuss collaboration and technical assistance opportunities. In China, the U.S. Ambassador places a great emphasis on IPR and has organized an interagency task force that will work to implement an IPR Action Plan. In addition, presidential-level communication regarding IPR has occurred with some countries. For instance, according to Department of State sources, the Presidents of the United States and Russia discussed IPR, among other issues, when they met in September 2003. Further, USTR officials told us that the Presidents of the United States and Paraguay had IPR as an agenda item when they met in the fall of 2003. Most of the agencies involved in efforts to promote or protect IPR overseas engage in some training or technical assistance activities. Key activities to develop and promote enhanced IPR protection in foreign countries are undertaken by the Departments of Commerce, Homeland Security, Justice, and State; the FBI; USPTO; the Copyright Office; and USAID. 
These agencies also participate in an IPR Training Coordination Group. Training events sponsored by U.S. agencies to promote the enforcement of intellectual property rights have included enforcement programs for foreign police and customs officials, workshops on legal reform, and joint government-industry events. According to a State Department official, U.S. government agencies, including USPTO, the Department of Commerce's Commercial Law Development Program, and the Departments of Justice and Homeland Security, have conducted intellectual property training for a number of countries concerning bilateral and multilateral intellectual property commitments, including enforcement, during the past few years. For example, intellectual property training has been conducted by a number of agencies over the last year in Poland, China, Morocco, Italy, Jordan, Turkey, and Mexico. We attended a joint USPTO-WIPO training event in October 2003 in Washington, D.C., that covered U.S. and WTO patent, copyright, and trademark laws and enforcement. About 35 participants from numerous countries, ranging from supreme court judges to members of national police forces, attended the event. An official at the State Department observed that the Special 301 report is an important factor in determining training priorities. Other agency officials noted additional factors, including embassy input, cost, and requirements of trade and investment agreements. Although regularly sponsored by a single agency, individual training events often involve participants from other agencies and the private sector. In addition to sponsoring seminars and short-term programs, agencies sponsor longer-term programs for developing improved intellectual property protection in other countries.
For example, USAID funded two multiyear programs, the first of which began in 1996, aimed at improving the intellectual property regime in Egypt through public awareness campaigns, training, and technical assistance in developing intellectual property legislation and establishing a modern patent and trademark office. USAID has also sponsored longer-term bilateral programs that are aimed at promoting biotechnology and address relevant IPR issues such as plant variety protection. Private sector officials in Brazil told us that they believed the longer-term programs sponsored by USAID elsewhere would be helpful in Brazil. In addition to USAID, other U.S. agencies that sponsor training also provide other types of technical assistance in support of intellectual property rights. For example, the Copyright Office and USPTO revise and provide comments on proposed IPR legislation. Training and technical assistance activities that focus more broadly on institution building, biotechnology, organized crime, and other law enforcement issues may also support improved intellectual property enforcement. A small number of agencies are involved in enforcing U.S. intellectual property laws. Working in an environment where counterterrorism is the central priority, the FBI and the Departments of Justice and Homeland Security take actions that include engaging in multicountry investigations involving intellectual property violations and seizing goods that violate intellectual property rights at U.S. ports of entry. In addition, the USITC is responsible for some enforcement activities involving patents and trademarks. Although officials at the FBI, DHS, and Justice have emphasized that counterterrorism is the overriding law enforcement priority, these agencies nonetheless undertake IPR investigations that involve foreign connections. For example, the Department of Justice has an office that directly addresses international IPR problems. 
Justice has been involved with international investigation and prosecution efforts and, according to a Justice official, has become more aggressive in recent years. For example, Justice and the FBI recently coordinated an undercover IPR investigation, with the involvement of foreign law enforcement agencies. The investigation focused on individuals and organizations, known as “warez” release groups, that specialize in the Internet distribution of pirated materials. In April 2004, these investigations resulted in 120 simultaneous searches worldwide (80 in the United States) by law enforcement entities from 10 foreign countries and the United States in an effort known as “Operation Fastlink.” Law enforcement officials told us that IPR-related investigations with an international component can be instigated by, for example, industry complaints to agency headquarters or field offices. Investigations are pursued if criminal activity is suspected. U.S. officials noted that foreign law enforcement action may be encouraged by the U.S. government if an investigation results in evidence demonstrating that someone has violated U.S. law and if evidence in furtherance of the crime is located overseas. A Justice official added that international investigations are pursued when there is reason to believe that foreign authorities will take action and that additional impact, such as raising public awareness about IPR crimes, can be achieved. Evidence can be developed through investigative cooperation between U.S. and foreign law enforcement. In addition, the Justice official emphasized that the department also supports prosecutorial efforts in foreign countries. International cooperation between the United States and other countries can be facilitated through Mutual Legal Assistance Treaties (MLATs), which are designed to facilitate the exchange of information and evidence for use in criminal investigations and prosecutions. 
MLATs include the power to summon witnesses, compel production of documents and other real evidence, issue search warrants, and serve process. A Justice official emphasized that informal international cooperation can also be extremely productive. Although investigations can result in international actions such as those cited above, law enforcement officials from the FBI told us that they cannot determine the number of past or present IPR cases with an international component because they do not track or categorize cases according to this factor. DHS officials emphasized that a key component of their enforcement authority is a “border nexus.” Investigations have an international component established when counterfeit goods are brought into the United States, and DHS officials noted that it is a rare exception when DHS IPR investigations do not have an international component. However, DHS does not track cases by a specific foreign connection. The overall number of IPR-oriented investigations that have been pursued by foreign authorities as a result of DHS efforts is unknown. DHS seizures of goods that violated IPR totaled more than $90 million in fiscal year 2003. While the types of imported products seized have varied little from year to year (in recent years, products such as cigarettes, wearing apparel, watches, and media products—CDs, DVDs, and tapes— have been key products), the value of seizures for some of these products has varied greatly. For example, in fiscal year 1999, the value of seized media products—for example, CDs, DVDs, and tapes—was, at nearly $40 million, notably higher than the value of any other product; by 2003, the value of seized counterfeit cigarettes, at more than $40 million, was by far the highest, while media products accounted for less than $10 million in seizures. Seizures of IPR-infringing goods have involved imports primarily from Asia. 
In fiscal year 2003, goods from China accounted for about two-thirds of the value of all IPR seizures, many of them shipments of cigarettes. Other seized goods from Asia that year originated in Hong Kong and Korea. DHS has highlighted particular recent seizures, such as an estimated $500,000 in electrically heated coffee mugs bearing counterfeit Underwriters Laboratories (UL) labels and an estimated $644,000 in pirated video game CDs. A DHS official pointed out that providing protection against IPR-infringing imported goods for some U.S. companies—entertainment companies in particular—can be difficult, because companies often fail to record their trademarks and copyrights with DHS. The USITC investigates and adjudicates Section 337 cases, which involve allegations of certain unfair practices in import trade, generally related to patent or registered trademark infringement. Although the cases must involve merchandise originating overseas, both complainants and respondents can be from any country as long as the complainant owns and exploits an intellectual property right in the United States. U.S. administrative law judges are responsible for hearing cases and issuing an initial decision, which is then reviewed and issued, modified, or rejected by the USITC. If a violation has occurred, remedies include directing DHS officials to exclude infringing articles from entering the United States. The USITC may issue cease-and-desist orders to the violating parties. Violations of cease-and-desist orders can result in civil penalties. As of June 2004, exclusion orders remained in effect for 51 concluded Section 337 investigations, excluding from U.S. entry goods such as certain toothbrushes, memory chips, and video game accessories that were found to violate a U.S. intellectual property right. U.S. efforts have contributed to strengthened foreign IPR laws and international IPR obligations, and, while enforcement overseas remains weak, U.S.
industry groups are generally supportive of U.S. efforts. U.S. actions are viewed as aggressive, and Special 301 is characterized as a useful tool in encouraging improvements overseas. However, the specific impact of many U.S. activities, such as diplomatic efforts or training and technical assistance, can be difficult to measure. Further, despite the progress that has been achieved, enforcement of IPR in many countries remains weak and, as a result, has become a U.S. government priority. Although U.S. industries recognize that problems remain, they acknowledge the many actions taken by the U.S. government, and industry representatives that we contacted in the United States and abroad were generally supportive of the U.S. efforts to pursue intellectual property protection overseas. Several representatives of major intellectual property industry associations stated that the United States is the most aggressive promoter of intellectual property rights in the world; an IPR official at the WTO concurred with this assessment, as did foreign officials. The efforts of U.S. agencies have contributed to the establishment of strengthened intellectual property legislation in many foreign countries. The United States has realized progress through bilateral efforts. For example, the Special 301 review has been cited by industry as facilitating the introduction or strengthening of IPR laws around the world over the past 15 years. In the 2004 Special 301 report, USTR noted that Poland and the Philippines had recently passed optical disc legislation aimed at combating optical media piracy; the 2003 Special 301 report had cited both countries for a lack of such legislation. Special 301 is cited by USTR and industry as an effective tool in alerting a country that it has trade problems with the United States, which is a key trading partner for numerous nations. Industry and USTR officials pointed out that countries are eager to avoid being publicly classified as problem nations. 
Further, according to U.S. government officials, incremental “invisible” changes take place behind the scenes as countries take actions to improve their standing on the Special 301 listing prior to its publication. USTR notes that legislative improvements have been widespread but also cites other accomplishments, such as raids against pirates and counterfeiters in Poland and Taiwan, resulting from U.S. attention and the Special 301 process. However, Special 301 can have an alienating effect when countries believe they have made substantial improvements in their IPR regimes but the report still cites them as key problem countries. According to some officials we spoke with in Brazil and Ukraine, this happened in their countries. For example, although Ukrainian government officials we spoke with stated their desire to further respond to U.S. concerns, they expressed the view that the sanctions have run their course. They also said that the Ukrainian government cannot understand why Ukraine was targeted for sanctions while other countries where U.S. industry losses are higher have not been targeted. A USTR official responsible for IPR issues informed us that Ukraine was sanctioned because of IPR problems that the U.S. government views as serious. Additional bilateral measures are cited as successful in encouraging new improvements overseas in the framework for IPR protection. For example, following a 1998 U.S. executive order directing U.S. government agencies to ensure the legitimate use of software, USTR addressed this issue with foreign governments and has reportedly achieved progress in curbing this type of IPR violation. According to USTR, more than 20 foreign governments have issued decrees mandating that government ministries use only authorized software.
As another example, the negotiation of FTAs has been cited by government and IPR industry officials as a useful tool, particularly as such agreements require IPR protections, including protection for digital products, beyond what is required in TRIPS. However, because most FTAs have been negotiated within the past 5 years, their long-term impact remains to be seen. U.S. efforts through multilateral forums have also had positive effects. For example, as a result of TRIPS obligations—which the U.S. government was instrumental in negotiating—many developing countries have improved their statutory systems for the protection of intellectual property. China, for instance, revised its intellectual property laws and regulations to meet its WTO TRIPS commitments. Further, in Ukraine and Russia, government officials told us that improvements to their IPR legislation were part of an effort to accede to the WTO. U.S. agencies have assisted other developing countries in drafting TRIPS-compliant laws. In addition, a WTO member country can bring disputes over TRIPS compliance to the WTO through that organization’s dispute settlement mechanism. The U.S. government has exercised this right and has brought more TRIPS cases to the WTO for resolution than any other WTO member. Since 1996, the United States has brought a total of 12 TRIPS-related cases against 11 countries and the European Community (EC) to the WTO (see app. III for a listing of these cases). Of these cases, 8 were resolved by mutually agreed solutions between the parties before going through the entire dispute settlement process—the preferred outcome, according to a USTR official. In nearly all of these cases, U.S. concerns were addressed via changes in laws or regulations by the other party. Only 2 have resulted in the issuance of a final decision, or panel report, both of which were favorable rulings for the United States.
A case involving Argentina has been partially settled, with consultations between the countries ongoing, and another case, regarding an EC regulation protecting geographical indications, is currently in panel proceedings. Although persistent U.S. efforts have contributed to positive developments, it can be difficult to precisely measure the impact of specific U.S. activities such as policy efforts or training assistance programs. U.S. activities are not conducted in isolation but are part of the spectrum of political considerations in a foreign country. Although regular efforts such as the annual Special 301 review or diplomatic contact may create incentives for countries to improve intellectual property protection, other factors, such as countries’ own political interests, may contribute to or hinder improvements. Therefore, it can be difficult to measure changes resulting from U.S. efforts alone. For example, China revised its intellectual property laws as a result of its accession to the WTO. Although China had for some time been under pressure from the United States to improve its intellectual property protection, revisions to its intellectual property legislation were also called for by its newly acquired WTO commitments. Thus, it is nearly impossible to attribute any of these developments to particular factors or to precisely measure the influence of individual factors on China’s decision to reform. Further, officials at the U.S. Embassy in Moscow have emphasized that the regular U.S. focus on IPR issues has raised the profile of the issue with the Russian government—a positive development. However, once again, it is difficult to determine the specific current and future effects of this development on intellectual property protection.
Nonetheless, despite these limitations, several agency officials we spoke with said that these activities are important and contribute to incremental changes in IPR protection (such as legislative improvements to Russia’s copyright law that were enacted in July 2004). A Commerce official also noted that regular contacts by U.S. government officials with their foreign counterparts have apparently helped some individual U.S. companies seeking to defend patent or trademark rights overseas by reminding foreign officials that their administrative proceedings for such protection are under U.S. scrutiny. Regarding training activities, officials at agencies that provide regular training reported using post-training questionnaires completed by attendees to evaluate the events, but several noted that beyond these efforts, assessing the impact of training is challenging. An official at USPTO stated that although he does not believe it is possible to quantify fully the impact of USPTO training programs, accumulated anecdotal evidence from embassies and the private sector has led the office to believe that the activities are useful and have resulted in improvements in IPR enforcement. USPTO recently began sending impact evaluation questionnaires to training attendees 1 year after the training to try to gather more information on long-term impact. However, a low response rate has thus far limited the effectiveness of this effort. Officials from the Departments of State and Commerce also pointed out anecdotal evidence that training and technical assistance activities are having a positive impact on the protection of intellectual property overseas. Although some industry officials raised criticisms or offered suggestions for improving training, including using technology to offer more long-distance training and encouraging greater USAID involvement in coordination efforts, many were supportive of U.S. training efforts.
Despite improvements in intellectual property laws, the enforcement of intellectual property rights remains weak in many countries, and U.S. government and industry sources note that improving enforcement overseas is now a key priority. USTR’s most recent Special 301 report states that “although several countries have taken positive steps to improve their IPR regimes, the lack of IPR protection and enforcement continues to be a global problem.” For example, although the Chinese government has improved its statutory IPR regime, USTR remains concerned about enforcement in that country. According to USTR, counterfeiting and piracy remain rampant in China and increasing amounts of counterfeit and pirated products are being exported from China. USTR’s 2004 Special 301 report states that “[a]ddressing weak IPR protection and enforcement in China is one of the Administration’s top priorities.” Further, Brazil has adopted modern copyright legislation that appears to be generally consistent with TRIPS, but it has not undertaken adequate enforcement actions, according to USTR’s 2003 Special 301 Report. In addition, as noted above, although Ukraine has shut down offending domestic optical media production facilities, pirated products continue to pervade Ukraine, and, according to USTR’s 2004 Special 301 Report, Ukraine is also a major trans-shipment point and storage location for illegal optical media produced in Russia and elsewhere as a result of weak border enforcement efforts (see fig. 1). An industry official pointed out that addressing foreign enforcement problems is a difficult issue for the U.S. government. Although U.S. law enforcement does undertake international cooperative activities to enforce intellectual property rights overseas, executing these efforts can prove difficult. For example, according to DHS and Justice officials, U.S. efforts to investigate IPR violations overseas are complicated by a lack of jurisdiction as well as by the fact that U.S.
officials must convince foreign officials to take action. Further, a DHS official noted that in some cases, activities defined as criminal in the United States are not viewed as an infringement by other countries, and U.S. law enforcement agencies can therefore do nothing. In particular, this official cited China as a country that has not cooperated in investigating IPR violations. However, according to DHS, the Chinese government recently assisted DHS in an undercover IPR criminal investigation (targeting a major international counterfeiting network that distributed counterfeit motion pictures worldwide) that resulted in multiple arrests and seizures. While less constrained than law enforcement, training and technical assistance activities may also be unable to achieve the desired improvements in IPR enforcement in some cases, even when considerable U.S. assistance is provided. For example, despite USAID’s long-term commitment to strengthening IPR protection in Egypt with training and technical assistance programs, Egypt was elevated to the Priority Watch List in the 2004 Special 301 report and IPR enforcement problems clearly persist. Despite the weakness of IPR enforcement in many countries, industry groups we contacted that represent the intellectual property concerns of U.S. industries were generally supportive of U.S. government efforts to protect U.S. intellectual property overseas. Numerous industry representatives in the United States and overseas expressed satisfaction with a number of U.S. activities as well as with their interactions and collaborations with U.S. agencies and embassies in support of IPR. Industry representatives have been particularly supportive of the Special 301 process, and many credited it for IPR improvements worldwide. According to an official from a key industry association, Special 301 “is a great statutory tool, it leads to strong and effective interagency coordination, and it gets results.” Industry associations overseas and in the United States
support the Special 301 process with information based on their experiences in foreign countries. An entertainment software industry official stated that the U.S. government has “consistently demonstrated their strong and continuing commitment to creators…pressing for the highest attainable standards of protection for intellectual property rights….One especially valuable tool has been the Special 301 review process.” Other representatives have advocated increased use of leverage provided by trade preference programs, particularly the GSP program. Industry association officials in the United States and private sector officials in Brazil, Russia, and Ukraine also expressed support for U.S. IPR training activities, despite limited evidence of long-term impact. Industry associations regularly collaborate with U.S. agencies to sponsor and participate in training events for foreign officials. A number of government and law enforcement officials in our case study countries commented that training and seminars sponsored by the U.S. government were valuable as forums for learning about IPR. Others, including private sector officials, commented on the importance of training as an opportunity for networking with other officials and industry representatives concerned with IPR enforcement. Nonetheless, some industry officials acknowledged that U.S. actions cannot always overcome challenges presented by political and economic factors in other countries. Industry support occurs in an environment where, despite improvements such as strengthened foreign IPR legislation, the situation may be worsening overall for some intellectual property sectors. For example, according to copyright industry estimates, losses due to piracy grew markedly in recent years. 
The entertainment and business software sectors, for example, which are very supportive of USTR and other agencies, face an environment where their optical media products are increasingly easy to reproduce, and digitized products can be distributed around the world quickly and easily via the Internet. According to an intellectual property association representative, counterfeiting trademarks has also become more pervasive in recent years. Counterfeiting affects more than just luxury goods; it also affects various industrial goods. Several interagency mechanisms exist to coordinate overseas intellectual property policy initiatives, development and assistance activities, and law enforcement efforts, although these mechanisms’ level of activity and usefulness varies. The mechanisms include interagency coordination on trade (IPR) issues; the IPR Training Coordination Group, which maintains a database of training activities; the National Intellectual Property Law Enforcement Coordination Council; and the National IPR Coordination Center. Apart from formal coordination bodies, regular, informal communication and coordination regarding intellectual property issues also occurs among policy agencies in the United States and in overseas embassies and is viewed as important to the coordination process. According to government and industry officials, an interagency trade policy mechanism established by Congress has operated effectively in reviewing IPR issues (see fig. 2). In 1962, the Congress established the mechanism to assist USTR in developing policy on trade and trade-related investment, and the annual Special 301 review is conducted through this mechanism. Three tiers of committees constitute the principal mechanism for developing and coordinating U.S. government positions on international trade, including IPR.
The Trade Policy Review Group (TPRG) and the Trade Policy Staff Committee (TPSC), administered and chaired by USTR, are the subcabinet interagency trade policy coordination groups that participate in trade policy development. More than 80 working-level subcommittees are responsible for providing specialized support for the TPSC. One of the specialized subcommittees is central to conducting the annual Special 301 review and determining the results of the review. During the 2004 review, which began early in the year, the Special 301 subcommittee met formally seven times, according to a USTR official. The subcommittee reviewed responses to a Federal Register request for information about intellectual property problems around the world; it also reviewed responses to a cable sent to every U.S. embassy soliciting specific information on IPR issues. IPR industry associations provided lengthy, detailed submissions to the U.S. government during the Special 301 review; such submissions identify IPR problems in countries around the world and are an important component in making a determination as to which countries will be cited in the final report. After reaching its own decisions on country placement, the subcommittee submitted its proposal to the Trade Policy Staff Committee. The TPSC met twice and submitted its recommendations to the TPRG for final approval. The TPRG reached a final decision via e-mail, and the results of this decision were announced with the publication of the Special 301 report on May 3, 2004. The 2004 review is considered typical of how the annual process is conducted. In addition, this subcommittee can meet at other times to address IPR issues as appropriate. According to U.S. government and industry officials, this interagency process is rigorous and effective.
A USTR official stated that the Special 301 subcommittee is very active, and subcommittee leadership demonstrates a willingness to revisit issues raised by other agencies and reconsider positions. A Commerce official told us that the Special 301 review is one of the best tools for interagency coordination in the government and that the review involves a “phenomenal” amount of communication. A Copyright Office official noted that coordination during the review is frequent and effective. A representative for copyright industries also told us that the process works well and is a solid interagency effort. The IPR Training Coordination Group, intended to inform its participants about IPR training activities and facilitate collaboration, developed a database to record and track training events, but we found that the database was incomplete. This voluntary, working-level group comprises representatives of U.S. agencies and industry associations involved in IPR programs and training and technical assistance efforts overseas or for foreign officials. Meetings are held approximately every 4 to 6 weeks and are well attended by government and private sector representatives. The State Department leads the group and supplies members with agendas and meeting minutes. Training Coordination Group meetings in 2003 and 2004 have included discussions on training “best practices,” responding to country requests for assistance, and improving IPR awareness among embassy staff. According to several agency and private sector participants, the group is a useful mechanism that keeps participants informed of the IPR activities of other agencies or associations and provides a forum for coordination. Since it does not independently control budgetary resources, the group is not responsible for sponsoring or evaluating specific U.S. government training events. 
One agency official noted that, partly owing to the lack of funding coordination, the training group serves more as a forum to inform others regarding already-developed training plans than as a group to actively coordinate training activities across agencies. Officials at the Department of Commerce’s Commercial Law Development Program and USPTO commented that available funds, more than actual country needs, often determine what training they are able to offer. A private sector official also voiced this concern, and several agency and industry officials commented that more training opportunities were needed. A Justice official also noted that if there were more active interagency consultations, training could be better targeted to countries that need criminal enforcement training. The Training Coordination Group helped develop a public training database, which it uses as a resource to identify planned activities and track past efforts. However, although the database has improved in recent years to include more training events and better information, it remains incomplete. Officials from the Copyright Office and USPTO stated that the database should contain records of all of their training efforts, but officials from other agencies, including the Departments of Commerce, State, and Justice, and the FBI, acknowledged that it might not record all the training events they have conducted. Although the group’s meetings help to keep the database updated by identifying upcoming training offered by members that have not been entered into the database, training activities that are not raised at the meeting or that are sponsored by embassies or an agency not in attendance may be overlooked. In addition, USAID submits training information only once per year at the conclusion of its own data-gathering exercise. 
Since USAID is a major sponsor of training activities—in 2002, according to the database, USAID sponsored or cosponsored nearly one-third of the total training events—the lack of timely information is notable. Several members expressed frustration that USAID does not contribute to the database regularly and inform the group about training occurring through its missions. USAID officials noted that the decentralization of their agency makes it difficult for them to address these concerns, because individual missions plan and implement training and technical assistance activities independently. The National Intellectual Property Law Enforcement Coordination Council (NIPLECC), created by the Congress in 1999 to coordinate domestic and international intellectual property law enforcement among U.S. federal and foreign entities, seems to have had little impact. NIPLECC consists of (1) the Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office; (2) the Assistant Attorney General, Criminal Division; (3) the Under Secretary of State for Economic and Agricultural Affairs; (4) the Deputy United States Trade Representative; (5) the Commissioner of Customs; and (6) the Under Secretary of Commerce for International Trade. NIPLECC is also required to consult with the Register of Copyrights on law enforcement matters relating to copyright and related rights. NIPLECC’s authorizing legislation did not include the FBI as a member, despite the bureau’s pivotal role in law enforcement. However, according to representatives of the FBI, USPTO, and Justice, the FBI should be a member. NIPLECC, which has no independent staff or budget, is cochaired by USPTO and Justice. In the council’s nearly 4 years of existence, its primary output has been three annual reports to the Congress, which are required by statute.
In its first year, according to the first annual report, NIPLECC met four times to begin shaping its agenda. It also consulted with industry and accepted written comments from the public related to what matters the council should address and how it should structure council-industry cooperation. It drafted a working paper detailing draft goals and proposed activities for the council. Goals and activities identified in the first report were “draft” only, because of the imminent change in administration. Although left open for further consideration, the matters raised in this report were not specifically addressed in any subsequent NIPLECC reports. NIPLECC’s second annual report states that the council’s mission includes “law enforcement liaison, training coordination, industry and other outreach and increasing public awareness.” In particular, the report says, the council “determined that efforts should focus on a campaign of public awareness, at home and internationally, addressing the importance of protecting intellectual property rights.” However, other than a one-page executive summary of NIPLECC’s basic mission, the body of the second annual report consists entirely of individual agencies’ submissions on their activities and details no activities undertaken by the council. NIPLECC met twice in the year between the first and second reports. The third annual report also states that “efforts should focus on a campaign of public awareness, at home and internationally, addressing the importance of intellectual property rights.” Although this is identical to the language in the previous year’s report, there is little development of the theme, and no evidence of actual progress over the course of the previous year. Like the previous year’s report, other than a single-page executive summary, the body of the report consists of individual agency submissions detailing agency efforts, not the activities or intentions of the council. 
The report does not provide any detail about how NIPLECC has, in its third year, coordinated domestic and international intellectual property law enforcement among federal and foreign entities. Under its authorizing legislation, NIPLECC has a broad mandate. According to interviews with industry officials and officials from NIPLECC member agencies, and as evidenced by its own legislation and reports, NIPLECC continues to struggle to define its purpose and has as yet had little discernible impact. Indeed, officials from more than half of the member agencies offered criticisms of NIPLECC, remarking that it is unfocused, ineffective, and “unwieldy.” In official comments to the council’s 2003 annual report, major IPR industry associations expressed a sense that NIPLECC is not undertaking any independent activities or effecting any impact. One industry association representative stated that there is a need for law enforcement to be made more central to U.S. IPR efforts and said that although he believes the council was created to deal with this issue, it has “totally failed.” The lack of communication regarding enforcement results in part from complications such as concerns regarding the sharing of sensitive law enforcement information and from the different missions of the various agencies involved in intellectual property actions overseas. According to an official from USPTO, NIPLECC is hampered primarily by its lack of independent staff and funding. He noted, for example, a proposed NIPLECC initiative for a domestic and international public awareness campaign that has not been implemented owing to insufficient funds. According to a USTR official, NIPLECC needs to define a clear role in coordinating government policy. A Justice official stressed that, when considering coordination, it is important to avoid creating an additional layer of bureaucracy that may detract from efforts devoted to each agency’s primary mission.
This official also commented that while NIPLECC’s stated purpose of enhancing interagency enforcement coordination has not been achieved, the shortcomings of NIPLECC should not suggest an absence of effective interagency coordination elsewhere. Despite NIPLECC’s difficulties thus far, we heard some positive comments regarding this group. For example, an official from USPTO noted that the IPR training database web site resulted from NIPLECC efforts. Further, an official from the State Department commented that NIPLECC has had some “trickle-down” effects, such as helping to prioritize the funding and development of the intellectual property database at the State Department. Although NIPLECC principals meet infrequently and NIPLECC has undertaken few concrete activities, this official noted that NIPLECC is the only forum for bringing enforcement, policy, and foreign affairs agencies together at a high level to discuss intellectual property issues. A USPTO official stated that NIPLECC has potential, but needs to be “energized.” The National IPR Coordination Center (the IPR Center) in Washington, D.C., a joint effort between DHS and the FBI, began limited operations in 2000. According to a DHS official, the coordination between DHS, the FBI, and industry and trade associations makes the IPR Center unique. The IPR Center is intended to serve as a focal point for the collection of intelligence involving copyright and trademark infringement, signal theft, and theft of trade secrets. Center staff analyze intelligence that is collected through industry referrals of complaints (allegations of IPR infringements) and, if criminal activity is suspected, provide the information for use by FBI and DHS field components. The FBI at the IPR Center holds quarterly meetings with 11 priority industry groups to discuss pressing issues on violations within the specific jurisdiction of the FBI. 
Since its creation, the IPR Center has received 300 to 400 referrals, according to an IPR Center official. The center is also involved in training and outreach activities. For example, according to IPR Center staff, between May 2003 and April 2004, personnel from the center participated in more than 16 IPR training seminars and conducted 22 outreach events. The IPR Center is not widely used by industry. An FBI official associated with the IPR Center estimated that about 10 percent of all FBI industry referrals come through the center rather than going directly to FBI field offices. DHS officials noted that “industry is not knocking the door down” and that the IPR Center is perceived as underutilized. An FBI official noted that the IPR Center is functional but that it generally provides training, outreach, and intelligence to the field rather than serving as a primary clearinghouse for referral collection and review. The IPR Center got off to a slow start partly because, according to an FBI official, after the events of September 11, 2001, many IPR Center staff were reassigned, and the center did not become operational until 2002. The IPR Center is authorized for 24 total staff (16 from DHS and 8 from the FBI); as of July 2004, 20 staff (13 DHS, 7 FBI) were “on board” at the center, according to an IPR Center official. This official noted that the center’s use has been limited by the fact that big companies have their own investigative resources, and not all small companies are familiar with the IPR Center. In addition to the formal coordination efforts described, policy agency officials noted the importance of informal but regular communication among staff at the various agencies involved in the promotion or protection of intellectual property overseas. 
Several officials at various policy-oriented agencies, such as USTR and the Department of Commerce, noted that the intellectual property community was small and that all involved were very familiar with the relevant policy officials at other agencies in Washington, D.C. One U.S. government official said, “No one is shy about picking up the phone.” Further, State Department officials at U.S. embassies also regularly communicate with Washington, D.C. agencies regarding IPR matters and U.S. government actions. Agency officials noted that this type of coordination is central to pursuing U.S. intellectual property goals overseas. Although communication between policy and law enforcement agencies can occur through forums such as the NIPLECC, these agencies do not share specific information about law enforcement activities systematically. According to an FBI official, once a criminal investigation begins, case information stays within the law enforcement agencies and is not shared. A Justice official emphasized that criminal enforcement is fundamentally different from the activities of policy agencies and that restrictions exist on Justice’s ability to share investigation information, even with other U.S. agencies. Law enforcement agencies share investigation information with other agencies on an “as-needed” basis, and a USTR official said that there is no systematic means for obtaining information on law enforcement cases with international implications. An official at USPTO commented that coordination between policy and law enforcement agencies should be “tighter” and that both policy and law enforcement could benefit from improved communication. For example, in helping other countries draft IPR laws, policy officials could benefit from information on potential law enforcement obstacles identified by law enforcement officials.
Officials at the Department of State and USTR identified some formal and informal ways that law enforcement information may be incorporated into policy discussions and activities. They noted that enforcement agencies such as Justice and DHS participate in the formal Special 301 review and that officials at embassies or policy agencies consult and make use of the publicly available DHS seizure data on IPR-violating products. For example, a USTR official told us that USTR had raised seizures at U.S. borders in bilateral discussions with the Chinese. Discussions addressed time-series trends, both on an absolute and percentage basis, for the overall seizure figures available from DHS. This official noted that the agency will generally raise seizure figures with a foreign country if that country is a major violator, has consistently remained near the top of the list of violators, and/or has increasingly been the source of seized goods. In addition, a Justice official noted that the department increasingly engages in policy activities, such as the Special 301 annual review and the negotiation of free trade agreements, as well as training efforts, to improve coordination between policy and law enforcement agencies and to strengthen international IPR enforcement. The impact of U.S. activities is challenged by numerous factors. For example, internally, competing U.S. policy objectives can affect how much the U.S. government can accomplish. Beyond internal factors, the willingness of a foreign country to cooperate in improving its IPR is affected by that country’s domestic policy objectives and economic interests, which may complement or conflict with U.S. objectives. In addition, many economic factors, including low barriers to entering the counterfeiting and piracy business and large price differences between legitimate and fake goods as well as problems such as organized crime, pose challenges to U.S. 
and foreign governments’ efforts, even in countries where the political will for protecting intellectual property exists. Because intellectual property protection is one among many objectives that the U.S. government pursues overseas, it is viewed in the context of broader U.S. foreign policy interests where other objectives may receive a higher priority at certain times in certain countries. Industry officials with whom GAO met noted, for example, their belief that policy priorities related to national security were limiting the extent to which the United States undertook activities or applied diplomatic pressure related to IPR issues in some countries. Officials at the Department of Justice and the FBI also commented that counterterrorism, not IPR, is currently the key priority for law enforcement. Further, although industry is supportive of U.S. efforts, many industry representatives commented that U.S. agencies need to increase the resources available to better address IPR issues overseas. The impact of U.S. activities is affected by a country’s own domestic policy objectives and economic interests, which may complement or conflict with U.S. objectives. U.S. efforts are more likely to be effective in encouraging government action or achieving impact in a foreign country if support for intellectual property protection exists there. Groups in a foreign country whose interests align with that of the United States can bolster U.S. efforts. For example, combating music piracy in Brazil has gained political attention and support because Brazil has a viable domestic music industry and thus has domestic interests that have become victims of widespread piracy. Further, according to a police official in Rio de Janeiro, efforts to crack down on street vendors are motivated by the loss of tax revenues from the informal economy. 
The unintended effect of these local Brazilian efforts has been a crackdown on counterfeiting activities because the informal economy is often involved in selling pirated and counterfeit goods on the streets. Likewise, the Chinese government has been working with a U.S. pharmaceutical company on medicines safety training to reduce the amount of fake medicines produced in China (see fig. 3). However, U.S. efforts are less likely to achieve impact if no such domestic support exists in other nations. Although U.S. options such as removing trade preference program benefits, considering trade sanctions, or visibly publicizing weaknesses in foreign IPR protection can provide incentives for increased protection of IPR, such policies may not be sufficient alone to counter existing incentives in foreign countries. In addition, officials in some countries view providing strong intellectual property protection as an impediment to development. A report by the Commission on Intellectual Property Rights (established by the British government) points out that strong IPR can allow foreign firms selling to developing countries to obtain patent protection, drive out domestic competition, and service the market through imports rather than domestic manufacture. The report also notes that strong intellectual property protection increases the costs of essential medicines and agricultural inputs, affecting poor people and farmers particularly negatively. A lack of “political will” to enact IPR protections makes it difficult for the U.S. government to achieve impact in locations where a foreign government maintains such positions. Many economic factors complicate and challenge U.S. and foreign governments’ efforts, even in countries where the political will for protecting intellectual property exists. These factors include low barriers to entering the counterfeiting and piracy business and potentially high profits for producers.
For example, one industry representative pointed out that it is much more profitable to buy and resell software than to sell cocaine. In addition, the low prices of fake products are attractive to consumers. The economic incentives can be especially acute in countries where people have limited income. Moreover, technological advances allowing for high-quality, inexpensive, and accessible reproduction and distribution in some industries have exacerbated the problem. Further, many government and industry officials also believe that both the chance of getting caught for counterfeiting and piracy and the penalties imposed on those who are caught are too low. For example, FBI officials pointed out that domestic enforcement of intellectual property laws has been weak, and consequently the level of deterrence has been inadequate. These officials said that criminal prosecutions and serious financial penalties are necessary to deter intellectual property violations. The increasing involvement of organized crime in the production and distribution of pirated products further complicates enforcement efforts. Federal and foreign law enforcement officials have linked intellectual property crime to national and transnational organized criminal operations. According to the Secretary General of Interpol, intellectual property crime is now dominated by criminal organizations, and law enforcement authorities have identified some direct and some alleged links between intellectual property crime and paramilitary and terrorist groups. Justice Department officials noted that they are aware of the allegations linking intellectual property crime and terrorist funding and that they are actively exploring all potential avenues of terrorist financing, including through intellectual property crime. However, to date, U.S. law enforcement has not found solid evidence that intellectual property has been or is being pirated in the United States by or for the benefit of terrorists.
The involvement of organized crime increases the sophistication of counterfeiting operations, as well as the challenges and threats to law enforcement officials confronting the violations. Moreover, according to officials in Brazil, organized criminal activity surrounding intellectual property crime is linked with official corruption, which can pose an additional obstacle to U.S. and foreign efforts to promote enhanced enforcement. Many of these challenges are evident in the optical media industry, which includes music, movies, software, and games. Even in countries where interests exist to protect domestic industries, such as the domestic music industry in Brazil or the domestic movie industry in China, economic and law enforcement challenges can be difficult to overcome. For example, the cost of reproduction technology and copying digital media is low, making piracy an attractive employment opportunity, especially in a country where formal employment is hard to obtain. According to the Business Software Alliance, a CD recorder is relatively inexpensive (less than $1,000). The huge price differentials between pirated CDs and legitimate copies also create incentives on the consumer side. For example, when we visited a market in Brazil, we observed that the price for a legitimate DVD was approximately ten times the price for a pirated DVD. Even if consumers are willing to pay extra to purchase the legitimate product, they may not do so if the price differences are too great for similar products. We found that music companies have experimented with lowering the price of legitimate CDs in Russia and Ukraine. A music industry representative in Ukraine told us that this strategy is intended to make legitimate products really affordable to consumers. However, whether this program is successful in gaining market share and reducing sales of pirated CDs is unclear. 
During our visit to a large Russian marketplace, a vendor encouraged us to purchase a pirated CD despite the fact that she also had the same CD for sale under the legitimate reduced-price program. Further, the potentially high profit makes optical media piracy an attractive venture for organized criminal groups. Industry and government officials have noted criminal involvement in optical media piracy and the resulting law enforcement challenges. Recent technological advances have also exacerbated optical media piracy. Because reproduction equipment is mobile, producers can easily move it to another location, further complicating enforcement efforts. Industry and government officials described this phenomenon as the “whack-a-mole” problem, noting that when progress is made in one country, piracy operations often simply move to a neighboring location. According to a Ukrainian official, many production facilities moved to Russia after Ukraine started closing down CD plants. These economic incentives and technological developments have resulted in particularly high rates of piracy in the optical media sector. Likewise, the Internet provides a means to transmit and sell illegal software or music on a global scale. According to an industry representative, the ability of Internet pirates to hide their identities or operate from remote jurisdictions often makes it difficult for IPR holders to find them and hold them accountable. To seek improved protection of U.S. intellectual property in foreign countries, U.S. agencies make use of a wide array of tools and opportunities, ranging from routine discussions with foreign government officials, to trade sanctions, to training and technical assistance, to presidential-level dialogue. The U.S. government has demonstrated a commitment to addressing IPR issues in foreign countries using multiple agencies and U.S. embassies overseas. However, law enforcement actions are more restricted than other U.S.
activities, owing to factors such as a lack of jurisdiction overseas to enforce U.S. law. U.S. agencies and industry communicate regularly, and industry provides important support for various agency activities. Although the results of U.S. efforts to secure improved intellectual property protection overseas often cannot be precisely identified, the U.S. government is clearly and consistently engaged in this area and has had a positive impact. Agency and industry officials have cited the Special 301 review most frequently as the U.S. government tool that has facilitated IPR improvements overseas. The effects of U.S. actions are most evident in strengthened foreign IPR legislation and new international obligations. Industry clearly supports U.S. efforts, recognizing that they have contributed to improvements such as strengthened IPR laws overseas. U.S. efforts are now focused on enforcement, since effective enforcement is often the weak link in intellectual property protection overseas and the situation is deteriorating for some industries. Several IPR coordination mechanisms exist, with the interagency coordination that occurs during the Special 301 process standing out as the most significant and active. Of note, the Training Coordination Group is a completely voluntary effort and is generally cited as a positive development. Further, the database created by this group is useful, although it remains incomplete. Conversely, the mechanism for coordinating intellectual property law enforcement, NIPLECC, has accomplished little that is concrete. Currently, little compelling information demonstrates a unique role for this group, bringing into question its effectiveness. In addition, it does not include the FBI, a primary law enforcement agency. Members, including NIPLECC leadership, have repeatedly acknowledged that the group continues to struggle to find an appropriate mission. 
As agencies continue to pursue IPR improvements overseas, they will face daunting challenges. These challenges include the need to create political will overseas, recent technological advancements that facilitate the production and distribution of counterfeit and pirated goods, and powerful economic incentives for both producers and consumers, particularly in developing countries. Further, as the U.S. government focuses increasingly on enforcement, it will face different and complex factors, such as organized crime, that may prove quite difficult to address. Because the authorizing legislation for the National Intellectual Property Law Enforcement Coordination Council (NIPLECC) does not clearly define the council’s mission, NIPLECC has struggled to establish its purpose and unique role. If the Congress wishes to maintain NIPLECC and take action to increase its effectiveness, the Congress may wish to consider reviewing the council’s authority, operating structure, membership, and mission. Such consideration could help the NIPLECC identify appropriate activities and operate more effectively to coordinate intellectual property law enforcement issues. We received technical comments from USTR, the Departments of State, Justice, and Homeland Security, the Copyright Office, and USITC. We incorporated these comments into the report as appropriate. We also received formal comment letters from the Department of Commerce (which includes comments from USPTO), the Department of Homeland Security, and USAID. USAID raised concerns regarding our findings on the agency’s contribution to an online IPR training database. No agency disagreed with our overall findings and conclusions, though all suggested several wording changes and/or additions to improve the report’s completeness and accuracy. The FBI provided no comments on the draft report. 
As arranged with your offices, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to other interested committees. We will also provide copies to the Secretaries of State, Commerce, and Homeland Security; the Attorney General; the U.S. Trade Representative; the Director of the Federal Bureau of Investigation; the Director of the U.S. Patent and Trademark Office; the Register of Copyrights; the Administrator of the U.S. Agency for International Development; and the Chairman of the U.S. International Trade Commission. We will make copies available to other interested parties upon request. If you or your staff have any questions regarding this report, please call me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix XI. The Chairmen of the House Committees on Government Reform, International Relations, and Small Business requested that we review U.S. government efforts to improve intellectual property protection overseas. This report addresses (1) the specific efforts that U.S. agencies have undertaken; (2) the impact, and industry views, of these actions; (3) the means used to coordinate these efforts; and (4) the challenges that these efforts face in generating their intended impact. To describe agencies’ efforts, as well as the impact of these efforts, we analyzed key U.S. government intellectual property reports, such as the annual “Special 301” reports for the years 1994 through 2004, and reviewed information available from databases such as the State Department’s intellectual property training database and the Department of Homeland Security’s online database of counterfeit goods seizures. 
To assess the reliability of the online Department of Homeland Security seizure data (www.cbp.gov/xp/cgov/import/commercial_enforcement/ipr/seizure/), we interviewed the officials responsible for collecting the data and performed reliability checks on the data. Although we found that the agency had implemented a number of checks and controls to ensure the data’s reliability, we also noted some limitations in the precision of the estimates. However, we determined that the data were sufficiently reliable to provide a broad indication of the major products seized and the main country from which the seized imports originated. Our review of the reliability of the State Department’s training database is described below as part of our work to review agency coordination. While we requested a comprehensive listing of countries assessed and GSP benefits removed due to IPR problems, USTR was unable to provide us with such data because this information is not regularly collected. We met with officials from the Departments of State, Commerce, Justice, and Homeland Security; the Office of the U.S. Trade Representative (USTR); the U.S. Patent and Trademark Office (USPTO); the Copyright Office; the Federal Bureau of Investigation (FBI); the U.S. International Trade Commission (USITC); and the U.S. Agency for International Development (USAID). We also met with officials from the following industry groups that address intellectual property issues: the International Intellectual Property Alliance, the International AntiCounterfeiting Coalition, the Motion Picture Association of America, the Recording Industry Association of America, the Entertainment Software Association, the Association of American Publishers, the Software and Information Industry Association, the International Trademark Association, the Pharmaceutical Research and Manufacturers of America, and the National Association of Manufacturers. We reviewed reports and testimonies that such groups had prepared. 
In addition, we attended a private sector intellectual property rights enforcement conference and a U.S. government training session sponsored by USPTO and the World Intellectual Property Organization (WIPO). We met with officials from the World Trade Organization (WTO) and WIPO in Geneva, Switzerland, to discuss their interactions with U.S. agency officials. We reviewed literature modeling trade damages due to intellectual property violations and, in particular, examined the models used to estimate such losses in Ukraine, which has been subject to U.S. trade sanctions since 2002. We met with officials to discuss the methodologies and processes employed in the Ukraine sanction case. To identify the impact of trade sanctions against Ukraine, we studied overall U.S. imports from Ukraine, as well as imports of commodities on the sanction list, from 2000 to 2003. Finally, to verify information provided to us by industry and agency officials and obtain detailed examples of U.S. government actions overseas and the results of those actions, we traveled to four countries where serious IPR problems have been identified—Brazil, China, Russia, and Ukraine—and where the U.S. government has taken measures to address these problems. We met with U.S. embassy and foreign government officials and with U.S. companies and industry groups operating in those countries. To choose the case study countries, we evaluated countries according to a number of criteria that we established, including the extent of U.S. government involvement; the economic significance of the country and seriousness of the intellectual property problem; the coverage of key intellectual property areas (patent, copyright, and trademark) and industries (e.g., optical media, pharmaceuticals); and agency and industry association recommendations. We collected and reviewed U.S. government and industry documents in these countries. To describe and assess the coordination mechanisms for U.S.
efforts to address intellectual property rights (IPR) overseas, we identified formal coordination efforts (mandated by law, created by executive decision, or occurring and documented on a regular basis) and reviewed documents describing agency participation, mission, and activities. We interviewed officials from agencies participating in the Special 301 subcommittee of the Trade Policy Staff Committee, the National Intellectual Property Law Enforcement Coordination Council, the IPR Training Coordination Group, and the IPR Center. While USTR did provide GAO with a list of agencies that participated in Special 301 subcommittee meetings during the 2004 review, USTR officials requested that we not cite this information in our report on the grounds that this information is sensitive. USTR asked that we instead list all the agencies that are invited to participate in the TPSC process, though agency officials acknowledged that, based upon their own priorities, not all agencies actually participate. We also met with officials from intellectual property industry groups who participate in the IPR Training Coordination Group and who are familiar with the other agency coordination efforts. We attended a meeting of the IPR Training Coordination Group to witness its operations, and we visited the IPR Center. To further examine the coordination of agency training efforts, we conducted a data reliability assessment of the IPR Training Database (www.training.ipr.gov) to determine whether it contained an accurate and complete record of past and planned training events. To assess the completeness and reliability of the training data in the database, we spoke with officials at the Department of State about the management of the database and with officials at the agencies about the entering of the data in the database. 
We also conducted basic tests of the data’s reliability, including checking whether agencies had entered information on their training events in the database and whether that information appeared accurate. We assessed the reliability of these data not for inclusion in this report, but to determine how useful they are to the agencies that provide IPR training. As noted on pages 34 and 35, we determined that these data had some problems of timeliness and completeness, which limited their usefulness. Finally, we compared the data with documents containing similar information, provided by some of the agencies, to check the data’s consistency. To identify other forms of coordination, we spoke with U.S. agency officials about informal coordination and communication apart from the formal coordination bodies cited above. To identify the challenges that agencies’ activities face in generating their intended impact, we spoke with private sector and embassy personnel in the case study countries about political and economic circumstances relevant to intellectual property protection and the impact of these circumstances on U.S. activities. We also spoke with law enforcement personnel at the Departments of Justice and Homeland Security, the FBI, and foreign law enforcement agencies in Washington, D.C., and our case study countries about the challenges they face in combating intellectual property crime overseas. We visited markets in our case study countries where counterfeit and pirated merchandise is sold to compare local prices for legitimate and counterfeit products and to confirm (at times with industry experts present) that counterfeit goods are widely and easily available. We reviewed embassy cables, agency and industry reports, and congressional testimony provided by agency, industry, and overseas law enforcement officials documenting obstacles to progress in IPR protection around the world.
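The basic completeness and timeliness tests described above can be sketched in a few lines of code. This is only an illustration; the record fields and events below are hypothetical and are not drawn from the actual IPR Training Database:

```python
from datetime import date

# Hypothetical training-event records; the field names are illustrative only.
events = [
    {"agency": "USPTO", "topic": "Copyright enforcement", "date": date(2003, 10, 15)},
    {"agency": "Justice", "topic": "Criminal IPR prosecution", "date": date(2004, 3, 2)},
    {"agency": "State", "topic": None, "date": date(2002, 6, 1)},  # incomplete record
]

def incomplete_records(records):
    """Completeness check: flag records with any missing (None) field."""
    return [r for r in records if any(v is None for v in r.values())]

def stale_records(records, as_of, max_age_days=365):
    """Timeliness check: flag records older than max_age_days as of a given date."""
    return [r for r in records if (as_of - r["date"]).days > max_age_days]

incomplete = incomplete_records(events)
stale = stale_records(events, as_of=date(2004, 7, 1))
```

A reviewer would then compare any flagged records against agency-provided documents to check consistency, as the report describes.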
We reviewed studies and gathered information at our interviews on the arguments for and against IPR protection in developing countries. In addition to the general discussion, we chose the optical media sector to illustrate the challenges facing antipiracy efforts. To identify the challenges, we interviewed industry representatives from the optical media sector both in the United States and overseas regarding their experiences in fighting piracy. We reviewed Special 301 reports and industry submissions to study the optical media piracy levels over the years. In Brazil, Russia, and Ukraine, we recorded the prices of legal and illegal music CDs, movies, and software at local markets. We analyzed overall U.S. imports from Ukraine as well as imports of the products on the sanction list. The source of the overall import data is the U.S. Bureau of the Census, and the source of the import data of the products on the sanction list is the Trade Policy Information System (TPIS), a Web site operated by the Department of Commerce. In order to assess the reliability of the overall import data, we (1) reviewed “U.S. Merchandise Trade Statistics: A Quality Profile” by the Bureau of the Census and (2) discussed the data with the Chief Statistician at GAO. We determined the data to be sufficiently reliable for our purpose, which was to track the changes in U.S. overall imports from Ukraine from 2000 through 2003. In order to assess the reliability of the data from TPIS, we did internal checks on the data and checked the data against a Bureau of the Census publication. We determined the data to be sufficiently reliable for our purpose, which was to track changes in U.S. imports from Ukraine of the goods on the sanction list. We conducted our work in Washington, D.C.; Geneva, Switzerland; Brasilia, Rio de Janeiro, and Sao Paulo, Brazil; Beijing, China; Moscow, Russia; and Kiev, Ukraine, from June 2003 through July 2004, in accordance with generally accepted government auditing standards.
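Tracking the year-to-year changes in imports, as described above, amounts to simple percentage-change arithmetic. A minimal sketch follows; the dollar figures are invented placeholders, not actual Census or TPIS data:

```python
# Hypothetical annual U.S. import values in millions of dollars;
# these numbers are placeholders, not actual trade statistics.
imports = {2000: 100.0, 2001: 90.0, 2002: 75.0, 2003: 80.0}

def yoy_percent_change(series):
    """Year-over-year percentage change for a {year: value} series."""
    years = sorted(series)
    return {
        y: round((series[y] - series[y - 1]) / series[y - 1] * 100, 1)
        for y in years[1:]
    }

changes = yoy_percent_change(imports)
```

With the placeholder figures above, imports fall 10.0 percent in 2001 and 16.7 percent in 2002, then rise 6.7 percent in 2003; the same calculation applied to actual data would show the trends the report examined.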
Since the implementation of the WTO Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) in 1996, the United States has brought a total of 12 TRIPS-related cases against 11 countries and the European Community (EC) to the WTO through that organization’s dispute settlement mechanism (see below). Of these, 8 cases were resolved by mutually agreed solutions. In nearly all of these cases, U.S. concerns were addressed via changes in laws or regulations by the other party. Only 2 (involving Canada and India) have resulted in the issuance of a panel report, both of which were favorable rulings for the United States. Consultations are ongoing in one additional case, against Argentina, and this case has been partially settled. One case, involving an EC regulation protecting geographical indications, has gone beyond consultations and is in WTO dispute settlement panel proceedings. 1. Argentina: pharmaceutical patents — Brought by U.S., DS171 and DS196 Case originally brought by the United States in May 1999. Consultations ongoing, although 8 of 10 originally disputed issues have been resolved. 2. Brazil: “local working” of patents and compulsory licensing — Brought by U.S., DS199 Case originally brought by the United States in June 2000. Settled between the parties in July 2001. Brazil agreed to hold talks with the United States prior to using the disputed article against a U.S. company. 3. Canada: term of patent protection — Brought by U.S., DS170 Case originally brought by the United States in May 1999. Panel report issued in May 2000 decided for the United States (WT/DS170/R) and was later upheld by an Appellate Body report. According to USTR, Canada announced implementation of a revised patent law on July 24, 2001. 4. Denmark: enforcement, provisional measures, civil proceedings — Brought by U.S., DS83 Case originally brought by the United States in May 1997.
Settled between the parties in June 2001. In March 2001, Denmark passed legislation granting the relevant judicial authorities the authority to order provisional measures in the context of civil proceedings involving the enforcement of intellectual property rights. 5. EC: trademarks and geographical indications — Brought by U.S., DS174 Case originally brought by U.S. in June 1999. WTO panel proceedings are ongoing. 6. Greece and EC: motion pictures, TV, enforcement — Brought by U.S., DS124 and DS125 Case originally brought by the United States in May 1998. Greece passed a law in October 1998 that provided an additional enforcement remedy for copyright holders whose rights were infringed upon by TV stations in Greece. Based on the implementation of this law, the case was settled between the parties in March 2001. 7. India: patents, “mailbox,” exclusive marketing — Brought by EC, DS79 — Brought by U.S., DS50 Case originally brought by the United States in July 1996. Panel report issued in September 1997 decided for the United States (WT/DS50/R). 8. Ireland and EC: copyright and neighbouring rights — Brought by U.S., DS82 and DS115 Case originally brought by the United States in May 1997. Settled between the parties in November 2000. Ireland passed a law and amended its copyright law in ways that satisfied U.S. concerns. 9. Japan: sound recordings intellectual property protection — Brought by EC, DS42 — Brought by U.S., DS28 Case originally brought by the United States in February 1996. Settled between the parties in January 1997. Japan passed amendments to its copyright law that satisfied U.S. concerns. 10. Pakistan: patents, “mailbox,” exclusive marketing — Brought by U.S., DS36 Case originally brought by the United States in May 1996. Settled between the parties in February 1997. Pakistan issued rulings with respect to the filing and recognition of patents that satisfied U.S. concerns. 11.
Portugal: term of patent protection — Brought by U.S., DS37 Case originally brought by the United States in May 1996. Settled between the parties in October 1996. Portugal issued a law addressing terms of patent protection in a way that satisfied U.S. concerns. 12. Sweden: enforcement, provisional measures, civil proceedings — Brought by U.S., DS86 Case originally brought by the United States in June 1997. Settled between the parties in December 1998. In November 1998, Sweden passed legislation granting the relevant judicial authorities the authority to order provisional measures in the context of civil proceedings involving the enforcement of intellectual property rights. Brazil is generally credited with having adequate laws to protect intellectual property, but the enforcement of these laws remains a problem. Officials we interviewed in Brazil identified several reasons for the weak enforcement, including insufficient and poorly trained police and a judiciary hampered by a lack of resources, inefficiencies and, in some cases, corruption. Most broadly, they cited the weak economy and lack of formal sector employment as reasons for the widespread sale and consumption of counterfeit goods. One Brazilian official commented that the current intellectual property protection system has generated large price gaps between legitimate and illegitimate products, making it very difficult to combat illegitimate products. However, private sector officials also pointed to high tax rates on certain goods as a reason for counterfeiting. Regardless, the sale of counterfeit merchandise abounds. One market in Sao Paulo that we visited covered many city blocks and was saturated with counterfeit products. For example, we identified counterfeit U.S. products such as Nike shoes, Calvin Klein perfume, and DVDs of varying quality. 
The market not only sold counterfeit products to the individual consumer, but many vendors also served as “counterfeit wholesalers” who offered even cheaper prices for purchasing counterfeit sunglasses in bulk, for example. According to industry representatives, this market also has ties to organized crime. Private and public sector officials identified two significant challenges to improving Brazil’s intellectual property protection: establishing better border protection, particularly from Paraguay—a major source of counterfeit goods—and a better-functioning National Industrial Property Institute (INPI). The acting president of INPI acknowledged that, owing to insufficient personnel, money, and space, INPI is not functioning well and has an extremely long backlog of patent and trademark applications; it can currently take as long as 9 years to get a patent approved. Two private sector representatives commented that U.S. assistance to INPI could be very valuable. Patent problems have been exacerbated by an ongoing conflict between INPI and the Ministry of Health over the authority to grant pharmaceutical patents. A pharmaceutical industry association report claims that the current system, which requires the Ministry of Health to approve all pharmaceutical patents, is in violation of TRIPS. The U.S. government has been involved in various activities to promote better enforcement of intellectual property rights in Brazil. Brazil has been cited on the Special 301 Priority Watch List since 2002 and is currently undergoing a review to determine whether it should remain eligible for Generalized System of Preferences (GSP) benefits. In recent years, Brazilian officials have participated in training offered by USPTO in Washington, D.C., and have studied intellectual property issues in depth in the United States as participants in U.S.-sponsored programs.
The Departments of State, Justice, and Homeland Security have also sponsored or participated in training events or seminars on different intellectual property issues. The Department of State’s public affairs division has also worked on public awareness events and seminars. Officials from industry associations representing American companies, as well as officials from individual companies we met with, stated that they are generally satisfied with U.S. efforts to promote the protection of IPR in Brazil. Many had regular contact with embassy personnel to discuss intellectual property issues, and several had collaborated with U.S. agencies to develop and present seminars or training events in Brazil that they believed were useful tools for promoting IPR. The private sector officials we spoke with made some suggestions for improving U.S.-sponsored assistance, including consulting with the private sector earlier to identify appropriate candidates for training. However, private and public sector officials commented regularly on the usefulness of training activities provided by the United States, and many expressed a desire for more of these services. In particular, several officials expressed a hope that the United States would provide training and technical assistance to INPI. In February 2004, a senior Department of Commerce official discussed collaboration and technical assistance matters with a Brazilian minister, and USPTO staff recently traveled to Brazil to provide training at INPI. Overall, the direct impact of U.S. efforts was difficult to determine, but U.S. involvement regarding IPR in Brazil was widely recognized. Several industry and Brazilian officials we spoke with were familiar with the Special 301 report; many in the private sector had contributed to it via different mechanisms. One industry official commented that the Special 301 process is helpful in convincing the Brazilian authorities of the importance of intellectual property protection.
Others were less certain about whether the report had any impact. A Brazilian minister stated that the United States is the biggest proponent of IPR, although he did not believe that any particular U.S. program had had a direct impact on Brazilian intellectual property laws or enforcement. Others, however, believed that pressure from the U.S. government lent more credibility to the private sector’s efforts and may have contributed to changes in Brazilian intellectual property laws. Most private sector officials we spoke with agreed that the government’s interest in combating intellectual property crime has recently increased. They noted that developments have included the work of the Congressional Investigative Commission on Piracy (CPI) in the Brazilian Congress and newly formed special police groups to combat piracy. In addition, President Lula signed a law last year amending the penal code with respect to copyright violations; minimum sentences were increased to 2 years and now include a fine and provide for the seizure and destruction of counterfeit goods. However, these increased sanctions do not apply to software violations. According to an official with the Brazilian special police, the Brazilian government was moved to prosecute piracy more vigorously because government officials realized that the growing informal economy was resulting in the loss of tax revenue and jobs. However, a Brazilian state prosecutor and the CPI cited corruption and the involvement of organized crime in intellectual property violations as challenges to enforcement efforts. China’s protection of IPR has improved in recent years but remains an ongoing concern for the U.S. government and the business community. Upon accession to the WTO in December 2001, China was obligated to adhere to the terms of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). According to the U.S.
Trade Representative’s (USTR) 2003 review of China’s compliance with its WTO commitments, IPR enforcement was ineffective, and IPR infringement continued to be a serious problem in China. USTR reported that lack of coordination among Chinese government ministries and agencies, local protectionism and corruption, high thresholds for criminal prosecution, lack of training, and weak punishments hampered enforcement of IPR. Piracy rates in China continue to be excessively high and affect products from a wide range of industries. According to a 2003 report by China’s State Council’s Development Research Center, the market value of counterfeit goods in China is between $19 billion and $24 billion. Various U.S. copyright holders also reported that estimated U.S. losses due to the piracy of copyrighted materials have continued to exceed $1.8 billion annually. Pirated products in China include films, music, publishing, software, pharmaceuticals, chemicals, information technology, consumer goods, electric equipment, automotive parts, and industrial products, among many others. According to the International Intellectual Property Alliance, a coalition of U.S. trade associations, piracy levels for optical discs are at 90 percent and higher, almost completely dominating China’s local market. Furthermore, a U.S. trade association reported that the pharmaceutical industry not only loses roughly 10 to 15 percent of annual revenue in China to counterfeit products, but counterfeit pharmaceutical products also pose serious health risks. Since the first annual Special 301 review in 1989, USTR has initiated several Special 301 investigations on China’s IPR protection. However, since the conclusion of a bilateral IPR agreement with China in 1996, China has not been subject to a Special 301 investigation but has instead been subject to monitoring under Section 306. 
In 2004, USTR reviewed China’s implementation under Section 306 and announced that China would be subject to an out-of-cycle review in 2005. In addition to addressing China’s IPR protection through these statutory mechanisms, the U.S. government has been involved in various efforts to protect IPR in China. The U.S. government’s activities in China are part of an interagency effort involving several agencies, including USTR, State, Commerce, Justice, Homeland Security, USPTO, and the Copyright Office. In 2003, U.S. interagency actions in China to protect IPR included (1) engaging the Chinese government at various levels on IPR issues; (2) providing training and technical assistance for Chinese ministries, agencies, and other government entities on various aspects of IPR protection; and (3) providing outreach and assistance to U.S. businesses. Most private sector representatives we met with in China said that they are generally satisfied with the U.S. government’s efforts in China but noted areas for potential improvement. In 2003, U.S. government engagement with China on IPR issues ranged from high-level consultations with Chinese ministries to letters, demarches, and informal meetings between staff-level U.S. officials and their counterparts in the Chinese government. U.S. officials noted that during various visits to China in 2003, the Secretaries of Commerce and Treasury and the U.S. Trade Representative, as well as several subcabinet-level officials, urged their Chinese counterparts to develop greater IPR protection. U.S. officials said that these efforts were part of an overall strategy to ensure that IPR protection was receiving attention at the highest levels of China’s government. U.S. officials also noted that the U.S. Ambassador to China has placed significant emphasis on IPR protection. In 2002 and 2003, the U.S. government held an Ambassador’s Roundtable on IPR in China that brought together representatives from key U.S.
and Chinese agencies, as well as U.S. and Chinese private sector representatives. U.S. officials said that China Vice Premier Wu’s involvement in the 2003 roundtable was an indication that IPR was receiving attention at high levels of China’s government. One U.S. official stated that addressing pervasive systemic problems in China, such as lack of IPR protection, is “nearly impossible unless it stays on the radar at the highest levels” of the Chinese government. A second key component of U.S. government efforts to ensure greater protection of IPR in China involved providing numerous training programs and technical assistance to Chinese ministries and agencies. U.S. government outreach and capacity-building efforts included sponsoring speakers, seminars, and training on specific technical aspects of IPR protection to raise the profile and increase technical expertise among Chinese officials. The U.S. government targeted other programs to address the lack of criminalization of IPR violations in China. For example, an interagency U.S. government team (Justice, DHS, and Commerce) conducted a three-city capacity-building seminar in October 2003 on criminalization and enforcement. The program was cosponsored by the Chinese Procuratorate, the Chinese government’s prosecutorial arm. U.S. government officials noted that the program was unique because the seminar brought together officials from Chinese criminal enforcement agencies, including customs officials, criminal investigators, and prosecutors, as well as officials from administrative enforcement agencies. In March 2004, the Copyright Office hosted a week-long program for a delegation of Chinese copyright officials that provided technical assistance and training on copyright-related issues, including the enforcement of copyright laws, as well as outreach and relationship-building. The U.S. government has also provided outreach regarding IPR protection to U.S. 
businesses in China, and Commerce has played a lead role in this effort. For example, in late 2002, Commerce established a Trade Facilitation Office in Beijing to, among other things, provide outreach, advocacy, and assistance to U.S. businesses on market access issues, including IPR protection. Additionally, Foreign Commercial Service officers in China work with U.S. firms to identify and resolve cases of IPR infringement. Commerce officials indicated that increasing private sector awareness and involvement in IPR issues are essential to furthering IPR protection in China. GAO’s 2004 analysis of selected companies’ views on China’s implementation of its WTO commitments reported that respondents ranked IPR protection as one of the three most important areas of China’s WTO commitments but that most respondents thought China had implemented IPR reforms only to some or little extent. In general, other industry association and individual company representatives whom we interviewed in China were satisfied with the range of U.S. government efforts to protect IPR in China. Several industry representatives noted that they had regular contact with officials from various U.S. agencies in China and that the staff assigned to IPR issues were generally responsive to their firm’s or industry’s needs. Private sector representatives stated that the U.S. government’s capacity-building efforts were one of the most effective ways to promote IPR protection in China. Some representatives noted that Chinese government entities are generally very receptive to these types of training and information-sharing programs. However, some private sector representatives also said that the U.S. agencies could better target the programs to the appropriate Chinese audiences and follow up more to ensure that China implements the knowledge and practices disseminated through the training programs. Most private sector representatives we met with also said that the U.S. 
government efforts in China were generally well coordinated, but they indicated that they were not always able to determine which U.S. agency was leading the effort on a specific issue. Although Chinese laws are now, in principle, largely compliant with the strict letter of the TRIPS agreement, the U.S. government and industry groups note that there are significant gaps in the law and enforcement policies that pose serious questions regarding China’s satisfaction of the TRIPS standards of effective and deterrent enforcement. In 2003, USTR found that China’s compliance with the TRIPS agreement had been largely satisfactory, although some improvements still needed to be made. Before its accession to the WTO, China had completed amendments to its patent law, trademark law, and copyright law, along with regulations for the patent law. Within several months after its accession, China issued regulations for the trademark law and copyright law. China also issued various sets of implementing rules, and it issued regulations and implementing rules covering specific subject areas, such as integrated circuits, computer software, and pharmaceuticals. China has taken some steps in administrative, criminal, and civil enforcement against IPR violators. According to USTR’s review, the central government promotes periodic anticounterfeiting and antipiracy campaigns as part of its administrative enforcement, and these campaigns result in a high number of seizures of infringing materials. However, USTR notes that the campaigns are largely ineffective because cases brought by the administrative authorities usually result in extremely low fines, which have virtually no deterrent effect on infringers. China’s authorities have pursued criminal prosecutions in a small number of cases, but the Chinese government lacks the transparency needed to determine the penalties imposed on infringers.
Last, China has seen an increased use of civil actions being brought for monetary damages or injunctive relief. This suggests an increasing sophistication on the part of China’s IPR courts, as China continues to make efforts to upgrade its judicial system. However, U.S. companies complain that the courts do not always enforce China’s IPR laws and regulations consistently and fairly. Despite the overall lack of IPR enforcement in China, IPR protection is receiving attention at high levels of the Chinese government. Notably, in October 2003, the government created an IPR Leading Group, headed by a vice premier, to address IPR protection in China. Several U.S. government officials and private sector representatives told us that high-level involvement by Vice Premier Wu would be critical to the success of future developments in IPR protection in China. In April 2004, the United States pressed IPR issues with China during a formal, cabinet-level consultative forum with China called the Joint Commission of Commerce and Trade (JCCT). In describing the results of the April 2004 JCCT meeting, USTR reported that China had agreed to undertake a number of near-term actions to address IPR protection. China’s action plan included increasing penalties for IPR infringement and launching a public awareness campaign on IPR protection. Additionally, China and the United States agreed to form an IPR working group under the JCCT to monitor China’s progress in implementing its action plan. Although the Russian government has demonstrated a growing recognition of the seriousness of IPR problems in the country and has taken some actions, serious problems persist. Counterfeiting and piracy are common (see fig. 4). For example, a Microsoft official told us that approximately 80 percent of business software is estimated as pirated in Russia, and that the Russian government is a “huge” user of pirated software. 
Further, the pharmaceutical industry estimates that up to 12 percent of drugs on the market in Russia are counterfeit. Of particular note to the U.S. government, piracy of optical media (e.g., CDs, DVDs, etc.) in Russia is rampant. According to an official from the Russian Anti-Piracy Organization, as much as 95 percent of optical media products produced in Russia are pirated. U.S. concern focuses on the production of pirated U.S. optical media products by some or all of the 30 optical media production facilities in Russia, 17 of which are located on Russian government-owned former defense sites where it has been difficult for inspection officials to gain access (though, according to an embassy official, access has recently improved). According to a U.S. embassy official, Russian demand for optical media products is estimated at 18 million units per year, but Russian production is estimated to be 300 million units. U.S. Embassy and private sector officials believe that the excess pirated products are exported to other countries. Industry estimates losses of over $1 billion annually as a result of this illegal activity. Russia has made many improvements to its IPR legislation, but the U.S. government maintains that more changes are needed. For example, the 2004 Special 301 report states that the Russian government is still working to amend its laws on protection of undisclosed information—in particular, protection for undisclosed test data submitted to obtain marketing approval for pharmaceuticals and agricultural chemicals. Further, U.S. industry and Russian officials view Russia’s IPR enforcement as inadequate and cite this as the largest deterrent to effective IPR protection in Russia. For example, the 2004 Special 301 report emphasizes that border enforcement is considered weak and that Russian courts do not have the authority in criminal cases to order forfeiture and destruction of machinery and materials used to make pirated and counterfeit products. 
Further, one Russian law enforcement official told us that since IPR crimes are not viewed as posing much of a social threat, IPR enforcement is “pushed to the background” by Russian prosecutors. The U.S. government has taken several actions in Washington, D.C., and Moscow to address its concerns over Russia’s failure to fully protect IPR. Russia has been placed on USTR’s Special 301 Priority Watch List for the past 8 years (1997 through 2004). Further, a review of Russia’s eligibility under the Generalized System of Preferences (GSP) is underway owing to concerns over serious IPR problems in the country. The U.S. government has actively raised IPR issues with the Russian government, including at the highest levels. According to the Department of State, at a United States–Russia summit in September 2003, President Bush raised IPR concerns with Russian President Putin. Further, in Moscow, the U.S. Ambassador to Russia considers IPR an embassy priority and has sent letters to Russian government officials and published articles in the Russian press that outline U.S. government concerns. Many agencies resident in the U.S. Embassy in Moscow are engaged in IPR issues. The Department of State’s Economic Section is the Embassy office with primary responsibility for IPR issues. This office collaborates closely with USTR and holds interagency embassy meetings to coordinate on IPR efforts. In addition to interagency communication through these meetings, each agency is also engaged in separate efforts. For example, the Economic Section has met regularly with Russian government officials to discuss IPR issues. Justice has held two training events on IPR criminal law enforcement in 2004, and has two more events planned for this year, while the Embassy’s Public Affairs Office is involved with IPR enforcement exchange and training grants. Further, the Department of Commerce’s Foreign Commercial Service works with U.S. 
companies on IPR issues and sponsored a 2003 seminar on pharmaceutical issues, including IPR-related topics. According to a Justice official, U.S. law enforcement agencies are making efforts to build relationships with their Russian counterparts. Industry representatives whom we interviewed in Moscow expressed support for U.S. government efforts to improve intellectual property protection, particularly the U.S. Ambassador’s efforts to increase the visibility of IPR problems. An official from one IPR association in Moscow noted, with respect to USTR’s efforts in Russia, “No other country in the world is so protective of its copyright industries.” Industry representatives noted that the U.S. government has played an important role in realizing IPR improvements in Russia, although the Russian government is also clearly motivated to strengthen intellectual property protections as part of its preparation for joining the World Trade Organization. Further, U.S. Embassy staff believe that they have been successful in ensuring that IPR is now firmly on the “radar screen” of the Russian government. According to U.S. sources, numerous IPR laws have been enacted. For example, the Department of State has noted that the Russian government has passed new laws on patents, trademarks, industrial designs, and integrated circuits and has amended its copyright law. Further, U.S. and Russian sources note that Russia has improved its customs and criminal codes. Moreover, in 2002, the Russian government established a high-level commission, chaired by the prime minister, specifically to address intellectual property problems (although, despite a recognized desire to address IPR enforcement, the commission has reportedly not accomplished a great deal in terms of concrete achievements). In addition to these promising improvements, there have been some signs that enforcement is improving, if slowly. 
For example, the Russian government issued a decree banning the sale of audio and video products by Russian street vendors, and the U.S. Embassy has reported that subsequently several kiosks known to sell pirated goods were closed. Industry associations have reported that law enforcement agencies are generally willing to cooperate on joint raids, and in 2003 several large seizures were made as a result of such raids. Further, in February 2004 the Russian Anti-Piracy Organization reported that police raids involving optical media products took place almost daily all over Russia and were covered widely on national TV channels. In addition, according to the U.S. Embassy, the consumer products industry reports progress in reducing the amount of counterfeit consumer goods on the Russian market, and one major U.S. producer even claims that it has virtually eliminated counterfeiting of all its consumer goods lines. Finally, according to a U.S. Embassy official, the first prison sentence was handed down during the summer of 2004 for an IPR violator who had been manufacturing and distributing pirated DVDs. U.S. and Russian officials have identified several problems that the Russian government faces in implementing effective IPR protection in the future. Issues identified include: (1) the price of legitimate products is too high for the majority of Russians, who have very modest incomes; (2) Russian citizens and government officials are still learning about the concept of private IPR—a Russian Ministry of Press official pointed out that until the dissolution of the Soviet Union, all creations belonged to the state, and the general public and the government didn’t understand the concept of private IPR; and (3) corruption and organized crime make the effective enforcement of IPR laws difficult. Ukraine has been the subject of intense industry and U.S. 
government concern since 1998 owing primarily to the establishment of pirate optical media plants that produced music, video discs, and software for the Ukrainian market and for export to other countries. This followed the crackdown on pirate plants in Bulgaria in 1998 that resulted in many of these manufacturers relocating to Ukraine. Regarding Ukraine, USTR cites U.S. music industry losses of $210 million in revenues in 1999, while the Motion Picture Association reported losses of $40 million. The international recording industry association estimated that the production capacity of optical media material was around 70 million units per year and the demand within Ukraine for legitimate CDs was fewer than 1 million units in 2000. Further, the audio and video consumer market in Ukraine has consisted overwhelmingly of pirated media. For example, in 2000, the international recording industry association estimated that 95 percent of products on the market were pirated. Further, USTR and industry cite significant counterfeiting of name brand products, pharmaceuticals, and agricultural chemicals. By 2004, IPR protection in Ukraine has shown improvement in several areas, although the digital media sold in the consumer retail market remain predominantly pirated. The production of such digital media in local plants has ended, however, according to U.S. government and industry officials in Kiev. Further, U.S. officials noted Ukraine’s accession to key WIPO conventions and improvements in intellectual property law that represent progress in fulfilling TRIPS requirements as part of Ukraine’s WTO accession process. Remaining areas of concern regarding U.S. IPR are inadequacies in the existing optical media licensing law and the fact that Ukraine remains a key transit country for pirated products.
Other areas of concern are the prevalence of pirated digital media products in the consumer retail markets, lack of law enforcement actions, and the use of illegal software by government agencies (although this situation has also improved). U.S. industry and government now seek certain amendments to intellectual property laws and better enforcement efforts, including border controls to prevent counterfeit and pirated products from entering the Ukrainian domestic retail market. The U.S. government has undertaken concerted action in Washington and Kiev to address its concerns regarding the state of intellectual property protection in Ukraine. With the emergence of serious music and audio-visual piracy, Ukraine was placed on USTR’s Special 301 Watch List in 1998. Ukraine was elevated to USTR’s Special 301 Priority Watch List for 2 years, in 1999 and 2000. In June 2000, during President Clinton’s state visit to Kiev, he and President Kuchma endorsed a U.S.-Ukrainian joint action plan to combat optical media piracy. However, slow and insufficient response by Ukraine led to its designation as a Priority Foreign Country in 2001 and to the imposition of punitive economic sanctions (100 percent duties) against Ukrainian exports to the United States valued at $75 million in 2002. The Priority Foreign Country designation remains in place. The sanctions affect a number of Ukrainian exports, including metal products, footwear, and chemicals. In addition, a U.S. government review of Ukraine’s eligibility for preferential tariffs under the GSP program was undertaken, and Ukraine’s benefits under this program were suspended in August 2001. GSP benefits have not been reinstated. In Kiev, intellectual property issues remain a priority for the U.S. Embassy, including the U.S. Ambassador. A State Department economic officer has been assigned responsibility as the focal point for such issues and has been supported in this role by the actions of other U.S. agencies.
The Commercial Law Center, funded by USAID, and the Commercial Law Development Program of the U.S. Department of Commerce have provided technical advice to Ukraine as it crafted intellectual property laws. A U.S. private sector association reported that it had worked closely with USAID on projects related to commercial law development. Ukrainian legislative officials reported that training opportunities and technical assistance provided by the United States had facilitated the creation of IP legislation. Training is also focused on enforcement, including training of a Ukrainian judicial official by USPTO in Washington, D.C., during 2003. The State Department has trained police and plans further police training in Ukraine during 2004. Further, Department of Commerce officials maintain contact with U.S. firms and collect information on intellectual property issues for State and USTR. Ukraine has made improvements in its legal regime for IPR protection. According to Ukrainian officials, Ukraine passed a new criminal code with criminal liability for IPR violations, as well as a new copyright law. Ukrainian officials report that the laws are now TRIPS compliant. U.S. government documents show that Ukraine implemented an optical disk law in 2002, although it was deemed “unsatisfactory,” and sanctions remain in place based on Ukraine’s failure to enact and enforce adequate optical disk media licensing legislation. In addition, Ukraine has pursued enforcement measures to combat counterfeiting, although enforcement overall is still considered weak. USTR reported that administrative and legal pressure by the Ukrainian government led to the closure of all but one of the major pirate CD plants. Some pirate plants moved to neighboring countries. According to U.S. and private sector officials in Kiev, remaining optical plants have switched to legitimate production. 
However, pirated optical media are still prevalent in Ukraine, imported from Russia and elsewhere, with little effort to remove them from the market. In a visit to the Petrovska Market in Kiev, we found a well-organized series of buildings where vendors sold movies, music, software, and computer games from open-air stands. The price for a pirated music CD was $1.50, compared to legitimate CDs that were sold for almost $20 in a music store located near the market. According to USTR, Ukraine is a major trans-shipment point and storage location for illegal optical media produced in Russia and elsewhere. A Ukrainian law enforcement official reported that the number of IPR crimes detected has risen from 115 in 2001 to 374 in 2003. He noted that to date, judges have been reluctant to impose jail time, but had used fines that are small compared to the economic damages. A U.S. government official also reported that the fines are too small to be an effective deterrent. While one U.S. company told us about the lack of Ukrainian government actions regarding specific IPR enforcement issues, a large U.S. consumer goods company told us that consumer protection officials and tax police had worked with it to reduce counterfeit levels of one product line from approximately 40 percent in 1999 to close to zero percent 16 months later. The company provided 11 laboratory vans as well as personnel that could accompany police to open markets and run on-the-spot tests of products. The following are GAO’s comments on the Department of Commerce’s letter dated August 20, 2004. 1. We have reviewed the report to ensure that the term “counterfeiting” is used to refer to commercial-scale trademark-related infringements of a good or product and the term “piracy” is used to refer to commercial-scale infringements of copyright-protected works. 2.
While we do not discuss “advocacy” separately in this report, this type of effort has been addressed in the policy initiatives section of the report, specifically in the discussion entitled “U.S. Officials Undertake Diplomatic Efforts to Protect Intellectual Property” (see p. 18). We note that U.S. government officials overseas, including officials from the Department of Commerce, work with U.S. companies and foreign governments to address specific IPR problems. We have also included a particular example involving Department of Commerce efforts to resolve problematic issues related to proposed Mexican legislation that involved the pharmaceutical industry. We have also added another reference to advocacy efforts on page 27. 3. We chose to emphasize IPR-specific agreements, bilateral trade agreements, and free trade agreements in our report (discussion entitled “U.S. Government Engages in IPR-Related Trade Negotiations”) because USTR officials consistently cited these agreements as central components of their IPR efforts. However, we do note the negotiation of trade and investment framework agreements in footnote 24 of the report. 4. The efforts of the Department of Commerce’s International Trade Administration (ITA) are cited in our report. The report does not specifically list the ITA, as we intentionally kept the discussion for all government entities at the “departmental” level (with a few exceptions for entities that have distinct responsibilities, such as the FBI and USPTO) without mentioning the numerous bureaus and offices involved for each department. This approach was adopted to keep the report as clear as possible for the reader. While the report does not specifically attribute Commerce’s IPR efforts to ITA, several examples of Commerce’s efforts that are listed in the report are, in fact, ITA activities. 
For example, in addition to the activities cited in point 2 above, Commerce (meaning ITA) is also mentioned as a participant in annual GSP and Special 301 reviews (see pp. 12 and 32), and as a participant in IPR efforts in the report’s China, Russia, and Ukraine appendixes. Further, we have specified that Commerce (meaning ITA), along with USTR, is the administrator for the private sector trade advisory committee system (p. 15). The following are GAO’s comments on the Department of Homeland Security’s letter dated August 24, 2004. 1. We have added a paragraph citing the Department of Homeland Security’s work with the World Customs Organization (see p. 17). 2. We added language on p. 22 of the report that notes that a key component of DHS authority is a “border nexus.” The following are GAO’s comments on the U.S. Agency for International Development’s letter dated August 19, 2004. 1. We agree with USAID’s point that IPR protection and enforcement are not the primary responsibility of the agency. USAID and the other 9 U.S. government entities mentioned in the report have broader missions. Rather, we state that USAID and the other U.S. government entities undertake the primary U.S. government activities to improve the protection and enforcement of U.S. intellectual property overseas. 2. As we noted in the report, the decentralized structure of USAID, whereby individual country missions plan and implement training, makes it difficult for Washington-based officials to contribute timely information to the public training database or to inform the Training Coordination Group about USAID’s training efforts. Further, several members of the Training Coordination Group are frustrated with the extent of USAID's information sharing. 3. As we note in the report, USAID submits information annually following the conclusion of its own data-gathering exercise. 
However, this data-gathering exercise, which contributes to the USAID trade capacity building database, does not provide information needed by the Training Coordination Group, such as dates of training or contact information, that would improve coordination. In addition to those named above, Sharla Draemel, Ming Chen, Martin de Alteriis, Matt Helm, Ernie Jackson, Victoria Lin, and Reid Lowe made key contributions to this report.
Although the U.S. government provides broad protection for intellectual property, intellectual property protection in parts of the world is inadequate. As a result, U.S. goods are subject to piracy and counterfeiting in many countries. A number of U.S. agencies are engaged in efforts to improve protection of U.S. intellectual property abroad. This report describes U.S. agencies' efforts, the mechanisms used to coordinate these efforts, and the impact of these efforts and the challenges they face. U.S. agencies undertake policy initiatives, training and assistance activities, and law enforcement actions in an effort to improve protection of U.S. intellectual property abroad. Policy initiatives include assessing global intellectual property challenges and identifying countries with the most significant problems--an annual interagency process known as the "Special 301" review--and negotiating agreements that address intellectual property. In addition, many agencies engage in training and assistance activities, such as providing training for foreign officials. Finally, a small number of agencies carry out law enforcement actions, such as criminal investigations involving foreign parties and seizures of counterfeit merchandise. Agencies use several mechanisms to coordinate their efforts, although the mechanisms' usefulness varies. Formal interagency meetings--part of the U.S. government's annual Special 301 review--allow agencies to discuss intellectual property policy concerns and are seen by government and industry sources as rigorous and effective. In addition, a voluntary interagency training coordination group meets about once a month to discuss and coordinate training activities.
However, the National Intellectual Property Law Enforcement Coordination Council, established to coordinate domestic and international intellectual property law enforcement, has struggled to find a clear mission, has undertaken few activities, and is generally viewed as having little impact. U.S. efforts have contributed to strengthened intellectual property legislation overseas, but enforcement in many countries remains weak. The Special 301 review is widely seen as effective, but the impact of actions such as diplomatic efforts and training activities can be hard to measure. U.S. industry has been supportive of U.S. actions. However, future U.S. efforts face significant challenges. For example, competing U.S. policy objectives take precedence over protecting intellectual property in certain regions. Further, other countries' domestic policy objectives can affect their "political will" to address U.S. concerns. Finally, many economic factors, as well as the involvement of organized crime, hinder U.S. and foreign governments' efforts to protect U.S. intellectual property abroad.
Congress enacted a version of the alcohol occupational taxes over 200 years ago. This tax was repealed in 1817, but alcohol occupational taxes were again instituted in the 1860s to generate revenue for the Civil War. The current taxes essentially remained unchanged from 1950 until Congress passed the Omnibus Budget Reconciliation Act of 1987. With this act, Congress raised the rates to their current levels in response to the President’s proposal that direct beneficiaries of the regulatory provisions pay a greater share of the cost incurred to administer the SOT program. In July 1987, ATF assumed the responsibility for administering the alcohol SOT program from the Internal Revenue Service (IRS). There are separate occupational taxes for alcohol producers, wholesalers, and retailers. Each tax is a fixed amount per business location per year. The per location tax is $1,000 for large producers and $500 for small producers who grossed less than $500,000 the previous year. Producers include distillers, breweries, wineries, wine-bottling houses, and bonded wine cellars and warehouses. Wholesalers are required to pay a $500 occupational tax for each location. Retailers are required to pay a $250 occupational tax for each operating location. Retailers, who make up the largest group of alcohol SOT taxpayers, cover a wide variety of businesses—for example, liquor stores, bars, restaurants, sports facilities, grocery stores, convenience stores, airlines, caterers, and hotels. Alcohol businesses are required to obtain a special tax stamp from ATF for each operating location before commencing business. These businesses are required to obtain the special tax stamp on or before July 1 if they are to continue operating. Businesses must file a special tax renewal registration and return and pay the appropriate taxes to obtain the stamps. (App.
I contains information on the annual special tax registration and return process.) The stamps must be available for inspection as proof of payment, are nontransferable, and are valid for 1 tax year. The SOT tax year begins on July 1 and ends on the following June 30. Under provisions of the Internal Revenue Code (IRC), retailers are required to keep specific records of the distilled spirits, wine, or beer received showing the quantity, source, and date of all shipments received on their premises. Retailers are also required to keep records for each sale of 20 gallons, or more, of any alcoholic beverage sold to the same person at the same time. Failure to comply with the alcohol SOT provisions can result in the assessment of civil and criminal penalties against the proprietors. The civil penalties are the failure-to-file penalty and the failure-to-timely-pay penalty, both of which are limited to 25 percent of the amount due. (App. II contains more information on the civil penalties and interest.) Any person who willfully fails to comply with the alcohol SOT provisions is subject to criminal penalties under section 5691 of the IRC. This section allows fines of up to $5,000 or imprisonment for up to 2 years, or both, for each offense. The alcohol industry is a heavily regulated industry. ATF administers a system that regulates businesses according to their function as producers, wholesalers, and retailers and requires ATF to keep track of who is operating as a producer, wholesaler, and retailer. The regulation of alcohol businesses by function is a feature of federal and state laws, which govern the production and distribution of alcohol. Producers and wholesalers are required to obtain federal permits from ATF to operate. Federal law does not require retailers to qualify for or to obtain a federal permit. Retailers are licensed by the states and some local jurisdictions. 
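The per-location tax rates and the 25-percent civil penalty caps described above can be illustrated with a short worked calculation. The sketch below is illustrative only: the rate labels and the simplified penalty function are our assumptions for the example, not ATF's actual computation method.

```python
# Illustrative sketch of the alcohol SOT amounts described in the report.
# The rate table reflects the per-location taxes cited above; the penalty
# function is a simplified reading of the 25-percent statutory caps.

SOT_RATES = {
    "producer_large": 1000,   # producers grossing $500,000 or more the previous year
    "producer_small": 500,    # producers grossing less than $500,000
    "wholesaler": 500,
    "retailer": 250,
}

def sot_due(business_type: str, locations: int) -> int:
    """Annual special occupational tax: a fixed amount per operating location."""
    return SOT_RATES[business_type] * locations

def max_civil_penalty(tax_due: float) -> float:
    """The failure-to-file and failure-to-timely-pay penalties are each
    limited to 25 percent of the amount due."""
    return 0.25 * tax_due

# A retailer operating 3 locations owes 3 x $250 = $750 per tax year.
due = sot_due("retailer", 3)
print(due)                     # 750
print(max_civil_penalty(due))  # 187.5
```

As the example shows, a multi-location business's liability scales linearly with its number of locations, which is why the annual return lists every operating location.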
To identify the methods ATF uses to enforce compliance with SOT provisions, we discussed ongoing compliance programs with ATF officials. We reviewed samples of information prepared by ATF for public release and for inclusion in the annual registration and tax return packages. We met with industry representatives to get their views on the adequacy and availability of alcohol SOT information provided by ATF. We discussed the matching of federal and state data on retailers with ATF officials. We discussed the assessment of civil and criminal penalties with ATF officials and reviewed data on cases where ATF had imposed civil and criminal penalties. We discussed the value of the occupational tax provisions as an enforcement tool with officials from ATF’s Diversion Branch, Revenue Division, and Office of General Counsel. To identify the compliance rates for producers, wholesalers, and retailers, we reviewed fiscal year 1998 compliance information provided by ATF officials and discussed the completeness, limitations, and sources of this information. We reviewed compliance estimates for retailers reported by the IG in 1996 and discussed methodological and data limitations with the audit manager for the study. To determine the costs of collecting the special occupational taxes and alcohol excise taxes, we obtained cost information from ATF officials and discussed their methods for determining the costs of collection activities. To determine the arguments for and against continuing the alcohol SOTs, we reviewed legislative histories, our previous reports, alcohol industry publications, IG reports, and Congressional Research Service reports. We interviewed Treasury and ATF officials and industry representatives to obtain their views on the various arguments that have been made for and against the alcohol SOTs. We did our work from February through May, 1998, in accordance with generally accepted government auditing standards. 
We requested comments on a draft of this report from the Secretary of the Treasury and the Director of ATF or their designees. Their oral and written comments are summarized near the end of the letter. ATF has implemented a combination of efforts to enforce compliance with the alcohol SOTs. These efforts include (1) sending known alcohol businesses their annual registration and stamp renewal returns, (2) matching ATF and state information on retailers, (3) publicizing occupational tax information, (4) licensing producers and wholesalers, (5) assessing civil and criminal penalties and interest, and (6) verifying SOT compliance during on-site inspections. ATF officials believe that it would not be cost-effective to commit additional resources to enforcement. In May of each year, ATF sends the special tax renewal registration and return forms to alcohol producers, wholesalers, and retailers known to the Bureau. Alcohol businesses known to ATF include producers and wholesalers who have obtained federal operating permits from ATF and retailers who paid SOTs in previous years or were identified through other means by the Bureau. ATF mails the special tax renewal registration and return form to the registered address of the primary business and shows the total amount of SOT due for all operating locations listed on the form. With the special tax registration and renewal form, ATF includes a letter to the alcohol businesses advising them to report changes in ownership and discontinued operations. ATF also at that time advises the businesses that it may assess penalties and interest if they are liable for the special tax and do not pay on a timely basis. (App. I contains additional information on the special tax stamp registration and return process.) 
If the taxpayer is liable for the special tax and does not pay in a timely fashion, ATF is to follow up with correspondence that advises the taxpayer of the additional interest and penalties and that further failure to comply may result in legal proceedings. ATF receives lists of alcohol retailers from all but five states. Contract staff at ATF’s National Revenue Center in Cincinnati are to manually compare the names and addresses of the businesses reported by the states as licensed alcohol retailers with the names and addresses of retailers listed in the SOT master file—a federal database of businesses that have paid SOTs in previous years or are otherwise known to ATF. By comparing the two sets of information, ATF can identify businesses that were listed by the states as licensed alcohol retailers but were not shown in the SOT master file as having paid the annual occupational taxes. The National Revenue Center is to send an information package to each of the nonmatched retailers. This package includes an ATF flyer, a special tax information sheet, and the special tax renewal registration and return. This information explains the SOT requirements for alcohol retailers and wholesalers. The flyer informs the retailer that it is being notified because a state or local jurisdiction has issued it a license to sell alcoholic beverages. The flyer advises the retailer that it must file a tax return and pay the occupational tax and that failure to do so could result in costly penalties and interest. Retailers that have not engaged in, or are not currently engaged in, the sale of alcoholic beverages are instructed to report this so that the Bureau can update the retailer’s account and not mail additional notices. Otherwise, ATF is to update the amounts due and continue to contact the retailer by mail for up to 3 years to get compliance. ATF does not believe that it is cost-effective to routinely go beyond this correspondence to ensure compliance. 
ATF officials believe that informing the public about the SOT requirements improves compliance. We reported in 1990 that many retailers said they did not comply with the SOT provisions because they were not aware of the requirements. Alcohol industry representatives believe that there are some retailers who may be unaware of this tax obligation. ATF uses several methods to inform the public about the SOT requirements. For the tax year 1999 filing season, ATF issued an April 3, 1998, news release to remind businesses of the July 1, 1998, deadline for the alcohol SOT. The news release was placed on ATF’s Web site (www.atf.treas.gov) with an authorizing note that allows editors to print the information in organizational magazines, periodicals, and newsletters. ATF has issued a similar release annually, just prior to the filing period. ATF has placed on the Web site copies of the special tax renewal and registration return and instructions, which can be downloaded for filing or informational purposes. Also on the Web site is a pamphlet entitled Liquor Laws and Regulations for Retail Dealers, which provides an overview of the SOT and other alcohol requirements pertinent to retail operations. For the 1999 tax year, which began July 1, 1998, ATF sent its SOT news release to 412 public affairs offices and general public addressees; 78 media addressees; and 248 trade associations, societies, and state addressees. The trade associations list included a variety of organizations, such as state alcohol control and licensing organizations, the National Bar Association, the National Tax and Bookkeeping Services, the American Beverage Institute, the American Hotel and Motel Association, and state and national organizations of wholesalers and retailers. ATF has produced several different flyers for distribution at trade shows and fairs. One flyer gives a sample listing of businesses that may be subject to the alcohol occupational taxes. 
Another flyer advises recipients that if they sell beer, wine, or liquor, they may owe federal occupational taxes. This flyer gives the tax rates for retailers and wholesalers, explains the type of sales to which the tax applies, notes the tax due date, and gives the telephone number for the Tax Processing Center in Cincinnati and a toll-free number the taxpayer may call for additional information. ATF also provides another flyer with toll-free numbers for its National Revenue Center and the Tax Processing Center. ATF has more control over alcohol producers’ and wholesalers’ compliance with the SOT requirements than over retailers’ compliance because ATF issues the federal permits for these businesses to operate and the universe of producers and wholesalers is relatively small. Alcohol businesses with federal permits are required to comply with all federal laws and regulations, including SOT requirements, and ATF can revoke their permits or charge them with fraud if they fail to do so. Tax year 1998 ATF data showed that there were about 18,000 registered alcohol producers and wholesalers, compared to about 372,000 known retail entities. ATF officials believe that they can ensure greater compliance among producers and wholesalers than retailers because they have more administrative control over this smaller group of alcohol businesses. ATF can assess civil penalties and interest for failure to file the annual SOT registration and return form and/or pay the taxes due. While civil penalties are limited by statute to not more than 25 percent of the taxes due, there is no limit on the amount of interest taxpayers may incur for unpaid SOTs. (App. II contains more information on computing civil penalties and interest.) 
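As a rough illustration of the civil penalty arithmetic, the following sketch assumes the standard IRC section 6651 rates (5 percent per month for failure to file and 0.5 percent per month for failure to pay, each capped at 25 percent of the tax due); the $250 tax amount is hypothetical, and the authoritative computation, including how the two penalties interact, is the one described in App. II.

```python
# Simplified sketch of the two civil penalties, each capped at 25 percent
# of the tax due. Monthly rates assume IRC section 6651; App. II of the
# report has the authoritative rules (e.g., interaction of the penalties).

def civil_penalties(tax_due, months_late):
    """Return (failure-to-file, failure-to-pay) penalties in dollars."""
    file_pen = min(0.05 * months_late, 0.25) * tax_due    # 5% per month, 25% cap
    pay_pen = min(0.005 * months_late, 0.25) * tax_due    # 0.5% per month, 25% cap
    return round(file_pen, 2), round(pay_pen, 2)

# A hypothetical dealer owing a $250 annual SOT, paid 12 months late:
print(civil_penalties(250.0, 12))  # (62.5, 15.0) -- filing penalty hit the cap
```

Note that interest, unlike the two penalties, carries no statutory cap, so it is left out of this sketch.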
Examination of SOT revenue data for fiscal year 1995 showed that ATF assessed and collected about $972,000 in failure-to-file penalties, about $164,000 in failure-to-pay penalties, and about $410,000 in interest from alcohol, tobacco, and firearm businesses. The total penalty and interest amounts accounted for about 1.4 percent of the total SOT payments for the fiscal year. ATF was unable to separate the penalty and interest amounts collected for the three business categories but estimated that over 90 percent of the tax, penalty, and interest amounts were from the alcohol SOTs. ATF also has authority to pursue criminal penalties to enforce SOT compliance. Businesses and individuals can be fined up to $5,000 or imprisoned up to 2 years, or both, for each willful failure to comply with the SOT requirements. ATF uses these criminal penalties to enforce compliance with wholesale and retail operating requirements. For example, current law prohibits a retailer from selling to other retailers or from operating as a wholesaler. Review of criminal data file information showed that ATF has a history of pursuing criminal penalties to combat the black-market sale of alcohol. Alcohol businesses are required to have current Special Tax Stamps available for ATF inspection as proof that they have paid the required SOTs for their operating locations. ATF officials informed us that field office inspectors who routinely monitor compliance with alcohol laws and regulations also verify compliance with SOT requirements during visits to alcohol businesses, primarily alcohol producers and wholesalers. ATF inspectors who discover businesses that are not in compliance with SOT provisions are to report this information to the National Revenue Center. Following up on this information, ATF staff at the Center are required to notify the business to get compliance. They are to do this through correspondence with the noncompliant business. 
The correspondence includes SOT requirement information and the tax return that the business needs to file. The staff are to continue corresponding with the business for 3 consecutive years in an effort to get compliance with the SOT provisions. ATF does not believe that it is cost-effective to conduct site visits solely for SOT compliance. Both ATF and the Treasury’s IG estimated SOT compliance rates. The two offices used different data and methods when computing their estimates, and those data and methods leave the accuracy of both offices’ estimates uncertain. The audit work needed to quantify the potential error in these estimates was beyond the scope of our study. The two offices also used different definitions of compliance. ATF’s definition covered the timely filing of the annual return and the timely payment of taxes due in response to ATF’s annual renewal notification. The IG’s definition covered only the payment of tax when a tax liability existed. ATF estimated that, as of April 3, 1998, 93 percent of the producers with permits, 95 percent of the wholesalers with permits, and 89 percent of the retailers known to ATF filed timely returns and timely paid the taxes due for tax year 1998. The noncompliant taxpayers were those known to ATF that did not respond to the annual notification process and were not identified by ATF as being out of business. ATF determined that about 0.03 percent of the producers, 3 percent of the wholesalers, and 4 percent of the retailers did not respond because they were out of business. The rates of compliance for all alcohol businesses could be lower than ATF’s estimated rates because the latter cover only alcohol businesses known to ATF—producers and wholesalers with federal permits and federally registered alcohol retailers that have paid SOTs in the past or have been identified by ATF. ATF officials acknowledged that they have not identified all alcohol businesses. 
They could not estimate the number of illegal producers and wholesalers that might be operating without federal permits as moonshiners and bootleggers. Registered alcohol businesses are required to certify that all operating locations have been correctly reported to ATF, but ATF does not verify that this has been done. However, the Bureau is confident that there is high compliance among producers and wholesalers because ATF issues federal permits for these alcohol businesses to operate and closely monitors their operations. In addition, alcohol producers and wholesalers account for a small number of businesses. ATF could not estimate the number of retailers not known to ATF who had not paid their occupational taxes. SOT compliance among retailers is more difficult to manage because they do not need federal permits to operate, have a high turnover rate, and account for a large universe of business entities. The IG reported an estimated average compliance rate of about 83 percent for retailers over tax years 1993, 1994, and 1995. The IG made this estimate by comparing ATF data on the total number of SOT stamps issued in each state and the District of Columbia with the total number of retail operating locations where alcohol is sold, as reported by the state and District licensing officials whom it surveyed. A total of 43 states and the District provided usable data. Because the IG simply compared the federal government’s count of all retail locations in a given state with that state’s own count, rather than doing a detailed matching of specific retail locations, it could have overestimated or underestimated the rate of compliance. The IG did not verify whether the states followed its instructions for enumerating the number of retail locations. 
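In effect, the IG's aggregate method divides the statewide count of stamps issued by the statewide count of licensed locations. A minimal sketch, with entirely hypothetical state counts (not IG data):

```python
# Hypothetical statewide counts, for illustration only.
stamps_issued = {"State A": 8_300, "State B": 4_100}        # ATF: SOT stamps
licensed_locations = {"State A": 10_000, "State B": 4_000}  # states: retail licenses

compliance_rate = sum(stamps_issued.values()) / sum(licensed_locations.values())
print(f"{compliance_rate:.0%}")  # 89% across both hypothetical states

# Note hypothetical State B: more stamps than licensed locations, as five
# states reported in the IG study -- a hint that the two counts may describe
# different populations, which can bias the aggregate rate either way.
```

Because the division uses statewide totals rather than a location-by-location match, a mismatch in either count shifts the estimated rate without identifying which specific retailers are noncompliant.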
The IG requested data from each state that (1) included all locations that were in operation at any time during the SOT tax years and (2) did not include locations that had ceased operations before the start of each SOT tax year. States may or may not have been able to produce enumerations from their information systems that met these exact criteria. Given the high rate of turnover among alcohol retailers, if the state data did not cover the time periods set by the IG, then those data could overestimate or underestimate the number of locations liable for tax. The fact that five states reported fewer retail locations than the ATF data showed had tax stamps for tax years 1993, 1994, and 1995 suggests that at least some states’ data did not accurately represent the occupational taxpayer population. Also, like ATF’s estimate, the IG’s estimate did not cover retail locations that operate without the required state or local licenses, such as illegal after-hour clubs. The reliability of the ATF and IG estimates is difficult to assess without a more detailed examination of the methods used and data collected. To evaluate the accuracy of the state data, one would need to know what methods the states used to collect and verify their statewide data. One would also have to estimate the number of unlicensed retailers that do not appear on any records. Because such analyses were beyond the scope of our review, we cannot say how accurate ATF’s and the IG’s compliance rates may be. Supporters of the SOTs have justified the taxes as a general source of revenue and as providing revenues intended to offset the costs of regulating the alcohol industry. ATF officials believe that the authority provided by the SOTs to enter the premises of dealers and require them to keep certain records has facilitated ATF’s efforts to enforce the laws and regulations governing the alcohol industry. ATF is concerned that the agency could lose necessary enforcement tools if the SOTs are eliminated. 
Historically, supporters have justified the SOTs as a general source of revenue. Congress reinstated the SOTs in the 1860s to raise revenue. More recently, supporters have also justified the SOTs as providing revenues intended to recoup the federal costs of regulating the alcohol industry. Congress enacted special occupational tax rate increases under the Omnibus Budget Reconciliation Act of 1987 so that the beneficiaries of ATF regulation would pay a greater share of the costs of regulation. In addition to regulatory costs, economists believe that taxes on alcohol, such as the SOTs, may be used to offset the social cost of alcohol abuse. The SOTs, repealed in 1817 and reinstated in the 1860s, have long been a source of revenue for the federal government’s General Fund. Under the Budget Enforcement Act of 1990 as amended, Congress must offset the budget impact of tax legislation that would reduce revenue. Eliminating the SOTs, or changing the SOTs in ways that reduce revenue, would require that Congress identify and enact revenue increases and/or spending reductions. For example, an increase in alcohol excise taxes has been suggested to offset revenue losses from repeal of the alcohol SOTs, and an increase in SOTs paid by producers and wholesalers has been proposed to offset revenue losses from repeal of the SOT on retailers. We have not evaluated these or other tax and spending alternatives. Supporters of the SOTs have justified the taxes as payments by the industry for the benefits that they claim the industry receives from regulation. If ATF’s regulatory activities benefit the industry, SOT revenue may offset the costs of providing these benefits. ATF’s regulatory activities, such as operating laboratories for testing and labeling alcoholic products, may benefit the industry if, by assuring consumers of the safety and quality of those products, the activities increase demand for alcohol products. 
ATF’s law enforcement activities may benefit the industry, for example, by protecting the industry from the influence of organized crime. Economists have justified taxes on alcohol as providing revenues to recoup the social cost of alcohol abuse. Although this justification is usually made for alcohol excise taxes, both the excise taxes and the SOTs can be viewed as offsetting the costs to the government and society of alcohol abuse. People who abuse alcohol may use certain government programs, such as government-provided health-care and criminal justice services, more than nonabusers. People who abuse alcohol also impose costs on other members of society, such as the lives and property lost in alcohol-related traffic accidents. The SOTs are not well designed to reflect the benefits received by the taxpayer, the cost to the government of providing the benefits, or the costs to society. First, the SOTs are not likely to reflect how much individual taxpayers may benefit from ATF’s regulatory activities because each alcohol retailer, wholesaler, and producer (collectively known as dealers) pays the same amount of tax for each premise. To the extent that dealers benefit from ATF’s activities, the benefits are likely to vary considerably across premises because profits are likely to vary considerably from one location to another. Second, the tax rates are not likely to reflect the current costs of regulation because they are rarely changed. Before 1987, the rates had not been changed since the 1950s. Although rates were increased in 1987 for the stated purpose of recouping regulatory costs, SOT revenues may have been higher or lower than regulatory costs in 1987, and the rates have not been changed since 1987 to reflect any changes in costs. Finally, the revenue from SOTs is small relative to total federal excise taxes on alcohol, and therefore, their role in offsetting the regulatory or social costs associated with alcohol is likely to be small. 
The SOTs have been justified as facilitating ATF’s enforcement efforts by giving the agency the authority to enter the premises of alcohol dealers and to require that the dealers keep certain records. ATF officials said that ATF uses this authority in its efforts to control the alcohol distribution system, prevent illegal sales of alcohol, and enforce all federal taxes on alcohol. ATF officials believe that the access and recordkeeping authority provided by the SOTs is necessary for ATF enforcement efforts. The SOTs allow ATF inspectors entry into establishments, which permits them to inspect for other violations. Provisions of the IRC permit ATF inspectors to enter premises to examine records, documents, and any alcohol stored on the premises. Once on the premises, ATF officials say that inspectors are able to check for nonpayment of the SOTs and other violations. For example, alcohol dealers are subject to fine and/or imprisonment for refilling or reusing liquor bottles. Inspectors can check for this violation, in which dealers refill bottles of more expensive brands with cheaper liquor. ATF officials note that ATF has access to producers and wholesalers as part of its licensing and inspection authority. Except for the provisions of the IRC that are related to SOTs, ATF has only limited authority over retailers and no access to retailers’ premises. The SOTs give ATF the authority to require that retailers and wholesalers keep records that help ATF control the alcohol distribution system. Under provisions of the IRC related to SOTs, retailers are required to record all of their purchases of alcohol and sales of alcohol of 20 gallons or more to the same person at the same time. The records must include the name and address of those from whom they purchased or to whom they sold the alcohol. Wholesalers are required to keep similar records for all of their purchases and sales. 
These records of transactions between dealers may help ATF enforce laws and regulations throughout the distribution system. For example, ATF officials say that the requirement that retailers record individual sales of 20 gallons or more helps ATF identify retailers who are operating as wholesalers by selling to other retailers. Retailers not paying the SOTs and unknown to ATF can be identified from the sales records of wholesalers, and the records of wholesalers and retailers can be used to trace transactions between dealers to check for payment of excise taxes. According to ATF officials, the SOTs are also useful to ATF for identifying retailers who owe floor-stock taxes. The floor-stock taxes are imposed on inventories when alcohol excise tax rates are increased and are generally equal to the difference between the old and new tax rates. ATF officials believe that the SOTs are useful in diversion cases to control distribution and enforce taxes. Diversion occurs when alcohol is sold at an illegal destination, rather than the legal destination stated on the required federal form, to evade federal and state excise taxes. There are two kinds of diversion. Export diversion occurs when a dealer claims that alcohol is exported but actually sells the alcohol domestically. The dealer avoids an excise tax on this alcohol because excise taxes are not imposed on exports. Domestic diversion occurs when a dealer purchases alcohol in a low-tax jurisdiction and smuggles the alcohol to a jurisdiction with higher excise taxes for illegal sales. Provisions related to the SOTs can be used by ATF to combat both kinds of diversion. ATF officials say that ATF uses its access to records required by the SOTs to detect diversion by reviewing sales of unusually large volumes of alcohol and sales in certain types of containers that are easier to divert. 
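The 20-gallon recordkeeping rule lends itself to a simple screen of the kind described. The sketch below is illustrative only; the sales data are hypothetical and this is not an actual ATF procedure.

```python
# Illustrative screen over retailer sales records: sales of 20 gallons or
# more to one buyer at one time must carry the buyer's name and address.
# All data here are hypothetical.

THRESHOLD_GALLONS = 20.0

sales = [
    {"buyer": "walk-in customer", "gallons": 1.5},
    {"buyer": "Riverside Banquet Hall", "gallons": 20.0},
    {"buyer": "(not recorded)", "gallons": 55.0},
]

# Sales at or over the threshold require a full record; a large sale with
# no recorded buyer is the kind of gap that can signal illegal wholesaling.
needs_full_record = [s for s in sales if s["gallons"] >= THRESHOLD_GALLONS]
for s in needs_full_record:
    print(s["buyer"], s["gallons"])
```

A screen like this flags only which entries require full records; deciding whether a flagged sale reflects illegal wholesaling or diversion would still require the follow-up investigation the report describes.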
The retailers’ records may show evidence of the domestic sale of alcohol intended for export; or the absence of such records may be grounds to prosecute retailers for receiving the alcohol intended for export. ATF has pursued both criminal and civil prosecutions of diversion cases using SOT provisions. According to ATF data, there were 23 criminal alcohol diversion cases involving SOT violations between October 1, 1996, and March 31, 1998. ATF has also pursued civil prosecutions in 86 cases of diversion involving 62 companies between December 1, 1992, and December 19, 1996. According to ATF officials, these civil cases, like the criminal cases, involve prosecutions under the SOTs. ATF officials believe that the authority for entering retail premises and requiring retailers to keep records provided by the SOTs has been necessary for its other law enforcement activities. ATF officials said that they were concerned that this authority may be jeopardized if the SOTs are eliminated, but they were uncertain about the effect that repeal of the SOTs would have on their enforcement capabilities. ATF was uncertain whether the access and recordkeeping authority may exist under other provisions of current law. ATF was not sure whether recordkeeping could be imposed under other provisions of the IRC if the retailer does not have a special occupational tax liability. If the SOTs were repealed, ATF said it could attempt to write regulations requiring specific records be kept by retailers. According to ATF officials, the courts could rule that the recordkeeping requirement is a valid exercise of the taxing power, or they could deny the authority because the activities that the Bureau wants recorded are not closely enough related to the excise tax collection process. 
If SOTs are eliminated and access and recordkeeping authority do not exist under other provisions of current law, ATF officials believe that the laws concerning regulation of the alcohol industry may have to be changed to give the Bureau the same enforcement powers. The Federal Alcohol Administration Act of 1935 (FAA) regulates fair trade practices, chiefly promotional activities of dealers that affect the sales of other dealers. The act also imposes licensing requirements for wholesalers and producers, but there is no authority in this act for imposing recordkeeping requirements. If the SOTs were repealed, the FAA could be expanded to impose recordkeeping requirements on retailers. ATF officials believe that the access and recordkeeping authority currently provided by the SOTs is essential for effective enforcement of alcohol laws and regulations. We have not evaluated ATF’s claim that the access and recordkeeping authority provided by the SOTs is necessary for ATF’s enforcement efforts. For example, we have not determined whether, or how seriously, repeal of the SOTs would harm ATF’s efforts to combat diversion. Although retailers would no longer have a SOT liability, ATF would still need to identify and prosecute retailers who participate in illegal sales. However, we have not determined how important access and recordkeeping authority is in the prosecution of such cases, and we are uncertain whether such authority would be lost if the SOTs were repealed. The SOTs have been criticized for being costly to administer relative to alcohol excise taxes, having low compliance rates, and being unfair. These criticisms have led some to propose changes in the SOTs that include (1) eliminating the tax on retailers to reduce administrative costs and (2) changing the structure of the tax from a fixed amount per business location to one that is based on business volume to make tax burdens fairer. Others have proposed that the SOTs be eliminated entirely. 
We concluded in two separate reports that the administrative costs of the SOTs were high relative to the costs of administering the alcohol and tobacco excise taxes and that compliance among retailers may have been low. Since these reports were issued, the costs of administering the SOTs have declined. Estimates of current compliance with the SOTs among retailers are uncertain. An evaluation of whether administrative costs are excessive would require that the SOTs be compared with specific alternatives in terms of compliance rates and administrative costs, as well as other factors such as the compliance burden of taxpayers. In our 1990 report, we concluded that SOT costs were high relative to the costs of administering the alcohol and tobacco excise taxes. In fiscal year 1989 (in 1997 dollars), ATF spent $13 million to collect $162.6 million of SOTs—a cost of 8 cents for every dollar collected. In the same year (also in 1997 dollars), ATF spent $64.9 million to collect $12.7 billion in alcohol and tobacco excise tax revenue—a cost of 0.5 cents per dollar collected. Thus, the cost per dollar collected was 16 times greater for the SOTs than for the excise taxes. We also found in our 1986 study of compliance with the SOTs in four states that only about 60 percent of the retailers had paid the SOTs. ATF stated that it believed that the compliance rate found in our study was probably representative of compliance nationwide in 1986. According to ATF data, the costs of administering the SOTs, and the amount of revenue collected, have declined since 1989. In fiscal year 1997, ATF spent an estimated $1.9 million to collect $107 million of SOTs—a decline in both costs and revenue from the $13 million spent to collect $162.6 million in 1989. The cost per dollar collected fell from 8 cents in 1989 to 1.8 cents in 1997. ATF also spent $55.1 million in 1997 to collect $12.6 billion in excise tax revenue—a cost of 0.4 cents per dollar collected. 
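These per-dollar figures follow directly from the reported totals, and the sketch below reproduces them; the 1997 ratio of roughly 4.5 to 1 comes from dividing the rounded cent figures.

```python
# Reproducing the report's cost-per-dollar-collected figures (1997 dollars).
def cents_per_dollar(cost, revenue):
    return 100.0 * cost / revenue

sot_1989 = cents_per_dollar(13.0e6, 162.6e6)   # ~8.0 cents per SOT dollar
exc_1989 = cents_per_dollar(64.9e6, 12.7e9)    # ~0.5 cents per excise dollar
sot_1997 = cents_per_dollar(1.9e6, 107.0e6)    # ~1.8 cents per SOT dollar
exc_1997 = cents_per_dollar(55.1e6, 12.6e9)    # ~0.4 cents per excise dollar

print(round(sot_1989 / exc_1989))               # 16 times as costly in 1989
print(round(sot_1997, 1) / round(exc_1997, 1))  # 1.8 / 0.4 = 4.5 times in 1997
```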
The relative cost of administering the SOTs dropped from 16 times as great as the cost of the excise taxes in 1989 to 4.5 times as great in 1997. SOT revenues have declined in inflation-adjusted terms even though the number of active business locations increased from 350,000 in 1987 to 426,370 in 1998. Part of the decline reflects the fact that the tax, as a fixed amount per business location, does not increase with price inflation. Also, according to ATF officials, SOT revenues in 1989 included large amounts of back taxes, penalties, and interest that ATF discovered were owed when ATF took over administration of the SOTs from IRS. SOT administrative costs declined because ATF has devoted fewer resources to administering the SOTs. According to an ATF official, administrative costs depend largely on (1) ATF priorities that determine how many field staff are allocated to SOT enforcement, (2) the number of contacts with taxpayers, and (3) the number of congressional inquiries. Currently, the SOTs are not a high-priority enforcement issue for ATF. Field staff have been instructed by ATF not to pursue SOT enforcement alone, but to check for SOT payment only as part of an investigation of alcohol dealers for other violations. While administrative costs have declined, the compliance rates, especially for retailers, are uncertain. As previously discussed, some estimates indicate that compliance among retailers may have increased from the 60 percent that we reported in 1986, but these estimates have limitations that make their reliability difficult to assess. An ATF official believes that compliance rates may have increased since 1986 because, when the Bureau took over enforcement of the SOTs from IRS in 1987, it devoted more resources to enforcement than IRS had and began matching state and federal records of alcohol dealers as part of its enforcement effort. 
Determining whether the administrative costs of the SOTs are excessive may be difficult because one would have to compare the costs and compliance rate for the SOTs with those of alternative revenue sources. It is important to compare both administrative costs and compliance rates because a tax may appear less costly to administer only because compliance rates (and enforcement costs) are lower. However, complete and reliable data on compliance and administrative costs for both the SOTs and alternative revenue sources may not be available. For example, ATF has estimates of the costs of collecting excise taxes but does not have estimates of compliance rates for the excise taxes. A complete evaluation of the SOTs relative to alternative ways of raising revenue would have to include other factors besides administrative costs and compliance rates. The evaluation would have to include an assessment of the compliance burden imposed on taxpayers by the SOTs relative to the compliance burden of alternatives that may be proposed, as well as the relative impact of the SOTs and alternatives on the efficiency and equity of the tax system. These factors affect the total cost to society of a tax in terms of the resources taxpayers use to comply with the tax, the loss of income and output that occurs when taxes interfere with economic decisionmaking, and the losses to taxpayers who perceive the tax as producing an unfair distribution of tax burdens. Opponents of the SOTs have criticized the taxes for being unfair. Because the taxes are a fixed amount per location, they may take more income from those with less ability to pay the tax, and, if compliance rates are low, compliant taxpayers would pay an unfair share of the tax burden. Other features of the SOTs, such as the requirement that businesses operating for only part of the year pay the full yearly rate, have also been criticized for imposing unfair tax burdens. 
The fairness of the SOTs depends on factors such as who actually bears the burden of the tax (i.e., how much of the tax is shifted from alcohol dealers to others in the economy in the form of higher prices); the income of those who pay the tax; and the rate of compliance with the SOTs. The SOTs have been criticized as inequitable because the fixed amount of tax per business location does not vary according to the taxpayers’ ability to pay. There is no universally accepted measure of tax fairness. However, one commonly used criterion of fairness is that a tax burden should increase, at least proportionately, with the incomes of taxpayers. When this criterion is violated and the tax burden, as a percentage of income, is higher for low-income taxpayers, then the tax is considered to be “regressive.” Whether the SOTs are regressive depends on the incidence of the taxes, i.e., who actually bears the burden of the taxes, and the amount of the SOT paid by these individuals relative to their total income. The incidence of the SOTs depends on how much of the tax is shifted from the dealers to others in the economy through price changes, such as price increases to consumers of alcohol. The incidence of the SOTs is uncertain, but it is likely that, in the long run, at least a part of the SOTs is passed forward in higher prices to consumers. Determining whether the taxes are regressive requires measuring the share of the tax paid by the dealers and consumers relative to their total income. Thus, in order to determine regressivity, data are required on factors such as the total income of dealers and consumers, the effect of the SOTs on alcohol prices, and consumers’ expenditures on alcohol. However, whether the taxes are shifted to consumers or paid by the dealers, the size of the SOTs means that their effect on prices and incomes is likely to be very small. The fairness of the SOTs may also be viewed from the broader perspective of the entire tax system. 
In this context, whether the SOTs are regressive depends on the income of those paying the tax relative to those not paying the tax. The taxes may be judged regressive if they are paid by individuals who tend to have lower incomes. The SOTs also have been criticized as inequitable because of allegedly low rates of compliance. Another commonly used criterion of fairness is that a tax should provide equal treatment of individuals with equal ability to pay. Noncompliance can create inequity because people with equal ability to pay and equal tax liability end up paying different amounts. Some critics of SOTs believe that the nonpayment of tax by some alcohol dealers puts the compliant dealers at a competitive disadvantage. Some industry representatives believe that compliance among retailers may be low relative to producers and wholesalers because retailers are unaware of their tax liability and because ATF has difficulty identifying retailers who have not paid the SOTs. Critics claim that other features of the SOTs, besides the fixed amount per location and low compliance, impose unfair tax burdens. Establishments open on a seasonal basis, such as marinas and campgrounds, have shortened sales periods but still pay the annual tax. As previously described, a standard for judging the fairness of a tax requires that the tax be related to the taxpayer’s ability to pay. The SOT requirement that seasonal establishments pay the annual rate may make the taxes more regressive if the shortened sales period results in less income and the seasonal dealers bear the same tax burden as year-long dealers. Critics also claim that the SOTs are unfair because retailers who are unaware that they owe the tax can face substantial, accumulated tax, penalties, and interest if they have been in operation for several years without filing returns and paying the taxes. 
Generally, the IRC limits the period during which the SOTs and other taxes can be assessed to 3 years from the date the tax return was filed. The purpose of this provision is to limit the taxpayers’ compliance costs of keeping and maintaining records. However, the IRC contains exceptions to this limitation that permit assessments at any time if, for example, the taxpayer fails to file a return or files a false return with the intent to evade taxes. This statute of limitations with its exceptions applies to taxpayers who owe income taxes and other taxes as well as those who owe the SOTs. An evaluation of the fairness and effectiveness of these general provisions, and any need to modify the provisions, is beyond the scope of this report. We discussed a draft of this report on July 9, 1998, with the Director, Office of Tax Analysis, and other officials from the Office of the Assistant Secretary of the Treasury for Tax Policy and with the Deputy Assistant Director, Alcohol and Tobacco, and other ATF officials. In addition, ATF provided written comments. ATF’s comments clarified its enforcement practices, its definition of compliance, and its methods for measuring compliance. ATF officials also provided additional data on the numbers of alcohol business entities that filed timely returns and timely paid the taxes due. ATF and Treasury officials made other comments to improve the clarity of our presentation. We have incorporated the comments from ATF and Treasury officials into this report where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. 
At that time, we will send copies to the Chairmen and Ranking Minority Members of the House Committee on Ways and Means and the Senate Committee on Finance; various other congressional committees; the Secretary of the Treasury; the Director of the Bureau of Alcohol, Tobacco, and Firearms; and other interested parties. We will also make copies available to others upon request. Major contributors to this report are listed in appendix III. Please contact me on (202) 512-9110 if you have any questions. In May of each year, the Bureau of Alcohol, Tobacco, and Firearms (ATF) sends a notification package to each alcohol business known to ATF. This begins the annual process for businesses to renew the registration of their alcohol operating locations with ATF and to obtain the Special Tax Stamps. The SOT notification package contains the special tax renewal registration and return, a notification letter, and a preaddressed return envelope. ATF preprints on the SOT return the business’ name, identification number, registered address, operating locations, and taxes due. ATF instructs the taxpayer to verify the preprinted information on the return, correct any errors, sign and date the taxpayer certification at the bottom of the return, and submit the payment. The taxpayer can submit the SOT return with the appropriate payment or report that the alcohol business is no longer in operation. After the taxpayer files the SOT return and pays the taxes, ATF issues a Special Tax Stamp, ATF Form 5630.6A, as evidence of tax payment for each location. The special stamp is nontransferable and is printed with the principal business address and the physical address of the operating business location for which the stamp was issued. Alcohol businesses are required to keep these location-specific stamps available for inspection by ATF. ATF uses unique business location numbers to account for all known operating and out-of-business locations for each principal business. 
If the taxpayer fails to register the alcohol business with ATF and pay the taxes due or to report that the business is no longer in operation, the Bureau considers the taxpayer to be noncompliant and sends a follow-up inquiry letter to the taxpayer. This letter informs the taxpayer of the new total amount due, which includes the occupational taxes, failure-to-file penalty, failure-to-pay penalty, and interest. The taxpayer is advised to pay the new total due within 10 days of the letter to avoid additional penalties and interest. The letter contains an explanation of the taxpayer’s appeal rights and a telephone number the taxpayer may call for assistance. The taxpayer is advised that failure to respond to the letter could result in assessment proceedings against the taxpayer. ATF advises the taxpayers in the renewal notification process that they may incur (1) failure-to-file and failure-to-pay penalties and (2) interest if they are liable for the SOT and do not pay or file in a timely fashion. The failure-to-file penalty is 5 percent of the tax liability for the first month late, plus an additional 5 percent for each additional month or part of the month. The maximum failure-to-file penalty is 25 percent of the taxes due. The failure-to-pay penalty is 0.5 percent of the taxes due for the first month late and 0.5 percent for each additional month or part of a month. The total failure-to-pay penalty also cannot exceed 25 percent of the tax due. If both the failure-to-file and failure-to-pay penalties are assessed, the total amount of these combined penalties cannot exceed 25 percent of the tax due. However, if failure to file a return is due to fraud, the penalty is 15 percent, not to exceed 75 percent. Unlike the penalty amounts, which are limited, there is no limit on the interest charges taxpayers may incur for unpaid SOTs. Interest amounts are computed beginning with the first day of delinquency, using compound interest rates. 
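The penalty arithmetic described above can be sketched as follows. This is a simplified illustration of the non-fraud rules as stated in this report, not of the full IRC computation; the function name is ours, and interest (which compounds without limit) is not modeled:

```python
def sot_penalties(tax_due, months_late):
    """Illustrative sketch of the non-fraud SOT penalty rules described above.

    Simplified: each penalty is a flat percentage for each month or part
    of a month late, subject to the caps stated in the report.
    """
    # Failure-to-file: 5 percent of the tax due per month, capped at 25 percent.
    failure_to_file = min(0.05 * months_late, 0.25) * tax_due
    # Failure-to-pay: 0.5 percent per month, also capped at 25 percent.
    failure_to_pay = min(0.005 * months_late, 0.25) * tax_due
    # When both penalties are assessed, their combined total cannot
    # exceed 25 percent of the tax due.
    combined = min(failure_to_file + failure_to_pay, 0.25 * tax_due)
    return failure_to_file, failure_to_pay, combined
```

For a $1,000 liability that is 3 months late, this yields $150 in failure-to-file penalty, $15 in failure-to-pay penalty, and $165 combined; at 12 months, the combined total reaches the $250 cap.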
Because of the exceptions to the statute of limitations on the assessment and collection of the SOTs, in some cases, the total amount of interest due can be substantial for a taxpayer who has not filed a return for several years. SOT revenue data show that ATF has assessed and collected failure-to-file penalties, failure-to-pay penalties, and interest for the occupational taxes. ATF can accept offers-in-compromise or installment agreements or waive penalties if the taxpayer can show that the failure to file or failure to pay is due to reasonable cause and not willful neglect or gross negligence. If a taxpayer exercised ordinary business care and prudence and still was unable to file within the required time, the failure would be due to reasonable cause. ATF may consider that a failure to pay was due to reasonable cause if the taxpayer demonstrated ordinary business care and prudence in providing funds for payment of the tax liability but still was unable to pay or would endure undue hardship if the tax were paid on the due date. ATF does not consider ignorance of the law a reasonable cause.
James Wozny, Assistant Director, Tax Policy and Administration Issues
Helen D. Branch, Evaluator-in-Charge
Kevin Daly, Senior Economist
Pursuant to a congressional request, GAO studied the alcohol special occupational taxes (SOT), focusing on: (1) the methods that the Bureau of Alcohol, Tobacco, and Firearms (ATF) uses to enforce compliance with the taxes and the costs incurred in these efforts; (2) compliance rates for alcohol producers, wholesalers, and retailers; and (3) arguments that have been made for and against these occupational taxes. GAO noted that: (1) ATF uses a variety of methods to enforce compliance with the alcohol SOTs; (2) among other information preprinted on the special tax renewal registration and return, ATF lists each known operating location and the total amount of taxes due; (3) ATF also informs the public about alcohol occupational tax requirements using a variety of media; (4) all but five states routinely provide retailer licensing information that ATF can compare with federal records to identify retailers who may not be in compliance; (5) ATF has assessed civil and criminal penalties, as well as interest, to enforce compliance with the SOT provisions; (6) ATF estimated that it cost a total of $1.9 million to administer the SOT programs for alcohol, tobacco, and firearm businesses in fiscal year 1997; (7) ATF and the audit staff at the Department of the Treasury's Office of the Inspector General (IG) have estimated rates of taxpayer compliance with the alcohol SOTs; (8) however, the two offices used different data, methods, and definitions of compliance to make their estimates; (9) ATF estimated that, as of April 3, 1998, 93 percent of the producers and 95 percent of the wholesalers with federal permits and 89 percent of the retailers known to ATF were compliant for tax year 1998; (10) IG estimated the average compliance rate for retailers over tax years 1993, 1994, and 1995 to be 83 percent; (11) supporters of the alcohol SOTs have justified the taxes both as a general source of revenue and as providing revenues to offset the costs to the government of regulating the 
industry; (12) however, the SOTs are not likely to accurately reflect the current costs of regulation because the tax rates have rarely been changed; (13) the SOTs give ATF the authority to enter the premises of alcohol dealers and require that retailers keep certain records; (14) ATF believes that the access and recordkeeping authority provided by the SOTs is necessary for its efforts to control the alcohol distribution system, prevent illegal sales of alcohol, and enforce other federal taxes on alcohol; (15) the SOTs have been criticized in the past because of relatively high administrative costs and low compliance rates among retailers; (16) opponents of the SOTs have criticized the taxes for being unfair; and (17) because the SOTs are a fixed amount per location, the SOTs may take more income from those with less ability to pay the tax, and, if compliance is low, compliant taxpayers may bear an unfair share of the tax burden.
Treasury has issued savings bonds since 1935. Savings bonds offer investors the ability to purchase securities with lower minimum denominations than those for marketable Treasury securities. When individuals purchase savings bonds, they loan the amount they paid for the bonds to the U.S. government. Over a period of time (up to 30 years), the savings bonds earn interest and, after 12 months of their original purchase, can be cashed in for their purchase price, plus the interest they have earned, subject to a 3-month interest penalty during the first 5 years. Over the years, Treasury has offered a number of savings bonds with different terms and interest rates. Currently, Treasury offers Series EE bonds, which have a fixed interest rate, and Series I bonds, which pay an interest rate that is tied to inflation. Savings bonds do not represent a major source of funds for the Treasury. The Bureau of the Fiscal Service, one of Treasury’s 10 bureaus, helps to fund the federal government by selling Treasury securities, including savings bonds. Treasury Securities Services within the bureau operates Treasury’s Retail Securities program, which allows retail investors to purchase savings bonds and marketable securities in electronic form directly from Treasury. The office’s flagship system is TreasuryDirect, an online proprietary system created in 2002 that allows customers to buy and hold savings bonds and marketable securities, and to manage their accounts without assistance from a customer service representative. TreasuryDirect customers can purchase securities at any time, direct electronic payments to bank accounts, and convert paper savings bonds to electronic savings bonds in the same series and with the same issue date. TreasuryDirect customers also can set up payroll deductions and automatically recurring purchases.
As of March 2015, TreasuryDirect had around 580,700 accounts that were funded and held nearly $27 billion. The elimination of paper savings bonds reduced program costs but made purchasing bonds more difficult for some savers. However, our analysis of Treasury’s bond data showed that the drop in bond purchases after the elimination of paper savings bonds was not statistically significant. As shown in figure 1, annual purchases of U.S. savings bonds declined significantly from 2001 through 2013, falling from around $14.6 billion to less than $1 billion, or by more than 90 percent. Savings bond purchases declined every year, except from 2002 to 2003. Likewise, the role of savings bonds in helping to fund the federal debt also declined over the period, accounting for about 3.2 percent of the federal debt in 2001 and about 1.0 percent in 2013. Following the long-term decline in savings bond purchases, Treasury stopped selling paper savings bonds through over-the-counter channels, including through financial institutions and mail-in orders, on January 1, 2012, as part of its agency-wide electronic initiative to reduce program costs and improve customer service. According to Treasury officials, the agency phased out the issuance of paper savings bonds through employer-sponsored payroll savings plans in 2010, and the ending of savings bond sales through over-the-counter channels was the last step of discontinuing paper savings bonds. Treasury estimated that the elimination of over-the-counter sales of paper savings bonds would save nearly $70 million in program costs from 2012 through 2016. Treasury calculated these savings by estimating how much it would save in costs associated with issuing new paper bonds and servicing and redeeming existing paper bonds, which include fees paid to banks, postage, and printing.
For example, Treasury estimated that the change would eliminate around $14.5 million in fees paid to financial institutions for issuing and redeeming savings bonds and around $12.7 million in postage expenses for mailing paper bonds to customers over the 5-year period. Additionally, Treasury estimated that it would save in personnel costs because fewer employees would be needed to process customer service transactions. According to Treasury’s estimates, the change would save around $4.9 million in compensation and benefit costs for Treasury staff and $28.5 million in Federal Reserve Bank personnel costs over the 5-year period. Finally, Treasury estimated $9 million in savings from reductions in paper stock, overhead, forms, and other costs. In addition to the cost savings, Treasury expected the change to provide customer benefits, such as increased security and convenience. Although paper bonds allowed buyers to purchase savings bonds at financial institutions, Treasury’s online system for purchasing savings bonds and other Treasury securities—TreasuryDirect—allows customers to buy, manage, and redeem savings bonds electronically at any time. Treasury officials told us that electronic bonds are safer and more secure, because paper bonds could be lost, stolen, altered, or fraudulently redeemed. Treasury officials also added that electronic bonds provide the agency with both operational advantages and enhanced customer experience, since Treasury can automatically track bond purchases, redemptions, and values for the customer. When Treasury eliminated paper savings bonds, it created access challenges for bond buyers who do not have a bank account and Internet access. Customers now must use TreasuryDirect to purchase electronic savings bonds, although some can purchase paper savings bonds through the Tax Time program, which we discuss later in this report. To open a TreasuryDirect account, a customer generally must have both Internet access and a bank account. 
While TreasuryDirect can be accessed through cellular phones and other mobile devices, the website is not optimized for such use. According to representatives from a nonprofit organization that focuses on savings for lower-income households, mobile access is the primary means of Internet access for some lower-income consumers. According to 2011 Census Bureau data, around 50 percent of households with less than $25,000 in income did not have computer-based Internet access from some location. Further, according to the 2013 Federal Deposit Insurance Corporation’s (FDIC) National Survey of Unbanked and Underbanked Households, 7.7 percent of U.S. households, or nearly 9.6 million households, were unbanked—that is, they did not have a bank account at an insured institution. As a result, such households or individuals may not be able to access TreasuryDirect or complete a transaction if they wanted to buy savings bonds. Treasury officials recognized the access challenges related to TreasuryDirect that some potential users might face, but told us such challenges could be mitigated. Treasury officials said that they worked with organizations that provided Internet access to the public, such as libraries and community centers, and determined that such organizations provide the level of Internet access required for potential TreasuryDirect users. The officials also told us that in lieu of a bank account, individuals could use reloadable debit cards to purchase and redeem savings bonds through TreasuryDirect. While the use of such cards provides an avenue for those without a traditional bank account to purchase savings bonds, Treasury estimated that few savings bonds, approximately 1,426, had been purchased using prepaid debit cards from mid-April 2005 through mid-November 2014. Further, Treasury officials told us that unbanked individuals could use the Tax Time program to purchase paper savings bonds.
Our analysis of IRS data on the Tax Time program indicates that around 91 percent of tax filers who used part of their tax refund to purchase paper savings bonds had part of their refund directly deposited into a bank account. Similarly, based on data from SCF surveys from 2001 through 2010, over 90 percent of households who owned savings bonds have bank accounts. Additionally, according to FDIC’s survey, more than 90 percent of all households the agency surveyed had a bank account. According to Treasury officials and representatives from several nonprofit organizations that we interviewed, TreasuryDirect also poses some usability challenges. For example, Treasury officials and nonprofit representatives told us that giving savings bonds as a gift through TreasuryDirect can be a cumbersome process. They explained that TreasuryDirect requires the individual buying the savings bond to have the Social Security number and TreasuryDirect account number of the recipient of the gift bond, information the individual may not know. The gifting process also requires the recipients or their parents or guardians to set up a TreasuryDirect account, if they do not have one. Treasury officials told us that issues associated with the process of buying bonds as gifts were the source of the most common complaints from customers about savings bond transactions through TreasuryDirect. In addition, representatives from nonprofit organizations and an academic we interviewed told us that TreasuryDirect generally was not a user-friendly system, even for individuals who were comfortable using the Internet for their financial transactions. They told us that navigating the system was not easy and could pose challenges to potential customers who were not familiar with online financial transactions. 
Similarly, Treasury officials told us that customers anecdotally had expressed concerns about difficult navigation, lengthy application pages, organization of information, security features, complicated linked accounts processes, and difficulty locating tax reporting information. When Treasury eliminated paper savings bonds in January 2012, there were nearly 379,000 total funded TreasuryDirect accounts. By March 2015, there were around 580,000 total funded TreasuryDirect accounts, but the extent to which the increase resulted from savings bond investors has not been determined. Our analyses of Treasury savings bond data indicated that the decline in savings bond purchases after Treasury discontinued the sale of paper savings bonds in January 2012 was consistent with the overall long-term decline in savings bond purchases. In addition, the decline since January 2012 generally was not statistically significant based on models we estimated. While there was a large decline in purchases in 2012 and 2013 when sales of paper savings bonds were discontinued, there are a number of factors that could account for this decline. For example, savings bond purchases declined in 9 out of 10 years from 2002 to 2011, and some declines were quite large, hence recent declines in purchases may be reflective of long-term trends. In addition, we found that savings bond purchases have been sensitive to interest rate changes, with savers typically purchasing more when interest rates are higher and purchasing less when they are lower. The low interest rates in recent years may account for some of the decline in savings bond purchases. Although lower-income households that do not have bank accounts or Internet access could face challenges accessing or using TreasuryDirect, this challenge may only affect a small percentage of such households. Our analyses indicate that a small percentage of such households buy savings bonds in general, even when they were available in paper form.
According to data from the 2013 SCF survey, 4.6 percent of lower-income households held savings bonds in 2013, and this percentage had declined from 7.7 percent in 2001. In a July 2014 Federal Register release, and in support of its strategy to reach new customers, develop new product delivery streams, and increase the number of available product offerings, Treasury released its plans to introduce the Treasury Retail Investment Manager (TRIM), which will replace TreasuryDirect. According to Treasury officials, TRIM will be more flexible and responsive to changing business and digital investing needs. Treasury officials told us that they plan to offer mobile phone access through TRIM, which could improve access for households that do not have computer-based Internet access at home. Treasury officials also told us that TRIM would attempt to address a number of TreasuryDirect’s usability challenges. For example, Treasury officials told us that the TRIM system should be more user friendly for customers, because it will have an online interface that is similar to the online interfaces that banks and stock brokers offer and with which most customers are likely familiar. The system also is expected to streamline various steps for customers navigating the system—for example when they open or sign into accounts—to improve usability and potentially save Treasury money by reducing calls to customer service. According to Treasury officials, they also are exploring ways for TRIM to simplify the process for buying savings bonds as gifts and to allow for multiple funding options. One option under consideration is for a customer to buy a savings bond gift certificate that can be given to another individual, who can go online to open a TRIM account and use the certificate to buy the savings bond directly. Treasury also is exploring multiple funding options for customer accounts to provide options to savers who do not have bank accounts.
As of May 2015, TRIM was under development, and Treasury officials told us that its release date had not been set. According to Treasury officials, TRIM is being developed in four phases—initiation, planning, execution, and closing. Treasury officials told us that TRIM was in the planning phase and that the system’s design was being developed. Specifically, Treasury officials are working on defining technical requirements for the system. Before TRIM can be implemented, Treasury will need to complete the execution and closing phases, which include technical design, system coding, various testing, consumer education, and system documentation. Treasury officials told us that they did not have a specific release date for TRIM, which will depend on the time needed to complete the next steps in the project plan. According to a Treasury estimate issued in 2013, TRIM was expected to cost around $18 million to develop and implement. Treasury officials told us that, as of May 2015, they did not have any changes to this estimate and that the costs they had incurred thus far had been consistent with the estimate. They also told us that Treasury had tentative plans to develop an implementation plan for TRIM by April 2016. Since 2010, U.S. tax filers have used the Tax Time program to save by using their tax refund to purchase paper savings bonds. For example, about 55,000 tax filers with adjusted gross incomes of $25,000 or less participated in the program for tax years 2010 through 2013 and bought about $13.7 million in savings bonds. Treasury has been extending the program annually in consideration of some of the program’s benefits, but not in consideration of the program’s costs. Since 2010, U.S. tax filers have been able to use their tax refund to purchase paper savings bonds through the Tax Time Savings Bond program. In 2009, President Obama proposed a package of initiatives to spur increased savings that included a provision for purchasing savings bonds with tax returns. 
Under the Tax Time program, tax filers receiving a tax refund may use an IRS form to allocate their refund among several options, such as purchasing paper savings bonds or depositing their refund directly into their bank account. As shown in table 1, in tax years 2010 through 2013 about 142,000 total tax filers used the Tax Time program to buy a total of about $72.5 million in paper savings bonds. (According to data provided by Treasury, of the 142,000 total tax filers that used the Tax Time program, about 20 percent were repeat participants in the program.) These filers purchased, on average, approximately $500 in paper savings bonds each year. Table 1 also shows that about 55,000 tax filers with an adjusted gross income of $25,000 or less collectively bought about $13.7 million in paper savings bonds. These filers purchased, on average, approximately $250 in paper savings bonds each year. At the same time, the number of tax filers participating in the Tax Time program and the amount of savings bonds purchased under the program were relatively small. The total number of tax filers receiving a refund for tax years 2010 through 2013 was more than 100 million in each year, and Tax Time participants made up less than 1 percent of this group. Similarly, the amount of savings bonds purchased through the program from 2010 through 2013 accounted for about 1 percent of the total amount of all savings bonds purchased during those years. About 30 percent of Tax Time program participants also were tax filers who received the Earned Income Tax Credit. Enacted by Congress in 1975, the Earned Income Tax Credit is one of the largest antipoverty programs. Generally, income and family size determine a taxpayer’s eligibility, and the credit is a refundable tax credit for low-to-moderate income working individuals and couples—particularly those with children.
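The per-filer averages cited above can be checked directly from the report's totals. This is a quick arithmetic sketch; the variable names are ours:

```python
# Tax Time program totals for tax years 2010 through 2013, from the report.
total_bond_purchases = 72.5e6    # dollars, all participants
total_filers = 142_000
low_income_purchases = 13.7e6    # dollars, filers with AGI of $25,000 or less
low_income_filers = 55_000

# Average paper savings bond purchase per participating filer.
avg_per_filer = total_bond_purchases / total_filers
avg_per_low_income_filer = low_income_purchases / low_income_filers
```

These work out to roughly $510 and $249 per participating filer, consistent with the approximately $500 and $250 figures cited above.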
As shown in table 2, about 30 percent of tax filers participating in the program from 2010 through 2013 received the Earned Income Tax Credit. According to representatives from three nonprofit organizations and two academics we interviewed, tax season provides an opportunity for tax filers receiving a refund to set aside an amount of money specifically for savings. They told us that tax season was often the one time during the year that tax filers—particularly those with low incomes—had a relatively large lump sum of money available to save. However, in some instances, tax filers receiving a refund may already know what they plan to use their refunds for, and that may not include any savings. Treasury has been extending the Tax Time program on an annual basis and plans to continue extending it in the short term. According to Treasury officials, the program was scheduled to expire after the 2015 tax season, in which case tax filers would no longer have had the option to use the IRS form to purchase paper savings bonds. However, Treasury officials told us that the agency decided in December 2014 to extend the program through the 2016 tax season. The decision was made by the Fiscal Assistant Secretary of the Treasury based on an internal recommendation from the Commissioner of the Bureau of the Fiscal Service, which oversees the savings bond program. Treasury officials said that they intended to continue recommending the continuation of paper tax-time bonds until a suitable electronic alternative is implemented. However, Treasury officials did not provide us with any additional information on how an electronic alternative would replace the option of purchasing paper savings bonds. For participants who do not have Internet access or want to buy bonds electronically, it is not clear what a suitable electronic alternative would be. 
Although Treasury has been extending the Tax Time program on an annual basis, it has not assessed the program's costs along with its benefits. In deciding to extend the program in the last 2 years, Treasury officials told us that they considered participation levels and the amount of savings bonds purchased through the program. Such data indicate some of the program's benefits, namely its ability to promote savings by lower-income and other households. While the amount of bonds purchased and program participation levels can be quantified, other benefits of the program, such as providing a savings opportunity for lower-income households that may not be able to access TreasuryDirect to purchase savings bonds online, are more difficult to quantify. Although Treasury officials considered some of the Tax Time program's benefits in deciding to extend it, they generally did not consider the program's costs in their decision-making process. According to Treasury and IRS officials, Treasury has not conducted an analysis of the current costs of the program or determined how much Treasury would save if the program were allowed to expire after the 2016 tax season. IRS officials told us that IRS's current costs to administer the program were minimal, because IRS largely processes the forms electronically. Treasury officials told us that its current cost of printing and mailing a paper savings bond was approximately 17 cents, but this estimate did not include the share of the overhead, system, and other costs attributable to paper savings bonds. Moreover, the 17-cent estimate also did not include any cost that IRS incurred for its role in implementing the program. In prior work on agency stewardship of public funds, we reported that properly estimating program costs is necessary for several reasons and that comparing these costs to the program's benefits to evaluate alternatives related to program decisions is a best practice.
Producing cost estimates is important for evaluating resources and making decisions about programs at key decision points. Credible cost estimates also help support funding decisions for an agency's programs. Comparing these costs to the benefits in order to consider all alternatives for a program ensures linkage among the alternatives. In deciding to extend the Tax Time program, Treasury has considered some of the program's benefits but generally not the program's costs, both of which are needed to evaluate program performance and alternatives. As discussed, Treasury has previously considered levels of program participation and amounts of savings bonds purchased by participants in its decisions, and most recently has extended the program until a suitable electronic alternative is available. Considering not only the Tax Time program's benefits but also its costs would provide Treasury with important information for evaluating both the resource requirements of the program and its performance when deciding whether to allow it to expire. For example, if the program's operating costs are minimal, then the program's benefits, such as providing opportunities for lower-income households to save, may outweigh its costs. Conversely, if program costs are significant, those costs might outweigh the program's benefits in light of the number of tax filers using the program and the availability of an electronic alternative. However, without full, reliable estimates of the cost of the Tax Time program to compare to the benefits, Treasury's ability to make a fully informed decision is limited. GAO found that lower-income households save relatively small amounts and face a number of savings challenges that result, in part, from limited access to financial institutions and products.
According to several academics and nonprofits we interviewed, savings and other asset-building programs are fundamental building blocks for helping lower-income households achieve economic mobility and security. Savings provide a buffer against unexpected events and a means to move up the economic ladder through investments, such as by buying a home, paying for college, starting a business, or saving for retirement. In addition to the Tax Time program, discussed above, federal, state, and local agencies as well as nonprofits have developed a number of programs aimed at assisting lower-income households to save and build assets. These programs include providing financial literacy and education services, and range from promoting short-term financial goals, such as emergency savings, to long-term financial goals, such as saving for retirement. According to 2013 SCF data, lower-income households have limited savings in bank accounts and other financial assets. Households in the lowest income quintile (or bottom fifth) had a median income of around $14,200 in 2013, and households in the next income quintile had a median income of around $28,400. As shown in table 3, 82 percent and 93 percent of the U.S. households in the bottom two income quintiles had financial assets, but the median values of these financial assets were $550 and $3,064, respectively. In other words, half of the households in the lowest income quintile held $550 or less in financial assets. In comparison, the median value of financial assets for all surveyed households in 2013 was $17,580. Bank accounts are the most widely held financial asset among lower-income households, according to 2013 SCF data. However, separate from bank accounts, a significant majority of lower-income households hold few or no other financial assets, such as stocks, bonds, or mutual funds. For example, 9 percent of U.S.
households in the bottom income quintile have retirement accounts, compared with around 28 percent of households in the next lowest income quintile. (Financial assets in SCF include bank accounts, certificates of deposit, savings bonds, bonds, stocks, mutual funds, retirement accounts, and cash value life insurance.) As shown in figure 3, median household financial assets, excluding retirement accounts, dropped in the wake of the 2001 and 2008 recessions and have not recovered to pre-recession levels. Median holdings in 2013 were down by 40 percent or more in comparison to median holdings in 2001, both for the population as a whole and for lower-income households. The median value of financial assets, excluding retirement accounts, for the two lowest income quintiles was $1,000 in 2013. This total reflects the relatively low level of short-term savings for these households. Since at least 2003, the federal government has played a broad role in promoting financial literacy, which encompasses financial education, the process by which individuals improve their knowledge and understanding of financial products, services, and concepts. Financial literacy plays an important role in helping to promote the financial health and stability of individuals and families. In prior work on financial literacy, we reported that federal agencies have made progress in recent years in coordinating their financial literacy activities and collaborating with nonfederal entities, in large part due to the efforts of the federal multiagency Financial Literacy and Education Commission (FLEC). In addition to their financial literacy efforts, some federal agencies have developed savings programs involving financial assets. These programs are aimed at helping households and individuals that may not have access to traditional savings vehicles, such as employer-sponsored retirement plans.
According to a Treasury official, Treasury launched the myRA program, which is in a soft-launch phase, to promote retirement savings among individuals without access to employer-sponsored retirement plans. According to Treasury, the program offers a retirement savings account that is a Roth IRA, so it follows the same rules that apply generally to Roth IRAs and receives the same tax treatment. A myRA has no fees, no minimum-amount requirement, and a maximum balance of $15,000, and it can be funded through payroll direct deposit. The account houses a savings bond that will never go down in value (except from withdrawals), and the security in the account, like other Treasury securities, is backed by the U.S. Treasury. Participating employers make myRA information available to their employees. Employees are able to enroll in the program and then elect to have a portion of each paycheck directly deposited into their myRA automatically. Treasury officials stated that they worked to develop the framework for this program in 2014, including issuing a new Treasury security to serve as the investment option for these accounts and designing easy-to-understand materials for savers. Treasury continued to build on the development process by making myRA available to a small group of employers, including federal agencies. Presently, Treasury is working closely with this small group of participants to get feedback and better ensure that the user experience is as simple and straightforward as possible, both for employers and employees, before myRA becomes more broadly available later this year. Treasury has indicated that it is too early to begin evaluating the impact of the myRA program. However, Treasury officials told us that they will continue to monitor the progress of the program as it moves through its soft-launch phase.
Given the challenges low- and moderate-income households face in obtaining financial or banking services, FDIC has created a number of initiatives to help low- and moderate-income individuals improve their financial skills and use financial institutions, according to FDIC officials. For example, FDIC officials stated that, in 2001, FDIC developed the Money Smart program, which is a comprehensive financial education curriculum designed to help consumers, especially low- and moderate-income consumers and entrepreneurs, enhance their financial skills and help create positive banking relationships. Officials added that FDIC provides the curriculum free of charge in formats for consumers to complete on their own or through instructor-led classes. According to FDIC, the program has reached over 2.75 million consumers since 2001. In April 2007, FDIC used a three-part survey to determine the effectiveness of its Money Smart financial education curriculum and found that the program positively influenced how course participants managed their finances and their financial confidence. The study also found that these positive changes were sustained months after participants had completed Money Smart training. Specifically, the study found that participants were more likely to open deposit accounts, save money in a mainstream deposit product, use and adhere to a budget, and demonstrate increased confidence in their financial abilities when they were contacted 6 to 12 months after completing the Money Smart course compared to before beginning the course. To further promote low- and moderate-income consumers' access to financial services, FDIC developed the Model Safe Accounts Pilot in January 2011. The pilot was designed to evaluate the feasibility of having financial institutions offer safe, low-cost transaction and savings accounts (Safe Accounts) that are responsive to the needs of underserved consumers, including those with low and moderate incomes.
Nine financial institutions participated in the pilot by offering Safe Accounts, which are checkless, card-based electronic accounts that limit acquisition and maintenance costs. These accounts allow withdrawals only through automated teller machines, point-of-sale terminals, automated clearinghouse pre-authorizations, and other automated means. Overdraft and nonsufficient funds fees are prohibited with the transaction accounts. According to FDIC, the nine banks opened more than 3,500 Safe Accounts during the pilot. Retention of these accounts exceeded expectations—more than 80 percent of transaction accounts and 95 percent of savings accounts remained open at the end of the 1-year pilot period. According to FDIC, Safe Accounts performed on par with or better than other transaction and savings accounts and several of the banks plan to continue to offer Safe Accounts—some banks are also considering the possibility of graduating pilot accountholders to traditional deposit accounts. Although the Safe Accounts program was only a 1-year pilot, FDIC officials told us that the agency provides interested FDIC insured institutions with a Safe Accounts template that includes guidelines for offering cost-effective transactional and savings accounts to underserved consumers. This template was based, in part, on lessons learned during the pilot phase. FDIC announced its Youth Savings Pilot Program on August 4, 2014. According to FDIC, this pilot program seeks to identify and highlight promising approaches to offering financial education tied to the opening of safe, low-cost savings accounts for school-aged children. The pilot has two phases. According to FDIC officials, Phase I includes FDIC insured institutions currently working with schools or nonprofit organizations that help students open savings accounts in conjunction with financial education programs during the 2014 to 2015 and 2015 to 2016 school years. 
Nine banks differing in size, location, and business models were selected for the first phase. The officials added that Phase II will include FDIC insured institutions beginning or expanding youth savings account programs during the 2015 to 2016 school year. FDIC is collecting summary information, including data on the number of accounts opened and financial education approaches used, from pilot participants. When the pilot is complete, FDIC intends to publish a report to provide financial institutions with promising approaches to working with schools and other organizations to combine financial education with access to a savings account. The Office of Community Services at the Department of Health and Human Services' Administration for Children and Families administers the Assets for Independence program. Started in 1998, the Assets for Independence program awards grants to community-based entities, nonprofits, and state, local, and tribal government agencies that partner with nonprofits to implement an asset-based approach for assisting low-income families to become economically self-sufficient, according to the Administration for Children and Families. According to agency officials, entities receiving these grants enroll participants in Assets for Independence projects to save earned income in special-purpose, matched savings accounts, also called individual development accounts. According to agency officials, every dollar that a participant deposits into an Assets for Independence individual development account is matched by the Assets for Independence project. Match rates can vary from $1 in match funds for every $1 the participant deposits in his or her individual development account, to as much as $8 in match funds for every $1 saved. Participants generally must use their individual development accounts and matching funds for a qualified expense: the purchase of a home; the capitalization or expansion of a business; or post-secondary educational expenses.
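The effect of these match rates on a participant's total available funds can be illustrated with a short calculation. The sketch below is purely illustrative (the savings amounts are hypothetical examples, not program data); only the $1-to-$8 range of match rates comes from the program description above.

```python
# Illustrative only: total funds available in an Assets for Independence
# individual development account, given a match rate between $1 and $8
# in match funds per $1 the participant deposits.

def matched_total(participant_savings, match_rate):
    """Participant deposits plus project match funds."""
    if not 1 <= match_rate <= 8:
        raise ValueError("match rates range from $1 to $8 per $1 saved")
    return participant_savings * (1 + match_rate)

# A hypothetical participant who saves $500:
assert matched_total(500, 1) == 1000  # $1-for-$1 match doubles savings
assert matched_total(500, 8) == 4500  # $8-for-$1 match yields $4,500
```

As the example suggests, the same deposit can yield very different asset-purchase power depending on the match rate a given project offers.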
According to agency officials, under the program, grantees are required to assist participants in the demonstration project in obtaining the skills necessary to achieve economic self-sufficiency. Examples of such activities include providing financial education and credit counseling. As illustrated in table 4, from 2010 through 2014, according to agency officials, the Administration for Children and Families awarded 269 Assets for Independence grants and over $62 million to a number of organizations, including nonprofits, state or local governments, tribal governments, and community development financial institutions, to name a few. Table 4 also shows the program budget for the Administration for Children and Families since fiscal year 2010. According to Administration for Children and Families data through fiscal year 2010, more than 90 percent of Assets for Independence projects allowed participants to pursue homeownership as an asset goal, while more than 80 percent allowed participants to pursue postsecondary education or training and business capitalization as asset goals. Nearly one-third of projects allowed participants to transfer account savings to the individual development account of a spouse or dependent. In 2011, the Administration for Children and Families began a random assignment evaluation of the Assets for Independence program at two grantee sites. This evaluation will assess the impact of Assets for Independence program participation on savings, savings patterns, and asset purchases by lower-income individuals and families. It builds on the previous quasi-experimental evaluation and studies of other non-Assets for Independence funded individual development account projects.
The 2008 evaluation used data from the early to mid-2000s and found that Assets for Independence program participants were 35 percent more likely to become homeowners, 84 percent more likely to become business owners, and nearly twice as likely to pursue post-secondary education or training compared with a corresponding national sample of nonparticipants eligible for the program. According to the Administration for Children and Families, the random assignment evaluation will further understanding of the program's overall impact on early participant outcomes. The evaluation team completed participant enrollment and baseline data collection in July 2014 and expects to release its final report in early 2016. The Department of Housing and Urban Development (HUD) awards competitive grants to public housing agencies for the administration of programs that encourage residents of public housing to attain self-sufficiency, such as the Family Self Sufficiency program. The program funds coordinators who help participants achieve employment goals and accumulate assets. Through coordination and linkage to local service providers, program participants receive training and counseling that enable them to increase their earned income and reduce their need for rental assistance. Under the Family Self Sufficiency program, escrow accounts are used as incentives to increase work effort and earnings. Specifically, when participants have to pay a higher rent after their earned income increases, the public housing agency calculates an escrow credit that is deposited each month into an interest-bearing account (see fig. 4). Families that successfully complete their contract for the Family Self Sufficiency program receive their accrued escrow funds. According to HUD officials, over 72,000 households participated in the program in fiscal year 2014, and 4,382 families successfully completed their Family Self Sufficiency contracts.
The appropriation for the Family Self Sufficiency program was $75 million in each of fiscal years 2013, 2014, and 2015, and HUD is requesting $85 million for 2016. In September 2004, HUD commissioned a 5-year prospective study of the Family Self Sufficiency program, focusing on programs serving Housing Choice Voucher recipients. The study provided a final assessment of the experiences of a representative sample of Family Self Sufficiency participants that enrolled in 2005 and 2006. The study also examined the relationship between participants' characteristics, Family Self Sufficiency programmatic features, and program outcomes. The study found that after 4 years in the Family Self Sufficiency program, 24 percent of the study participants completed program requirements and graduated. When the study ended, 37 percent had left the program without graduating and 39 percent were still enrolled in the Family Self Sufficiency program. Program graduates were more likely to be employed than participants who did not graduate or who were still enrolled in the program. Program graduates also had higher incomes, both when they enrolled in the Family Self Sufficiency program and when they completed the program, than participants with other outcomes. Staying employed and increasing their earned incomes helped graduates accumulate substantial savings in the Family Self Sufficiency escrow account. The average escrow account balance was $5,294 for program graduates, representing about 27 percent of their average household income at the time of program enrollment. Recognizing that financial literacy or education is only part of the solution to help lower-income households achieve financial security, state and local government agencies and nonprofits have developed a variety of programs targeting specific populations or serving a specific savings purpose.
These include retirement savings programs, prize-linked savings programs, short-term emergency savings programs, and various asset-building (or asset accumulation) programs that promote savings for specific goals (e.g., post-secondary education, home ownership, or business ownership). Several states have created prize-linked savings programs to offer a new way to help lower-income and other individuals to save. As of 2015, Michigan, Nebraska, North Carolina, and Washington have created Save to Win programs, in which participating credit unions offer their members the opportunity to open prize-linked savings accounts. A Save to Win account is designed as a 12-month share certificate that allows for unlimited deposits throughout the year. Savers are required to deposit only $25 to open an account and earn raffle tickets for every additional $25 deposited in the account, with a cap on the number of entries per month. The cap helps ensure that individuals who cannot save as much still have opportunities to win. Raffle tickets qualify participants for the chance to win monthly cash prizes and grand prizes at the end of the year. According to the Doorways to Dreams Fund, since the launch of Save to Win in 2009, over 50,000 accounts have been opened with over $94 million in savings in Michigan. Moreover, the nonprofit reported that among surveyed Save to Win accountholders, between 62 percent and 81 percent were financially vulnerable. Michigan passed a law in 2003 to allow credit unions to offer "savings promotion raffles." The other three states also have modified their laws to allow credit unions to offer prize-linked accounts, savings promotion raffles, or other promotional contests of chance. On the federal level, in 2014, Congress passed the American Savings Promotion Act to provide for the use of savings promotion raffle products by financial institutions to encourage savings.
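The raffle-entry rule described above, one ticket per $25 deposited subject to a monthly cap, can be sketched as a simple calculation. The sketch is illustrative only: the cap value of 10 entries and the deposit amounts are hypothetical, since the program description above does not specify the cap.

```python
# Illustrative only: Save to Win raffle entries earned in a month.
# One ticket per $25 deposited, limited by a monthly cap on entries.
# The cap of 10 below is a hypothetical value for illustration.

def raffle_tickets(monthly_deposit, per_ticket=25, monthly_cap=10):
    return min(monthly_deposit // per_ticket, monthly_cap)

assert raffle_tickets(100) == 4     # $100 deposited earns 4 tickets
assert raffle_tickets(1000) == 10   # the cap limits large savers
```

The second assertion shows how the cap works in practice: beyond a point, additional deposits earn no additional entries, preserving smaller savers' odds of winning.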
According to some nonprofit officials and academics we interviewed, federal and state savings programs primarily promote and provide tax incentives for retirement savings, which tend to benefit higher-income households more than lower-income households. At the same time, they told us that short-term or emergency saving tends to be more important for lower-income households, because it helps households meet their immediate needs, for example, to cover unexpected car repairs, medical expenses, or temporary unemployment. Some government entities and nonprofit organizations have developed pilot and other programs to promote short-term emergency savings. According to program officials, the AutoSave Pilot was a joint initiative of two nonprofits, New America and MDRC. Program officials told us that the pilot tested the feasibility of establishing automatic savings programs that use direct deposit to divert a small amount of after-tax wages into savings accounts. Automatic savings programs would be especially valuable for individuals who have few liquid assets and limited access to low-cost credit products, because these savings can be used as a personal safety net in the event of unanticipated expenses or a sudden decrease in income, according to New America and MDRC. AutoSave investigated two different program designs. The first program design, implemented in fall 2009, was the "opt-in program," where employees signed up for the AutoSave savings program through their employer. Employees who did not have a savings account were able to open one through a bank or credit union that partnered with the workplace site. With this version of the program design, only the savings deposits were automatic. The opt-in AutoSave program design had been offered to employees at eight workplace sites, ranging in size between 13 and 25,000 employees.
The pilot had a special focus on generating participation among low- to moderate-income workers, although all employees were eligible to sign up. Overall participation rates ranged between 2 percent and 62 percent of all employees at these targeted workplaces, with most sites ranging between 9 percent and 25 percent. In sites where wages were tracked, the majority of participants had wage levels within the lower three-fifths of the wage distribution in their workplace. These participation results were consistent with expectations for the opt-in program design. The second investigated program design was an "opt-out program," where all employees would have been automatically enrolled in the AutoSave savings program unless they elected not to be in the program. With this design, both enrollment and deposits would have been automatic. Opt-out enrollment was not actually piloted because MDRC's assessment of the legal and operational risks concluded that while this approach would presumably be legal in some states, a lack of regulations or case law addressing the model meant that employers would be taking undue risks to implement the opt-out model. In the absence of such guidance or precedent, MDRC determined that it is not currently feasible to implement the opt-out enrollment program design (even by using a payroll card with an attached savings product). According to an official at the City of San Francisco, the EARN Starter Account program, developed by the California nonprofit EARN, seeks to increase the supply of starter account products that allow unbanked lower-income households to begin saving. Program participants must have incomes at or below 50 percent of their area median income. The EARN Starter Account is an online program that rewards participants for consistently saving at least $20 each month for 6 months, and participants earn a maximum of $55 in matched funds over the 6-month period, according to the nonprofit.
Participants link their existing savings accounts to the EARN Starter Account platform to facilitate savings. If participants make any withdrawals over the 6 months, matched funds earned will be forfeited and the account may be closed. At the end of 6 months, participants can claim the funds. Participants can continue using the EARN website for another 6 months. Since 2002, 6,000 EARN clients have saved $6.8 million, and 83 percent of participants have continued to save after their formal program ended, according to a qualitative study by the nonprofit. The study found that consistent savers also demonstrated a shift toward future orientation. More specifically, these program participants were planning to acquire more assets (such as further education, the purchase of a home, or founding or developing a small business). EARN is partnering with the City and County of San Francisco to bring the Starter Account platform to low-income San Franciscans, beginning with a pilot program for public housing residents. Some government entities and nonprofit organizations have developed programs to encourage lower-income households to save part of their income tax refund. According to officials at the Center for Social Development at Washington University in St. Louis, Refund to Savings is a pilot program intended to help lower-income households build savings and increase financial security. Launched in 2012, the pilot is a collaboration among Washington University in St. Louis, Duke University, and Intuit Inc. According to program officials, the program is implemented through a version of Intuit's tax preparation software that is available for free to lower-income taxpayers and reaches approximately 1.2 million households. The goal of the initiative is to design and test a low-cost, scalable intervention that can lead tax filers to save part of their tax refund. Under the pilot, Intuit users are assigned randomly to a treatment or control group.
The treatment group uses a version of the software in which they receive prompts to motivate them to save part of their tax refund as emergency savings. In 2013, the pilot tested automatic refund splitting in which the software automatically put part of the tax filer's refund in a savings account or savings bond. According to officials at the Center for Social Development, tax filers who did not want to split their refund had to select an "I don't need to save" button to opt out. In 2013, almost 900,000 low- and moderate-income tax filers participated in the pilot, depositing approximately $5.9 million more in savings accounts than they would have without the intervention, according to the Center for Social Development officials. Data generated by program use and refund allocation behavior will be evaluated to determine whether the prompts, the saving opportunity, or both increased saving levels compared with the control group, according to the Center for Social Development at Washington University. According to officials at MDRC and New York City's Office of Financial Empowerment, the SaveUSA program (formerly $aveNYC) is administered by the Mayor's Fund to Advance New York City and the New York City Center for Economic Opportunity and offers lower-income households an incentive to save a portion of their tax refund. According to program officials, SaveUSA was launched in 2011 in four cities (New York City, Tulsa, San Antonio, and Newark). Participants open a SaveUSA account when they file their taxes. They are required to save at least $200 of their refund for a year, and they earn 50 cents for every dollar saved, with a maximum match of $500. According to an April 2014 study of the program by MDRC, nearly two-thirds of SaveUSA participants in 2011 (the program's first year) qualified for the savings match and received, on average, $191 in savings match dollars.
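The SaveUSA match terms lend themselves to a short worked example. The sketch below is illustrative only; it encodes just the terms stated above (a $200 minimum held for a year, 50 cents per dollar saved, and a $500 maximum match), and the dollar amounts tested are hypothetical.

```python
# Illustrative only: the SaveUSA savings match as described in the
# report: 50 cents per dollar saved, $200 minimum held for a year,
# $500 maximum match.

def saveusa_match(amount_saved, held_one_year=True):
    if not held_one_year or amount_saved < 200:
        return 0.0
    return min(0.50 * amount_saved, 500.0)

assert saveusa_match(200) == 100.0   # minimum pledge earns a $100 match
assert saveusa_match(1000) == 500.0  # $1,000 saved reaches the cap
assert saveusa_match(150) == 0.0     # below the $200 minimum, no match
```

Note how the cap implies that savings above $1,000 earn no additional match, which is consistent with the study's observation that participants pledging the $1,000 maximum were among the most likely to receive a match.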
In the second program year, 39 percent of the 2011 SaveUSA sample participated again, and about 27 percent received a savings match, according to the MDRC study. The MDRC study found that, on average, SaveUSA group members received $96 in savings match dollars in the program's second year. According to the MDRC study, those who received a savings match in both years appear to have been in a better position to save: they tended to be older, were more likely to have higher incomes, and were more likely to have pledged the maximum amount allowed of $1,000, compared with other SaveUSA group members. In contrast, SaveUSA group members who had especially low incomes or who pledged the minimum amount of $200 were the least likely to ever receive a savings match. Asset building is based on strategies that help households build financial or tangible assets, such as savings, a home, or a business. A number of nonprofits, states, and municipalities have developed programs to help lower-income households build assets through the use of individual development accounts or child development accounts. As discussed, the Office of Community Services at the Administration for Children and Families administers the Assets for Independence program, which awards grants to community-based entities, nonprofits, and government agencies to implement special-purpose, matched savings accounts or individual development accounts. The length of the program, amount of matching dollars provided, allowable uses for savings, and other rules may differ from one program to the next. An example of an individual development account program is the Assets for All Alliance program.
According to officials at the Opportunity Fund, this individual development account was launched in 1999 by the Opportunity Fund (formerly Lenders for Community Development) in collaboration with the Silicon Valley Community Foundation Center for Venture Philanthropy and several community partners, including a number of nonprofit social service agencies. According to a study published by the Silicon Valley Community Foundation and Lenders for Community Development, the Assets for All Alliance individual development account program is intended to help lower-income families “learn financial management skills and build assets that would help them permanently improve their economic situation.” Savings by program participants are “matched by philanthropic and government dollars on a two-to-one basis” according to the study. According to the Opportunity Fund, this program has resulted in 1,028 individual development accounts and $2.77 million in total savings towards asset goals. According to officials at the Center for Social Development at Washington University in St. Louis, child development accounts are savings or investment accounts opened as early as birth. The goal of child development accounts is to promote saving and asset building for lifelong development. Child development accounts assets may be used for postsecondary education, homeownership, or enterprise development. In many cases, public and private entities deposit funds into these accounts to supplement savings for the child. Although the goal of child development accounts is long-term savings accumulation, programs differ in design and features. According to the Center for Social Development, enrollment in some states, including Maine and Nevada, is automatic unless parents opt out (opt-out programs). Some other child development accounts are voluntary or opt-in, meaning that parents must enroll their children, often by opening a 529 or bank savings account.
For example, the Nevada College Kick Start program automatically deposits $50 into a 529 account for every public school kindergartner in the state according to officials at the Center for Social Development. In 2014, 70,000 students had been enrolled in Kick Start. Officials told us that Maine’s College Challenge is the only statewide universal child development account program in the nation, benefiting all children born in Maine (more than 40,000 children in 2014). The program automatically deposits $500 into a 529 account on the child’s behalf. Both Nevada and Maine’s 529 plans offer savings matches to state residents according to officials at the Center for Social Development. Other examples of child development accounts include those developed by national nonprofits including the Corporation for Enterprise Development and New America. According to New America, some municipalities also have launched their own child development account programs. For example, as New America reports, in San Francisco the Kindergarten to College program was launched in 2011 and opens accounts for every kindergartner in the city’s public schools. Lower-income households face a variety of challenges to saving. U.S. savings bonds continue to provide Americans, including those with lower incomes, with an affordable, safe, and convenient way to save and invest. However, when Treasury ended the over-the-counter sale of paper savings bonds through financial institutions in January 2012, it created challenges for some bond buyers who had to rely on accessing TreasuryDirect to purchase savings bonds online. Treasury has taken steps to develop a more flexible and responsive Internet-based system than TreasuryDirect, but the TRIM system is in the early stages of development. Treasury intends for these changes to address some of the existing access and other challenges associated with TreasuryDirect.
Currently, the Tax Time Savings Bond program provides the only means by which individuals can purchase paper savings bonds, but the program’s future is uncertain, because Treasury may discontinue the program when TRIM is implemented. However, the TRIM system still will require Internet access by computer or mobile device, and Tax Time program users who lack Internet access may not be able to save by buying savings bonds at tax time if the program is discontinued. How the benefits and costs of the Tax Time program would compare when Treasury implements TRIM is not known—in part because Treasury generally has considered the program’s benefits but not the program’s costs. Without considering both, Treasury cannot make a fully informed decision on whether to discontinue the Tax Time program when an electronic alternative is available. To help ensure that Treasury can make a fully informed decision on whether to discontinue the Tax Time Savings Bond program as it implements the TRIM system, GAO recommends that the Secretary of the Treasury consider the benefits and costs of the Tax Time program in future decisions on whether to extend the program. We provided a draft of this report to Treasury and IRS for review and comment. In their comment letter, which is reprinted in appendix II, Treasury agreed with GAO’s recommendation and stated that it would conduct a cost-benefit analysis of the Tax Time Savings Bonds program. Treasury also provided technical comments, which we incorporated, as appropriate. We also provided draft excerpts for technical comment to federal and other agencies—including the Departments of Health and Human Services and Housing and Urban Development, FDIC, New York City’s Office of Financial Empowerment, and San Francisco Office of Financial Empowerment—and nonprofit organizations, including the Center for Social Development at Washington University, Doorways to Dreams Fund, MDRC, and Opportunity Fund. 
These third parties provided technical comments, which we have incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to Treasury, IRS, FDIC, HUD, and the Department of Health and Human Services, interested congressional committees, members, and others. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Cindy Brown Barnes at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our review examines (1) the effect of Treasury’s elimination of paper U.S. savings bonds, including on the savings bond program and bond purchases; (2) the extent to which Treasury’s Tax Time Savings Bond program has promoted savings, particularly by lower-income households, and Treasury’s plans for the program’s future; and (3) the extent to which lower-income households are saving using financial products, and some of the government and nonprofit programs developed to promote savings by lower-income households. For all three objectives, we analyzed various data. First, we used data issued by the Department of the Treasury (Treasury) on the amount of U.S. savings bonds purchased from 2001 through 2013 to analyze trends in savings bond purchases over this period, including the effect of the Treasury’s elimination of paper savings bonds on savings bond purchases. Second, we used data from the triennial Survey of Consumer Finances (SCF) issued by the Board of Governors of the Federal Reserve System for survey years 2001, 2004, 2007, 2010, and 2013 to estimate the percentage of U.S. 
households holding financial assets, including U.S. savings bonds; the median value of such financial assets held by U.S. households, and the median income of households. The survey data include information on families’ balance sheets, pensions, income, investments, and demographic characteristics. We analyzed the U.S. population data as a whole and also considered the bottom two income quintiles separately. We chose these survey years because they provide a period of about 10 years prior to and 1 year after the discontinuation of the sale of paper savings bonds at financial institutions. SCF data are based on probability samples and estimates are formed using the appropriate estimation weights provided with the survey’s data. Because each of these samples follows a probability procedure based on random selections, they represent only one of a large number of samples that could have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (i.e., plus or minus 2.5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Unless otherwise noted, all percentage estimates have 95 percent confidence intervals that are within 5 percentage points of the estimate itself, and all numerical estimates other than percentages have 95 percent confidence intervals that are within 5 percent of the estimate itself. We also reviewed documentation on the SCF, such as codebooks and Federal Reserve bulletins.
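The confidence-interval construction described above can be illustrated with a standard normal-approximation interval for an estimated proportion. This is a generic sketch for illustration only; GAO's actual SCF estimates rely on the survey's own estimation weights and replication procedures, and the sample size used below is hypothetical.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation 95 percent confidence interval for an
    estimated proportion p_hat from an effective sample size n.
    A generic illustration, not the SCF estimation procedure."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    margin = z * se
    return (p_hat - margin, p_hat + margin)

# With an estimate of 50 percent and a hypothetical effective sample of
# 1,600 observations, the margin is about plus or minus 2.5 percentage
# points, matching the precision quoted above.
low, high = proportion_ci(0.50, 1600)
```

The interpretation mirrors the text: if many such samples were drawn, about 95 percent of the intervals constructed this way would contain the actual population value.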
Third, we used aggregated data provided by the Internal Revenue Service (IRS) on income tax filers who used at least part of their tax refunds to buy paper savings bonds from 2010 through 2013 to analyze the number of tax filers who bought paper savings bonds, including those with adjusted gross incomes of $25,000 or below—the lowest income category reported in the data—and the amount of savings bonds they purchased. We also used the aggregated data to analyze refund options used by the tax filers (such as paper check and paper savings bond, direct deposit and paper savings bond, or paper savings bond only) and demographic information about the filers, such as their age. We assessed the reliability of the data we used by interviewing knowledgeable officials, and conducting manual testing on relevant data fields, such as the number of tax filers who participated in the program and amounts of savings bonds purchased. We found the data we reviewed to be sufficiently reliable for the purposes of our analyses. To examine the effect of Treasury’s elimination of paper U.S. savings bonds, including on the savings bond program and bond purchases, we reviewed data on savings bond purchases from 2001 through 2013, and analyzed trends in purchases for this time period, including before and after paper savings bonds were discontinued in January 2012. Specifically, to analyze long-term trends in savings bond purchases and more recent trends since the end of paper sales, we estimated two econometric models. The first model was based on a portfolio choice model, and modeled purchases as a function of interest rates, inflation, and economy-wide risk (using the Chicago Board Options Exchange’s Volatility Index). In other words, consumers may make savings bond purchase decisions the same way they make other decisions about financial portfolio allocation, based on risk and return considerations.
The second model was based on linear and quadratic time trends to capture the long-term reduction in purchases. We included monthly seasonal effects in both models. The drop in savings bond purchases after the end of paper sales was consistent with long-term trends and generally not statistically significant. The drop in purchases after the end of paper sales also was consistent with the reduction in interest rates at the time (the coefficient on interest rates was highly statistically significant). As with any econometric model, our approach is imperfect and is unlikely to include all factors that influence savings bond purchases. Additional data over time might provide different or more definitive estimates of the change in purchases associated with the end of paper sales. We reviewed Federal Register releases on TreasuryDirect and its replacement system, the Treasury Retail Investment Manager; Treasury documentation, including a description of data in the monthly statement of public debt, estimates of cost savings from eliminating paper savings bonds, press releases, Bureau of the Fiscal Service’s President’s budgets and capital investment plans; and TreasuryDirect materials. To assess the reliability of Treasury’s cost estimates, we interviewed Treasury officials on how the estimates were determined and reported. We also interviewed Treasury officials to discuss a range of issues related to its savings bond program, including the benefits and costs of eliminating paper savings bonds, concerns raised about TreasuryDirect, and plans for replacing TreasuryDirect. To determine the extent to which Treasury’s Tax Time Savings Bond program has promoted savings, we analyzed IRS data on the use of the program by tax filers for tax years 2010 through 2013 (as discussed in detail above). 
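The trend specification in the second model above (an intercept plus linear and quadratic time terms, fitted by least squares) can be sketched in a few lines. This is an illustrative reconstruction only: it omits the monthly seasonal effects, runs on a synthetic series rather than actual savings bond purchase data, and the helper function is an assumption, not the estimation software GAO used.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination. A pure-Python sketch, not the
    estimation software used for the actual models."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)] for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic monthly series with a known declining trend: the fitted
# coefficients recover the intercept, linear, and quadratic terms.
X = [[1.0, t, t * t] for t in range(24)]          # columns: 1, t, t^2
y = [100.0 - 2.0 * t + 0.05 * t * t for t in range(24)]
intercept, linear, quadratic = ols(X, y)
```

In the full specification described above, the monthly seasonal effects would add eleven dummy columns to the design matrix; the solver itself is unchanged.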
We also reviewed IRS documentation on the program, such as descriptions on how the program operates and answers to common questions about the program, and studies on the Tax Time program published by academics and nonprofit organizations focusing on social or economic policy. We interviewed Treasury and IRS officials about the Tax Time program’s operations, benefits, costs, and future in terms of its expiration. To better understand the extent to which this program can help lower-income households to save, we interviewed nonprofit organizations focusing on social or economic policy, including Doorways to Dreams Fund, New America, Corporation for Enterprise Development, and MDRC. To examine the extent to which lower-income households are saving using financial products, we examined SCF data for survey years 2001, 2004, 2007, 2010, and 2013 (as described in greater detail above). Based on these data, we defined lower-income households as the lower two distributions (or quintiles) of households in the United States. To describe some of the government and nonprofit programs developed to promote savings by lower-income households, we conducted Internet and literature searches for research, initiatives, testimonies, and studies on savings programs targeting lower-income households and reviewed materials on such programs. We specifically reviewed select federal, state, local, and nonprofit programs targeting either long-term (such as retirement or asset accumulation) or short-term savings goals for lower-income households. For the purposes of this report, we focused on programs designed to promote savings using financial assets, such as bank accounts, bonds, mutual funds, and retirement accounts. We generally excluded programs designed to promote savings through home ownership or other nonfinancial assets.
For federal programs, we focused our review on federal agencies involved in promoting financial literacy that are members of the multiagency Financial Literacy and Education Commission (FLEC). We interviewed six FLEC member agencies—the Departments of the Treasury, Housing and Urban Development, Health and Human Services, and Education; the Federal Deposit Insurance Corporation; and the Bureau of Consumer Financial Protection, also known as the Consumer Financial Protection Bureau—about their savings programs and reviewed related documentation. We also reviewed select state, local, and nonprofit programs targeting lower-income households. We selected these programs based on our research of savings programs for lower-income households and interviews with FLEC members and other stakeholders. For the programs we selected, we interviewed relevant program officials and reviewed documentation on the programs, including information on participation in the programs where available. Finally, we interviewed other relevant stakeholders, including Doorways to Dreams Fund, New America, Corporation for Enterprise Development, MDRC, Consumers for Paper Options, and academics. We conducted this performance audit from August 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Richard Tsuhara (Assistant Director), Tarek Mahmassani (Analyst-in-Charge), Emily R. Chalmers, Michael Gitner, Michael Hoffman, Wati Kadzai, Robert Letzler, Marc Molino, Patricia Moye, and Andrew Stavisky made significant contributions to this report.
U.S. savings bonds provide Americans with an affordable way to save. In 2012, Treasury stopped selling paper savings bonds at banks as part of its broader electronic initiative. As a result, savings bonds generally must be purchased through TreasuryDirect®. The one exception is the Tax Time Savings Bond program, established in 2010 to enable taxpayers to use their tax refund to buy paper savings bonds. The program is one way for lower-income families to save. You requested that GAO examine Treasury's savings bond program, including the accessibility of TreasuryDirect, and other savings programs. This report examines (1) the effect of Treasury's elimination of paper U.S. savings bonds on the program and bond purchases, (2) the extent to which the Tax Time Savings Bond program has promoted savings by lower-income households and Treasury's future plans for the program, and (3) the extent to which lower-income households are saving and programs developed by federal agencies and others. GAO reviewed agency rules and other documents; analyzed Treasury, Internal Revenue Service, and other data, in part using economic models; and interviewed federal, state, and nonprofit entities and experts involved in savings programs. The Department of the Treasury's (Treasury) elimination of paper savings bonds made buying bonds more difficult for some customers, but GAO's analyses generally indicated that the decline in bond purchases after the change was not statistically significant. Treasury eliminated paper savings bonds in January 2012, after a long-term decline in savings bond purchases. It estimated the change would save about $70 million in program costs from 2012 through 2016. Except for the Tax Time Savings Bond program, customers who want to buy savings bonds must use TreasuryDirect—an online system that requires users to have Internet access and a bank account. Customers without both, which likely includes lower-income households, face challenges accessing TreasuryDirect. 
Treasury is in the early stages of developing a new system, the Treasury Retail Investment Manager (TRIM), to make it easier to buy savings bonds, such as by using a mobile device, which often is the primary means of accessing the Internet for many lower-income households. A little more than one-third of the users of Treasury's Tax Time Savings Bond program—the only way to purchase paper bonds—were lower-income tax filers (filers with an adjusted gross income of $25,000 or less), but the program's future is uncertain. Since 2010, tax filers have been able to use a tax form to buy paper savings bonds with their tax refund. For tax years 2010 through 2013, about 142,000 tax filers (less than 1 percent of tax filers receiving refunds) used at least part of their tax refund to buy nearly $72.5 million in savings bonds. Of these filers, about 55,000 had incomes of $25,000 or less and bought about $13.7 million in savings bonds, or about $250, on average, per filer each year. Treasury has been extending the program partly because the amount of bonds purchased and participation levels indicate that the program is providing benefits, but it generally has not considered the program's costs. In May 2015, Treasury officials told GAO that they plan to continue to extend the program until TRIM can provide a suitable electronic alternative. Because TRIM will require Internet access by computer or mobile device, Tax Time program users without such access may no longer be able to save by buying bonds with their refunds after TRIM is implemented. In prior work on agency stewardship of public funds, GAO reported that agencies, as a best practice, should consider both benefits and costs in considering alternatives related to program decisions. Without considering both, Treasury cannot make a fully informed decision on whether to discontinue the Tax Time program when an electronic alternative is available. 
On the basis of GAO's analysis of data from the most recent Survey of Consumer Finances conducted in 2013, the median value of financial assets held by the bottom fifth of income earners (whose median annual income was $14,200) was $550. Given the limited savings of lower-income households and savings challenges faced by such households, a number of federal agencies have developed programs to promote savings. For example, Treasury's myRA®, which is in a soft-launch phase, promotes retirement savings for individuals without access to employer-sponsored retirement plans. State, local, and nonprofit agencies also have initiated programs that promote savings for retirement, child development, or emergencies and generally target lower-income households. Eligibility requirements and participation vary by program. GAO recommends that as Treasury implements the TRIM system, it consider the benefits and costs of the Tax Time program in future decisions on whether to extend the program. Treasury agreed with GAO's recommendation.
Founded in 1863 by congressional charter, the National Academy of Sciences has a long history of serving as a scientific adviser. The Academy, which has a total membership of 4,800, also serves as an honorary institution to recognize distinguished members of the scientific community. Among other activities, the Academy also organizes symposiums, manages scientific databases, and serves as a clearinghouse for research. Throughout this report we use “Academy” to refer to the constituent members of the Academy complex: the National Academy of Sciences, the National Academy of Engineering, the Institute of Medicine, and the National Research Council. In 1916, the Academy formed the National Research Council to broaden its committee membership to include non-Academy members and to oversee the Academy’s advisory activities. In a 1998 report, the Academy reported that committee membership consists of 55 percent from academia, 24 percent from industry, 9 percent from nonprofit institutions, and 12 percent from different levels of government. The National Academy of Engineering and the Institute of Medicine were established in 1964 and 1970, respectively, to recognize distinguished members in these fields and to provide more specialized advice in these areas. The Academy is organized by study units, which produce reports in the following topic areas: transportation, health and safety, science, commerce, natural resources, defense, space, education, and international affairs. (See table 1.) The Academy issued 1,331 committee reports from January 1993 to June 1997 and had an average annual budget of about $150 million. During those 5 years, most of its work was performed for the federal government, which provided the Academy with 87 percent of its revenue. (See fig. 1.)
The Departments of Transportation, Energy, Health and Human Services, and the Army; the National Science Foundation; and the National Aeronautics and Space Administration have been its largest federal sponsors—amounting to 75 percent of the total revenues for 1993 to 1997. The Academy also advises state governments, private industry, and nonprofit institutions, but that work is limited by internal Academy guidelines. In addition, the Academy may use its endowment to fund self-initiated studies deemed critical by the Academy leadership. The Federal Advisory Committee Act Amendments of 1997 addressed concerns over the openness of the Academy’s procedures. Prior to the amendments, the Academy’s committee procedures included some openness. A 1975 policy document stated that committee meetings where data would be gathered were to be open to the public with advance notice given. Announcements of scheduled open meetings were published monthly in a newsletter by the Academy’s Office of Information. However, the study unit heads determined which projects would have scheduled and announced open meetings. Executive meetings and working meetings, referred to as deliberative sessions, would “not normally be open to the public.” A 1995 proposed change to the Academy’s public access policy, among other things, further defined the types of meetings that could be closed and applied the policy uniformly across the Academy’s major study units. This proposal was under consideration at the time the amendments were enacted. According to Academy officials, the Academy had three main concerns that caused it to seek relief from the Federal Advisory Committee Act: (1) the erosion of independence if the Academy was under the influence of sponsoring agencies, (2) the inability to recruit committee members if committee deliberations were open to the public, and (3) the burden of administrative requirements that would render the Academy unresponsive to the government. 
Paramount among these concerns was the Academy’s independence from the influence of sponsoring agencies. Under the act, a federal government officer or employee would have to chair or be present at every advisory committee meeting. This individual would have the power to adjourn the meeting “whenever he determines it to be in the public’s interest.” According to Academy officials, the Academy could lose sole authority in appointing committee members, and the Academy and committee members could be under pressure from a sponsoring agency to change a report during the drafting process. Under the act and GSA regulations, advisory committee meetings, including deliberative meetings, would be open to the public. However, the Academy opposed opening its deliberative meetings to the public because it believed that such an action could stifle open debate and criticism of ideas in those meetings. The Academy was also concerned that the independence of the committees’ deliberations and the Academy’s review process would be jeopardized by attempts of sponsors and special interest groups to bring political pressure to bear. Academy officials said that closed committee deliberations are fundamental for ensuring the independence of their studies and the scientific quality of their reports. Moreover, they stated, if draft reports were available to the public, the first draft would become the enduring impressions of a report, regardless of any changes made later. In addition, the President of the Academy said that it could be more difficult to recruit potential committee members in the future if deliberations were open to the public. We surveyed 12 current and former Academy committee members to obtain their views on whether or not they would serve on Academy committees if the deliberative meetings were open to the public. 
Two members said that they would serve, six said that their decision to serve would depend on the topic of study, and three said that they probably would not serve on a committee whose deliberations were open to the public. One member did not respond directly to the question but said that closed deliberative sessions encourage greater candor among the members. In addition, these members generally echoed the Academy officials’ views regarding the need for closed deliberative sessions. The three members who responded that they would probably not serve said that open deliberations could seriously jeopardize the quality of the reports. Two members said that Academy study committees might be difficult to staff if deliberations were open to the public. Eleven out of 12 respondents indicated that the Academy should retain the ability to close committee deliberations. Finally, the Academy was concerned that the amount of time and expense associated with implementing the act would render the Academy unresponsive to the government in general and to the Congress in particular. Of particular concern was the requirement under the act that each committee have a charter. Since the Academy is not a federal agency, the federal agency sponsoring the Academy study would prepare the charter and submit it for review by GSA. Academy officials estimated that the process would take between 6 and 12 months, on average, a length of time that an Academy official said would render the Academy unresponsive to the government’s requests for information. In addition, most of the Academy’s studies are funded by multiple agencies. Thus, the Academy was not certain which agency would be responsible for fulfilling the administrative requirements of the act. Academy officials also pointed out that applying the act to the Academy would more than double the number of committee charters that GSA would have to review each year. 
Prior to the enactment of the amendments of 1997, the Academy established a number of procedures for committee work that are intended to help ensure the integrity and the openness of committee activities. The procedures consist of the following phases: project formulation, committee selection, committee work, report review, and report release and dissemination. (See fig. 2.) According to Academy officials, the whole process can take anywhere from 4 months to 2 years (usually from 6 to 18 months). During the project formulation phase, the Academy assigns the project to a study unit. According to Academy guidance, the study unit is responsible for defining the scope of the project, leaving room for the committee to further define the study, and for developing the initial cost estimates. After the study unit approves the project, the Academy gives final approval for the project. Then a contract, grant, or cooperative agreement (depending on the sponsor) is drawn up and entered into with the agency. A permanent Academy staff member, referred to as the responsible staff officer, is assigned to organize and support the project. The staff officer is responsible for ensuring that institutional procedures and practices are followed throughout the study and that the study stays on schedule and within budget. According to the Academy’s documents, each project is conducted by a committee of subject matter experts who serve without compensation. Committee selection starts with suggestions from the sponsoring organization, members of the Academy, outside professional colleagues, and Academy staff. After review of the suggestions, the President of the Academy selects committee candidates. The Academy’s procedures require that each committee candidate fill out a form on his or her potential conflicts of interest. 
The form consists of five questions asking for the member’s relevant organizational affiliations, financial interests, research support, government service, and public statements and positions concerning the committee’s topic. We reviewed a sample (about 10 percent) of the 331 current committees to determine whether the forms had been filed and found that the Academy’s procedures were generally being followed. Under Academy procedures, 5 of the 30 committees selected were not required to file the conflict-of-interest forms because they were not subject to section 15 for various reasons. Of the remaining 25 committees, we found that almost all members (316 out of 341 or 93 percent) had forms on file. At the first meeting of every committee, the Academy’s procedures require a confidential discussion among committee members and project staff of potential conflicts of interest. If a conflict of interest is identified, the committee member may be asked to resign from the committee. If the Academy determines that the conflict is unavoidable, the Academy will make the conflict public and will retain the committee member. After this meeting, the executive director of the relevant study unit makes a tentative determination of whether the committee as constituted is composed of individuals with the requisite expertise to address the task and whether the points of view of individual members are adequately balanced such that the committee as a whole can address its charge objectively. Final approval of the committee membership, however, rests with the President of the Academy. Committees meet in data-gathering sessions that are generally open to the public and in deliberative sessions that are closed to the public. 
The Academy defines a data-gathering meeting as “any meeting of a committee at which anyone other than committee members or officials, agents, or employees of the institution is present, whether in person or by telephone or audio or video teleconference.” Committees also meet in closed sessions to discuss financial and personnel matters, to discuss conclusions, and to draft the committee report. The Academy’s responsible staff officer facilitates the meetings. In order to identify the number of open versus closed meetings, we reviewed the meetings held from December 1997 through June 1998 for the 331 committees. Since we found that most meetings were a combination of open and closed sessions, we identified the number of open and closed hours during these meetings. Of the 331 committees, 129 either had no meetings or were not subject to section 15 for various reasons. The remaining 202 committees held a total of 353 meetings. For 300 (or 85 percent) of those meetings, at least some portion of the meeting was closed. For 139 of the 300 meetings where complete information about open and closed sessions was available, we found that slightly less than half (45 percent) of the time was spent in closed sessions. For 251 of these meetings, we determined the reasons for the closed sessions: 61 meetings included discussions of potential bias of committee members, 36 meetings included discussions of the committee’s composition and balance, and 201 meetings involved drafting the committee report. We also found that seven data-gathering meetings were closed under Freedom of Information Act exemptions. Every report is the collective product of the committee. According to the Academy’s documents, a committee member may draft a chapter or portion of a report, but the author of record is the entire committee.
The Academy’s responsible staff officer can help with many aspects of developing the report, including researching, integrating portions of the report written by committee members, and ensuring consistent style and format, but the conclusions and recommendations are attributed to the committee as a whole. Throughout its work, the committee is subject to the oversight of the Academy’s supervisory boards and commissions. The next step in the process is an independent review of the draft by individuals whose review comments are provided anonymously to the study committee. This process allows the Academy to exercise internal oversight and provides an opportunity for the study committee to obtain reactions from a diverse group of people with broad technical and policy expertise in the areas addressed by the report. The anonymity of the reviewers is intended to encourage individual reviewers to express their views freely and to permit the study committee to evaluate each comment on its merits without regard for the reviewer’s position or status. The Academy Report Review Committee, composed of members of the Academy, oversees the report review process and appoints either a monitor and/or coordinator depending on the type of study. Liaisons are appointed from the Academy’s membership to the major study unit for the purpose of suggesting qualified reviewers. The monitor and/or coordinator either participates in the selection of reviewers or checks the list of reviewers for their relevant expertise or particular perspective. Typically six to eight reviewers are appointed, although more are acceptable for a major policy report. According to the Academy’s report review guidelines, the review of a manuscript takes about 10 weeks, on average, from when a report is sent to the reviewers until final approval; however, the time ranges from a few days to many months. 
The reviewers look at whether the report addresses the committee’s charge, whether the findings are supported by the evidence given, whether the exposition of the report is effective, and whether the tone of the report is impartial. All study committee members are given copies of the reviewers’ comments (with the names of the reviewers removed from the comments) in time to prepare or approve a response to the comments. After the comments have been submitted, the monitor and/or coordinator may prepare a brief summary of the key review issues for the study committee. The study committee may provide a written explanation of how each comment was handled, or it may address the key review issues. The monitor and/or coordinator judges the adequacy of the committee’s responses and may require a resubmission to the reviewers. The Academy’s procedures state that no report is to be released to the project sponsor or the public, and no findings or recommendations are to be disclosed until this review process has been satisfactorily completed. All committee members are contacted to ensure that they approve the report before it is published or released. The Report Review Committee chair provides the final approval of the reports. The Academy is responsible for the report’s dissemination plan. The report sponsor may also be involved in developing the plan. Targeted groups are selected to ensure that the report reaches all appropriate audiences. The report may also be made available via the National Academy Press web site. Briefings are often arranged for interested groups, and reports may become topics of future Academy workshops or symposia. The Academy developed a web site for current project information to increase public access as a result of section 15, added by the Federal Advisory Committee Act Amendments. However, we found that this information is not always posted in a timely manner and is sometimes incomplete.
Among other things, section 15 generally requires the Academy to make names and brief biographies of committee members public, post notice of open meetings, make available written materials presented to the committee, post summaries of meetings that are not data-gathering meetings, make copies of the final committee report available to the public, and make available the names of the principal non-Academy reviewers of the draft report. The committee members’ names and biographies, notice of open meetings, and summary minutes of closed meetings are available on the web site of current projects. Copies of reports, which include the names of the external reviewers of the reports, are available on the National Academy Press web site. According to Academy officials, written materials presented to the committees by individuals who are not agents, officials, or employees of the Academy are available for inspection at the Academy’s public reading rooms in Washington, D.C. We reviewed a sample of the 331 current projects to determine whether the database included the names of the committee members. Five of the 30 projects that we reviewed were not required by the act or by the Academy to post committee membership for various reasons. We found that 24 of the 25 projects had the names of the members available on the web site. Five projects had only the names of the members and no biographical statements. However, these five committees were not required to post biographies because the committees were created prior to the act. The Academy’s guidelines state that the summary minutes for closed meetings should be posted to the web site, preferably within 10 business days of the meeting. In order to determine whether this requirement was met by the Academy, we reviewed data on the closed meetings for the 202 committees that held meetings from December 17, 1997, through June 17, 1998. 
As previously stated, these committees held a total of 353 meetings, with 300 of those meetings having some portion closed. We found that 270 (or 90 percent) had the minutes of the closed sessions on the web site. The minutes of these closed sessions had an average posting time of 13.5 calendar days, which is within the Academy’s guideline of 10 business days (10 business days span about 14 calendar days). However, the amount of time to post the minutes ranged from 0 to 124 calendar days, with 26 percent of the minutes posted 15 or more days after the meeting. At the time of our audit, spot checks of information posted on the web site were conducted at least once a week for missing or improper information. However, we found that for a total of 63 out of 331 current committees (about 19 percent) there were chronological or typographical errors or missing data in the information provided on one or more of the meetings. For example, the listings of the meetings for three projects were out of order. One meeting had two different dates listed on the project web site. For 34 projects, the agenda or summary minutes were not posted. The Academy has already taken action to correct this information or has adequately explained these specific problems. In addition, since we conducted our audit, the Academy created a records officer position responsible for checking the timeliness and accuracy of data on a daily basis. Through the web site, the Academy also elicits public comments about committee composition. The public is allowed 20 calendar days to comment about the proposed committee members and/or suggest new members. Since the web site’s inception in December 1997 through June 1998, the Academy received a total of 120 comments. Only 13 of those comments concerned committee composition—all concerning four committees: those on smokeless and black powder, illegal drug policy, repetitive motion and muscular disorders, and cancer research among minorities.
Of these comments, six included suggestions for additional committee members, three provided general or positive comments about committee membership, three included negative comments regarding specific committee members (one of the three members later was removed from consideration), and two comments discussed the length of the public comment period. Prior to the passage of the Federal Advisory Committee Act Amendments, the Academy had efforts under way to increase public access to and participation in the Academy’s committee work. After the amendments were passed, the Academy’s web site of current projects increased public access to project information. However, the Academy had to quickly create and operationalize its web site of current projects in December 1997 and additional enhancements are under consideration pursuant to suggestions received from the public. Thus, it will be some time before an assessment can be made of the extent to which the general public uses the web site. Regarding the untimely posting of data and incomplete data, the Academy’s new procedures should address our concerns. However, the availability of timely information on current projects depends on the effective implementation of the new procedures. We provided a draft of this report to the National Academy of Sciences and GSA for their review and comment. In general, the Academy said that the report was accurate and balanced. Regarding our finding that the Academy’s data available on the web site are not always timely or complete, the Academy believed that it was important to note that in no case was there a violation of the requirements of section 15. We agree. Since section 15 does not provide a time frame for posting summaries of closed meetings, we noted instances in which data were untimely by the Academy’s own guidelines and instances in which the information provided had some errors. The full text of the Academy’s comments appears in appendix I. GSA had no comments on the report. 
To determine why the Academy sought relief from the act, we interviewed Academy officials and reviewed their statements to the Congress. We also talked with several committee members to obtain their views on the act—the Academy selected the committee members, with input from us. Each Academy study unit and the Presidents of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine selected members to respond to our questions. The Academy narrowed this sample, and each candidate was asked whether he or she would participate in the survey. The sample included past and current committee members and chairs of committees from across the country and from private industry, academia, and not-for-profit institutions. To identify the Academy’s procedures for providing advice to the federal government, we interviewed Academy officials. We also reviewed the Academy’s internal documents outlining the procedures, the treasurer’s reports, and annual reports. To determine whether the Academy had implemented section 15, we interviewed Academy officials and reviewed official documents. We also reviewed the Academy’s web site information, including committee meeting agendas for both open and closed portions of meetings and the content of the closed meetings as described in summary minutes, for Academy projects that were active as of June 17, 1998. To make this determination, we calculated the hours of open and closed meetings, calculated the time in which summary minutes were posted for closed meetings, and categorized the reasons for closed meetings. Each step was verified for accuracy and completeness. Only meetings that occurred in the 6-month period from December 17, 1997, to June 17, 1998, were analyzed. Of the 331 current Academy projects, 69 had no meetings within the stated 6-month time frame, and 24 had no meetings whatsoever. 
Thirty-six projects were standing committees that were not subject to section 15 and were therefore excluded from our analyses. None of the current project information from the web site was independently verified against the Academy’s original records. For the analysis of open versus closed hours, we considered only the 139 meetings with both open and closed hours. For the closed meetings, we looked only at those meetings with summary minutes or with posted agendas. Of the 300 possible meetings with some closed sessions, 294 were analyzed to determine the reasons for the closed sessions. To measure the Academy’s compliance with the section 15 requirement to make committee members’ names and biographies available for public comment, we reviewed a random sample of 30 current projects’ potential bias and conflict-of-interest forms to determine whether they were present in the Academy’s files and signed by the committee members. We compared the Academy’s files to the committee’s printed lists from the Academy’s current projects web site. Projects that did not have meetings within the December 17, 1997, to June 17, 1998, time frame were not sampled. We conducted our work from May through November 1998 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report for 10 days. At that time, we will send copies of this report to the President of the National Academy of Sciences and the Administrator of the General Services Administration. We will also make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions concerning this report. Major contributors to this report were Diane B. Raynes, Gregory M. Hanna, Lynn M. Musser, and Robin M. Nazzaro.
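The posting-time check described in the methodology above (summary minutes measured against the Academy's 10-business-day guideline) can be sketched as a short script. The meeting and posting dates below are hypothetical, not actual Academy records:

```python
from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    """Count weekdays (Mon-Fri) strictly after `start`, up to and including `end`."""
    days = 0
    current = start
    while current < end:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days += 1
    return days

# Hypothetical closed-meeting dates and the dates their minutes were posted.
meetings = [
    (date(1998, 1, 5), date(1998, 1, 16)),   # 11 calendar days later
    (date(1998, 2, 2), date(1998, 2, 16)),   # 14 calendar days later
]

for held, posted in meetings:
    calendar_days = (posted - held).days
    business_days = business_days_between(held, posted)
    within_guideline = business_days <= 10
    print(f"calendar: {calendar_days}, business: {business_days}, "
          f"within 10-business-day guideline: {within_guideline}")
```

Because 10 business days span 14 calendar days, a 13.5-calendar-day average posting time can still fall within the Academy's guideline, as the report notes.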
Pursuant to a congressional request, GAO reviewed the committee process at the National Academy of Sciences, focusing on the: (1) reasons the Academy sought relief from the Federal Advisory Committee Act; (2) Academy's committee procedures for providing advice to the federal government; and (3) Academy's implementation of the new requirements for providing information to the public. GAO noted that: (1) according to Academy officials, the Academy sought relief from the act for a number of reasons; (2) central to its concerns was the Academy's ability to maintain sole authority in appointing committee members and to conduct its work independently from sponsoring agencies' influence; (3) in addition, the Academy opposed opening deliberative meetings on the grounds that such an action could stifle open debate and could impact the Academy's ability to recruit committee members; (4) finally, the Academy was concerned about the amount of time and expense to perform the administrative requirements of the act, which could render the Academy unresponsive to the government; (5) prior to the enactment of the amendments, the Academy developed a number of procedures governing its committees' activities, including project formulation, committee selection, committee work, report review, and the release and dissemination of reports; (6) according to Academy officials, these procedures are intended to help ensure the integrity of advice provided to the federal government; (7) for example, committee selection includes procedures for identifying conflicts of interest and potential bias of committee members; (8) the committee work phase provides an opportunity for some public participation, and committee reports are reviewed by an Academy review committee before they are released to the sponsoring agency and the public; (9) in response to section 15, the Academy developed a web site to increase public access to current project information, however, GAO found that some descriptive 
information on current projects was not always posted in a timely manner and was not always complete; and (10) during this audit, the Academy addressed these problems and developed additional written guidelines regarding the posting of committee information as well as additional quality assurance procedures.
Mercury enters the environment through natural and man-made sources, including volcanoes, chemical manufacturing, and coal combustion, and poses ecological threats when it enters water bodies, where small aquatic organisms convert it into its highly toxic form—methylmercury. This form of mercury may then migrate up the food chain as predator species consume the smaller organisms. Through a process known as bioaccumulation, predator species may consume and store more mercury than they can metabolize or excrete. Fish contaminated with methylmercury may pose health threats to people that rely on fish as part of their diet. Mercury harms fetuses and can cause neurological disorders in children, including poor performance on behavioral tests, such as those measuring attention, motor and language skills, and visual-spatial abilities (such as drawing). The Food and Drug Administration (FDA) and EPA recommend that expectant or nursing mothers and young children avoid eating swordfish, king mackerel, shark, and tilefish and limit consumption of other potentially contaminated fish. These agencies also recommend checking local advisories about recreationally caught freshwater and saltwater fish. According to EPA, 45 states issued mercury advisories in 2003 (the most recent data available). According to the United Nations Environment Program, global mercury emissions are uncertain but fall within an estimated range of 4,850 to 8,267 tons per year. Of this total, EPA estimates that man-made sources in the United States emit about 115 tons per year, with about 48 tons emitted by power plants. Because mercury can circulate for long periods of time and be transported thousands of miles before it gets deposited, it is difficult to link mercury accumulation in the food chain with individual emission sources.
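The emission estimates above imply some simple shares, which can be checked with back-of-the-envelope arithmetic (illustrative only; the figures are the ranges as reported):

```python
# Emission estimates cited above: global range per the United Nations
# Environment Program; U.S. figures per EPA. All values in tons per year.
global_low, global_high = 4850, 8267   # global mercury emissions (range)
us_manmade = 115                        # U.S. man-made sources
us_power_plants = 48                    # U.S. coal-fired power plants

plant_share_of_us = us_power_plants / us_manmade
us_share_low = us_manmade / global_high
us_share_high = us_manmade / global_low

print(f"Power plants' share of U.S. man-made emissions: {plant_share_of_us:.0%}")
print(f"U.S. man-made share of global emissions: {us_share_low:.1%} to {us_share_high:.1%}")
```

By these figures, power plants account for roughly 42 percent of U.S. man-made mercury emissions, while U.S. man-made sources are only about 1.4 to 2.4 percent of the global total.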
The United States has 491 power plants that rely in whole or in part on coal for electricity generation, and these plants produced 52 percent of all electricity generated in 2004, according to DOE’s most recent data. These plants generally operate by burning coal in a boiler to convert water into steam, which in turn drives turbines that generate electricity. Figure 1 provides a general overview of a power plant’s layout. Power plants burn at least one of the three primary coal ranks—bituminous, subbituminous, and lignite—and plants may burn a blend of different coals, according to DOE. Of all coal burned by power plants in the United States in 2004, DOE estimates that about 46 percent was bituminous, 46 percent was subbituminous, and 8 percent was lignite. The amount of mercury in coal and the relative ease of its removal depend on a number of factors, including the geographic location where it was mined and chemical variation within and among coal ranks. Coal combustion releases other harmful air pollutants in addition to mercury, including sulfur dioxide and nitrogen oxides. EPA has regulated these pollutants since 1995 and 1996, respectively, through its program intended to control acid rain. In addition, the March 2005 interstate rule will require further cuts in these pollutants beginning in 2009. To comply with these and other regulations, the coal-fired power industry has installed a variety of technologies that, while intended to control nitrogen oxides, particulate matter, or sulfur dioxide, may also affect or enhance mercury capture. Examples of such technologies include selective catalytic reduction (SCR) for nitrogen oxides, electrostatic precipitators (used by about 80 percent of all facilities) and fabric filters (used by the remaining 20 percent) to control particulate matter, and wet or dry scrubbers to remove sulfur dioxide.
EPA estimates that power plants capture about 27 tons of mercury each year, primarily through the use of controls for other pollutants. In general, the exhaust from coal combustion (called flue gas) exits the boiler and may flow through a device intended to control nitrogen oxides before entering the particle control device and then through a scrubber prior to release from the smokestack. The combination of these devices in use at power plants differs greatly among facilities and is likely to change as a result of the interstate rule, which, according to EPA, will result in additional installations of equipment to control nitrogen oxides and sulfur dioxide. EPA believes that the steps power plants will take to control nitrogen oxides and sulfur dioxide under the interstate rule will enable them to meet the first phase mercury cap of 38 tons beginning in 2010. As noted above, EPA determined that mercury control technologies were not commercially available and that the agency could not reasonably impose requirements to use them in the near-term. Nonetheless, a number of mercury control technologies have been developed over the past several years as a result of public and private investments in research and development, and these technologies generally fall into the following categories: Sorbent (carbon-based, chemically enhanced carbon-based, and non-carbon based). This technology involves injecting a powdered substance (sorbent) into the flue gas that binds to mercury prior to collection in a particle control device. Regardless of the chemical composition of the sorbent, this technology involves adding a silo or other structure containing the sorbent and a system that injects the sorbent into ducts that carry the flue gas. Enhancements to existing controls for other pollutants to increase mercury capture. This class of technologies focuses on retrofitting existing controls for other pollutants to improve their ability to capture mercury. 
Examples of enhancements include adding sorbents to wet scrubbers used for sulfur dioxide removal or modifying selective catalytic reduction devices used to reduce nitrogen oxides. Multipollutant controls. This class of technologies is designed from the outset to simultaneously control or enhance the removal of multiple pollutants, such as mercury, nitrogen oxides, or sulfur dioxide. These technologies may also use sorbents. Oxidation technologies. This class includes methods, chemicals, or equipment designed to oxidize mercury into a form that is more readily captured. Other technologies. This category includes other technologies that capture mercury using approaches such as removing mercury from coal prior to combustion and fixed adsorption devices that rely on precious metals such as gold to separate mercury from flue gas. The intended location of these technologies in a power plant’s overall layout may vary. As shown in figure 2, some may be located between the boiler and the particulate matter collection device, while others may be located further downstream in a plant’s process. This figure also shows that some plants can either install sorbent injection upstream of the existing particulate matter removal device or downstream of the device using a supplemental filter to collect the spent sorbent, keeping it separate from the fly ash collected in the particulate matter collection device. The latter configuration may be relevant for those facilities that sell their fly ash as a raw material for use in other applications, such as cement manufacturing, because carbon-based sorbent can render fly ash unsuitable for some of these applications. According to EPA, power plants sell about 35 percent of their fly ash for use in other applications, with 15 percent going to uses, such as cement manufacturing, where carbon contamination could pose a problem. 
The Department of Energy’s (DOE) National Energy Technology Laboratory partners with the private sector to evaluate the use of mercury control technologies at power plants in tests lasting up to 5 months. The testing program focuses on mercury controls, such as sorbent injection, and ways to better and more consistently capture mercury with technologies for other pollutants. Participants in DOE’s program evaluate concepts in laboratories and develop promising technologies in progressively larger-scale applications, including actual power plants. The duration of the tests that have been completed has varied from several hours to 5 months, with most of the completed DOE-funded tests lasting between 1 week and several months. The most recent phase of DOE testing has focused on the longer-term performance of mercury control technologies. Appendix III provides more information on the DOE tests completed, ongoing, or planned as of February 2005. Power plants in the United States do not currently use mercury controls, but some technologies are available for purchase and have shown promising results in full-scale tests in power plants. These tests have shown that mercury controls known as sorbent technologies—which involve injection of a powdered material that binds to mercury in the plant’s exhaust—have shown the greatest effectiveness in removing mercury during tests at power plants. However, long-term test data are limited because most of these tests have lasted less than 3 months. According to all 40 survey respondents, coal-fired power plants were not, as of November 2004, using mercury controls, although several plants have subsequently announced plans to install them. The coal-fired power industry has not used mercury controls because, prior to EPA’s March 2005 rule, federal law had not required mercury emissions reductions at power plants. 
In fact, most of the power industry survey respondents (13 of 14) cited uncertainty about future regulations as one of the top three reasons for not installing mercury controls. Thus, in the absence of federal requirements to reduce mercury emissions, limited demand existed for mercury controls. We found that although some mercury controls, such as activated carbon injection, are currently available for purchase from vendors, perceptions about their availability vary widely among stakeholders, primarily because stakeholders do not consistently define “availability.” That is, some stakeholders believe that mercury controls become available when they have been demonstrated in long-term tests under normal commercial operations, rather than when they are available for purchase. Thus, some stakeholders’ views on availability reflect more of a judgment about the proven effectiveness of a control technology than their availability for purchase. In this context, we found that views regarding the availability of mercury controls generally varied by stakeholder group and by the type of control. A greater portion of the vendors described mercury controls as available than either of the other two groups we surveyed, with the power industry group citing these controls as available least frequently. As shown in figure 3, the stakeholders were overall most optimistic about the availability of activated carbon injection technologies, followed by multipollutant controls and enhancements to existing controls for other pollutants. Appendix IV provides more detailed information on stakeholder perceptions of the availability of mercury controls. In evaluating the availability of mercury controls prior to finalizing the March 2005 mercury rule, EPA found that mercury controls were available for purchase but concluded that they had not been sufficiently demonstrated in long-term tests, and therefore were not available for permanent installation at power plants before 2010. 
As a result, EPA set the 2010 mercury reduction targets at a level that power plants could achieve as a side benefit of using technologies for other pollutants that the agency expects many plants will install to comply with the interstate rule, and set more stringent limits for 2018. Thus, power plants will not need to install mercury-specific controls until well after 2010. According to an EPA white paper assessing test results as of February 2005, the agency expects that mercury control technologies will be available for commercial application on most, if not all, key combinations of coal type and control technology to provide mercury removal levels between 60 and 90 percent after 2010 and between 90 and 95 percent in the 2010-2015 time frame. Because mercury controls have not been permanently installed at power plants, the data on the performance of these technologies come from field tests. We obtained data from 29 completed field tests, including 13 which were part of DOE’s mercury control research and development program, and 16 other tests identified by survey respondents. Most of the available test data (21 of 29 tests) related to the effectiveness of sorbents. According to DOE and EPA, the tests have shown promising results, although the extent of mercury removal varies at each plant. Tests of varying duration have identified sorbent technologies as the most developed mercury controls, which show promising results in achieving high mercury reductions. For example, tests of activated carbon and chemically enhanced carbon-based sorbents at power plants using a variety of air pollution controls have shown average reductions of 30 to 95 percent overall, providing the following average mercury reductions for each coal type: 70-95 percent average removal on bituminous coals; 30-90 percent average removal on subbituminous coals; 63-70 percent average removal on lignite coals; and 94 percent removal on blends of bituminous/subbituminous coals. 
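The average-removal ranges reported above can be organized as a small lookup table (illustrative only; the values are transcribed from the test results cited above, and the helper function is hypothetical):

```python
# Average mercury-removal ranges (percent) reported in sorbent-injection
# field tests, keyed by coal type -- transcribed from the results above.
REMOVAL_RANGES = {
    "bituminous": (70, 95),
    "subbituminous": (30, 90),
    "lignite": (63, 70),
    "bituminous/subbituminous blend": (94, 94),
}

def removal_range(coal_type: str) -> str:
    """Hypothetical helper: format the reported removal range for a coal type."""
    low, high = REMOVAL_RANGES[coal_type]
    return f"{low}%" if low == high else f"{low}-{high}%"

for coal in REMOVAL_RANGES:
    print(f"{coal}: {removal_range(coal)} average removal")
```

As the surrounding text cautions, these are averages across tests, not guarantees for any particular plant, since site-specific factors strongly affect removal.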
As the scale and duration of testing have increased, researchers have gained a better understanding of site-specific variables that affect results, and more recent full-scale, monthlong tests, particularly those using chemically enhanced carbon-based sorbents, have shown sustained high removal rates. For example, a monthlong test conducted in 2004 showed that a chemically enhanced sorbent reduced mercury emissions from a primarily subbituminous blend of coal by 94 percent, and a monthlong test of another chemically enhanced sorbent at a different plant burning subbituminous coal achieved a 93 percent reduction. A number of the stakeholders we surveyed pointed out that the results of a particular test cannot be generalized or extrapolated to estimate potential reductions at other power plants because the reductions achieved during a test may have resulted in part from factors unique to that facility, such as its size, the type of boiler used, the temperature of its flue gas, or the combination of controls for other pollutants. For example, available data show that the extent of mercury reduction achieved by sorbent injection at facilities using electrostatic precipitators depends largely on the location of these devices at the plant. The location of an electrostatic precipitator in turn affects the temperatures of the flue gas entering the device, with more mercury captured at cooler temperatures. Thus, the results achieved at a particular plant may not serve as a reliable indicator of the performance of that control at all plants. DOE’s research and development program has funded tests of mercury controls on each coal type in light of its and EPA’s conclusions that the form of mercury emitted—which varies by coal type—and other chemical variations among coal types, such as chlorine content, can have an impact on a control’s removal effectiveness.
For example, lower removal rates in activated carbon injection tests have occurred primarily at plants burning low-rank coal or at plants with existing controls that are less conducive to mercury removal. One university-based researcher attributes the challenge of mercury reductions on lignite—a low-rank coal—to its chemical composition, but believes that chemically enhanced sorbents and special additives can improve the ability of the sorbent to bind to this form of mercury, thereby addressing this problem. The more recent mercury removal results we reviewed tended to support this view, as monthlong tests using chemically enhanced carbon-based sorbents achieved average reductions of 70 percent or greater on low-rank coals, including lignites, suggesting that this technology may achieve high-level mercury reductions from low-rank coals (see app. III for more information on these results). Since most of the field tests have focused on sorbent injection, fewer data are available on the performance of non-sorbent mercury controls, such as multipollutant controls, enhancements to existing controls, and mercury oxidation technologies. Results from 11 of the 19 tests of such controls were not yet available (9 of the tests were not planned to begin until after February 2005). The few available results show that average mercury removal achieved by multipollutant controls and enhancements has ranged from about 50 percent to 90 percent. The field tests of mercury oxidation technologies, multipollutant controls, enhancements, and other non-sorbent technologies, lasting several days to several months, have included all coal types, but most (7 of 10) to date have focused on bituminous coal. In addition, a future DOE project will fund a test of a multipollutant control on a plant burning subbituminous coal and three tests of mercury controls, including mercury oxidation and enhancements, on plants burning lignite coal.
As noted above, EPA determined as part of its March 2005 mercury rule that it could not reasonably impose requirements that would force the use of mercury-specific controls before 2010. Specifically, EPA believes that chemically enhanced carbon-based sorbents could reduce mercury emissions at a broad spectrum of plants but regards long-term testing as necessary in order to evaluate (1) the mercury removal performance of technologies when operated continuously for more than several months at a time; and (2) the impact that these controls have on a plant’s overall efficiency and operations. Furthermore, DOE officials have said that while sorbent injection holds much promise, it is unwise to depend solely on one approach for mercury control in part because the site-specific variables at each power plant affect the performance of mercury controls. DOE has concluded that it will be necessary to build a broad portfolio of mercury control options. Likewise, technical papers and presentations about the field tests by research and development participants express a high degree of confidence in the capability of sorbents, particularly chemically enhanced carbon-based sorbents, but also suggest the need for additional evaluation of the impact of these controls, if any, on the efficiency and reliability of power plants. For example, a paper written by a sorbent vendor conducting DOE-funded tests concluded that recent monthlong tests of chemically enhanced carbon-based sorbent injection have shown high mercury removal at plants that burn subbituminous coals, but also discussed concerns about the impact of long-term use of this control on a power plant’s operations. This vendor concluded that although these tests did not show any adverse effects resulting from the chemically enhanced carbon-based sorbent, concerns and issues surrounding the contamination of fly ash that can render it unsuitable for sale for certain applications have not yet been resolved.
With regard to potential adverse impacts at plants, no serious adverse effects have been associated with sorbent injection tests lasting up to 1 month in duration, according to EPA. To provide additional perspective on the expected long-term performance of mercury controls, we asked survey respondents to indicate whether they believed power plants could use mercury controls to achieve industrywide mercury reductions of 50, 70, or 90 percent by 2008. We also asked the respondents whether their perceptions would differ if the reductions were averaged across the industry (as in an emissions trading program) or if they were required at each plant. We found that many survey respondents (22 of the 38 answering this question) were confident in the ability of power plants to achieve a 50 percent reduction by 2008 regardless of whether the reductions were achieved at each plant or averaged across the industry. (EPA set the mercury emissions cap for 2010 based on a 50 percent reduction from the approximately 75 tons of mercury contained in the coal burned by power plants.) The stakeholders were progressively less confident in the ability of plants to achieve 70 and 90 percent reductions by 2008. For the 70 percent reduction scenario, stakeholders were more confident in the ability of plants to achieve this reduction averaged across the industry rather than at each plant; 16 stakeholders described themselves as confident or very confident in the ability of plants to achieve this level of reduction nationwide, while 21 described themselves as less confident or not at all confident. For the 90 percent scenario, the vast majority of the survey respondents (33 of 38 that answered this question) described themselves as not at all confident or less confident in the ability of plants to achieve this level of reduction nationwide by 2008. Appendix V summarizes the survey responses for each of the three scenarios.
Furthermore, we asked the 40 survey respondents to identify additional testing needed to assess the ability of mercury control technologies to effectively and reliably reduce mercury emissions by 70 percent. Most of the survey responses (40 of 45) showed that stakeholders believe that some additional testing is needed for at least one technology. For example, the 14 power industry respondents said that additional testing is needed for sorbent injection. In addition, 3 of the 4 carbon-based sorbent vendors answering this question, as well as 9 of the 12 researchers and government officials, believed that some additional testing is needed to show that carbon-based sorbent injection would reliably and effectively achieve mercury reductions of 70 percent. Three policy stakeholders representing the power industry believed that more tests are needed to evaluate factors such as the performance of controls on low-rank coals, the impact on small power plants, and the ability of plants to use mercury controls without compromising electricity generation. Several of the power industry respondents expressed concern about the potential for mercury controls to interfere with a plant’s overall efficiency or cause malfunctions, and a power industry representative pointed out that such disruptions are a concern because power plants cannot store electricity for use as a backup when they experience technical problems. Ongoing and planned long-term tests will provide important information on both the long-term performance of mercury controls and the effect, if any, that these controls have on the efficiency or reliability of power plants. In addition, several plants have recently announced plans to install mercury controls to comply with either state permit requirements or the terms of legal settlements.
For example, a power plant in New Mexico announced in March 2005 that it would install sorbent injection within the next 2 years to reduce mercury emissions as part of a settlement agreement with two environmental groups. A plant representative stated that while he believes sorbent technology “is not that advanced … it is advanced enough to use it to reduce mercury emissions” at the power plant. Another power plant currently under construction in Iowa has a state air pollution permit requiring the company to control mercury emissions and is installing sorbent injection technology. The company expects to reduce mercury emissions from subbituminous coal by 83 percent. Finally, under an agreement with the state of Wisconsin, a Michigan power plant owned by a Wisconsin-based company has begun to install a multipollutant control that will use sorbent injection to reduce mercury and other pollutants. The estimated costs to install and operate mercury controls vary greatly and depend on a number of site-specific factors, including the amount of sorbent used (if any), the ability of existing air pollution controls to remove mercury, and the type of coal burned. EPA and DOE have developed the most comprehensive estimates available for mercury controls based on modeling and data from a limited number of field tests, making them both preliminary and uncertain. These estimates, as well as other available estimates, focus on sorbent injection, the most developed mercury control technology. Estimated costs for sorbent injection vary greatly depending on whether facilities achieve mercury reduction targets by using this technology in combination with their existing air pollution control devices or instead add fabric filters to collect the spent sorbent. Regardless of the exact costs of the controls, most of the stakeholders we contacted generally expect the costs to decrease over time. 
The available cost estimates are projections based on a limited number of tests, primarily of activated carbon injection. The cost estimates we reviewed show that the total costs of installing and operating mercury controls vary depending on factors such as sorbent consumption, the ability of existing air pollution controls to remove mercury, and the type of coal burned. We discuss each of these factors in more detail below:

Sorbent consumption: The amount of sorbent that a facility needs to use greatly influences control cost estimates. According to DOE, sorbent consumption levels for activated carbon injection technology directly relate to the desired level of mercury control. Further, while increasing the amount of carbon injected increases mercury removal, the performance of the carbon eventually levels off, requiring increasingly greater amounts of carbon to achieve an incremental mercury reduction. For example, test data from a plant burning subbituminous coal show that more than twice as much sorbent would be needed to remove 60 percent of the mercury from the plant’s flue gas than to remove 50 percent. Therefore, the cost of the activated carbon can increase dramatically, depending on the desired level of mercury removal and the type of coal burned.

Other air pollution controls already installed: The air pollution controls already installed at a facility—especially fabric filters and electrostatic precipitators used for controlling particulate matter—can have a major effect on the cost of controlling mercury because some of these devices already remove varying amounts of mercury. For example, DOE’s tests have shown that fabric filters generally remove more mercury than electrostatic precipitators. Thus, facilities with fabric filters may already remove enough mercury to achieve a desired or required level of reduction.
However, plants that do not have an existing fabric filter and choose to install one may incur significant costs due to the high capital expense of these filters. Additionally, EPA believes that controls for other pollutants some plants will install to comply with the interstate rule—such as selective catalytic reduction to control nitrogen oxides and wet scrubbers to control sulfur dioxide—will result in further mercury capture. Therefore, the combination of other air pollution controls may reduce or in some cases eliminate the need for a plant to install mercury-specific controls to reduce its mercury emissions. As noted above, EPA based its mercury reduction goals for 2010 to 2018 on the level of control it expects plants will achieve with controls for these other pollutants.

Type of coal burned: According to EPA, the amount of mercury captured by a given control technology is generally higher for plants burning bituminous coals than for those burning subbituminous coals. This difference arises because the flue gas from bituminous coal contains higher levels of substances that facilitate mercury capture. Along these lines, DOE’s cost estimates assume that an electrostatic precipitator will capture 36 percent of mercury from plants that burn bituminous coal, but none of the mercury from plants that burn subbituminous coal. Thus, DOE estimated that mercury removal costs are higher for subbituminous-fired plants than bituminous-fired plants.

Most of the available cost estimates for mercury control focus on sorbent injection, the most developed technology. DOE and EPA have developed comprehensive cost estimates; however, they are preliminary and, in EPA’s case, based on model plants rather than actual power plants.
Further, while DOE developed its estimates from tests in power plants, the agency indicated that its mercury control costs may be off by as much as 30 percent in either direction because (1) the estimates were developed from a limited data set of relatively short-term tests and thus are highly uncertain, and (2) they are based on a number of assumptions that, if changed, would result in significantly different estimates. According to DOE, further testing of sorbent injection for a variety of coals is needed to accurately assess the costs of implementing the technology throughout the United States. In addition, EPA’s and DOE’s cost estimates were published in October and November 2003, respectively, and do not reflect the more recent test data. For example, more recent field tests with chemically enhanced sorbents have shown that these sorbents may be more efficient at removing mercury than the sorbents used in earlier tests. Thus, chemically enhanced sorbents may achieve a high level of mercury removal using less sorbent and without the high capital cost of installing a fabric filter. DOE expects to issue revised cost estimates which will reflect lower costs based on recent testing. As a result, the available cost estimates may not accurately reflect the costs that power plants would incur if they chose to install mercury controls. In addition, the two agencies’ cost estimates relied on different assumptions and are not directly comparable. Most notably, the two agencies based their cost estimates on plants of different size and made varying assumptions about the percentage of time that an average plant operates (called capacity factor). For example, EPA conducted its modeling for 100- and 975-megawatt plants, while DOE based its estimates on a 500-megawatt plant. As a result, EPA provided a wider range of cost estimates. 
Furthermore, EPA assumed a plant capacity factor of 65 percent, while DOE assumed an 80 percent capacity factor, which resulted in higher operating costs in the DOE estimates. Additionally, based on available data for plants with an existing electrostatic precipitator that burn bituminous coal, EPA’s modeling predicted the existing control equipment would achieve a 50 percent mercury removal without sorbent injection, while DOE assumed that this configuration would remove no more than 36 percent of mercury and that sorbent injection was needed even for achieving 50 percent mercury removal. Although the DOE and EPA estimates reflect different assumptions as discussed above, we are providing the two agencies’ cost estimates for achieving a 70 percent mercury reduction at a bituminous-fired coal power plant under two scenarios (using an existing electrostatic precipitator and installing a supplemental fabric filter) to provide a perspective on the costs power plants could incur to install sorbent injection technologies. For a 100-megawatt plant using an existing electrostatic precipitator, EPA estimated that capital costs would total $527,100 ($5.27 per kilowatt, 2003 dollars), and the operating and maintenance costs would total $531,820 annually for a plant operating at 65 percent capacity ($0.93 per megawatt-hour). Alternatively, if this plant were to install a supplemental fabric filter, the capital costs would increase to about $5.8 million ($57.73 per kilowatt) and the operating and maintenance costs would decrease to $171,959 annually ($0.30 per megawatt-hour). For a 500-megawatt plant using an existing electrostatic precipitator, DOE estimated the capital costs would total $984,000 ($1.97 per kilowatt), and the annual operating and maintenance costs would total about $3.4 million ($0.97 per megawatt-hour) for a plant operating at 80 percent capacity (2003 dollars). 
Alternatively, if this plant were to install a supplemental fabric filter, the capital costs would increase to about $28.3 million ($56.53 per kilowatt), and the operating and maintenance costs would decrease to about $2.6 million annually ($0.74 per megawatt-hour). For a 975-megawatt plant using an electrostatic precipitator, EPA estimated that capital costs would total about $2.4 million ($2.47 per kilowatt), and the operating and maintenance costs would be about $5.1 million annually for a plant operating at 65 percent capacity ($0.92 per megawatt-hour). Alternatively, if this plant were to install a supplemental fabric filter, the capital costs would increase to about $35.4 million ($36.32 per kilowatt), and the operating and maintenance costs would decrease to about $1.6 million annually ($0.30 per megawatt-hour). These data show that DOE estimated lower capital costs per unit of power generating capacity than EPA, while EPA estimated slightly lower operating and maintenance costs than DOE. This may result from the fact that EPA assumed higher rates of mercury removal with existing controls than DOE, as well as DOE’s use of a higher plant capacity factor than EPA. Appendix VI provides additional information on EPA’s and DOE’s cost estimates for sorbent injection control technologies. According to EPA, the costs of sorbent injection technologies to control mercury emissions are very small compared to other air pollution control equipment when other retrofits, such as the addition of fabric filters, are not required. EPA also reports that the fixed operating costs for these systems are also relatively low, stemming from the simplicity of the equipment. In EPA’s rulemaking documents, the agency said that in light of the more recent tests of chemically enhanced sorbents, their earlier estimates likely overstated the actual costs power plants would incur. DOE officials said they shared this view. 
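Because the per-unit figures in the EPA and DOE estimates above are derived from the reported dollar totals, plant sizes, and capacity factors, the underlying arithmetic can be made explicit. The following sketch reproduces a few of those figures; it is purely illustrative, uses only numbers stated in this report, and small differences reflect rounding in the published estimates.

```python
# Illustrative check of the per-unit cost figures in the EPA and DOE
# estimates above. Plant sizes, capacity factors, and dollar totals
# are as reported (2003 dollars); rounding in the published figures
# accounts for small differences.

HOURS_PER_YEAR = 8760

def dollars_per_kilowatt(capital_cost, plant_megawatts):
    """Capital cost per kilowatt of generating capacity."""
    return capital_cost / (plant_megawatts * 1000)

def dollars_per_megawatt_hour(annual_om_cost, plant_megawatts, capacity_factor):
    """Operating and maintenance cost per megawatt-hour generated."""
    annual_mwh = plant_megawatts * HOURS_PER_YEAR * capacity_factor
    return annual_om_cost / annual_mwh

# EPA: 100-megawatt plant with an existing electrostatic precipitator,
# 65 percent capacity factor.
print(round(dollars_per_kilowatt(527_100, 100), 2))             # ~5.27 $/kW
print(round(dollars_per_megawatt_hour(531_820, 100, 0.65), 2))  # ~0.93 $/MWh

# DOE: 500-megawatt plant with an existing electrostatic precipitator,
# 80 percent capacity factor.
print(round(dollars_per_kilowatt(984_000, 500), 2))                # ~1.97 $/kW
print(round(dollars_per_megawatt_hour(3_400_000, 500, 0.80), 2))   # ~0.97 $/MWh
```

The same two functions reproduce the other reported figures, such as the $57.73 per kilowatt for the 100-megawatt plant with a supplemental fabric filter ($5.8 million of capital cost spread over 100,000 kilowatts).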
EPA also estimated costs for multipollutant controls, including advanced dry scrubbers. Although these controls cost substantially more than sorbent injection, they would provide additional benefits by controlling other types of pollutants such as nitrogen oxides and sulfur dioxide. EPA regarded cost information for multipollutant controls as preliminary, because there had been limited commercial experience with these technologies in the United States. In part because the agency estimated a range of capital and operating costs for each scenario, EPA’s estimates of the cost of these technologies varied widely. For example, for advanced dry scrubbers, EPA estimated the capital costs as $115.46 to $243.08 per kilowatt, with costs per kilowatt generally higher for smaller plants. For 100-megawatt and 975-megawatt plants, capital costs could be as low as $16.2 million and as high as $168.7 million respectively. EPA estimated operating and maintenance costs for a 100-megawatt plant to be between $1.1 million and $1.3 million per year, assuming a plant capacity factor of 65 percent (or between $1.93 and $2.35 per megawatt-hour). For a 975-megawatt plant, operating and maintenance costs were estimated to be between $9.3 million and $37.5 million per year, assuming a plant capacity factor of 65 percent (or between $1.68 and $6.76 per megawatt-hour). In addition to the cost estimates from EPA and DOE, we surveyed technology vendors, representatives of coal-fired power plants, and researchers about the cost of these technologies. Seventeen of these stakeholders provided sorbent injection cost information, but these estimates were incomplete and not always comparable due to site-specific variations and differing assumptions. The vendors generally provided lower cost estimates than those provided by the power industry, while estimates provided by researchers had the broadest range.
EPA and DOE officials and other stakeholders identified relevant cost estimates compiled by other nongovernmental entities: Charles River Associates, an economics and business consulting firm, provided cost estimates for activated carbon sorbent injection in combination with an existing or supplemental fabric filter. Rather than presenting estimates of costs for particular plant sizes and mercury removal percentages, Charles River Associates provided formulas with variables for mercury removal and plant size. Using these formulas and a plant size of 500 megawatts, Charles River Associates’ analysis would generate estimates of total capital costs of about $749,278 for using sorbent injection with an existing fabric filter and about $20.6 million for sorbent injection and a supplemental fabric filter (1999 dollars). Operating and maintenance costs comprise a fixed cost based on plant size and a variable component that could be calculated for a range of mercury removal percentages. For example, a 90 percent mercury reduction using sorbent injection with an existing fabric filter for a bituminous coal-fired 500-megawatt plant operating at 80 percent capacity over the course of a year (7,008 hours) would cost $999,473 per year, or about $0.29 per megawatt-hour. A 90 percent reduction at the same size plant burning subbituminous coal would cost $1.3 million per year or about $0.38 per megawatt-hour. Annual operating and maintenance costs were about $75,000 higher for the configuration where a supplemental fabric filter was installed. In its modeling, Charles River Associates considered only sorbent injection technology with an existing or retrofitted fabric filter because the firm expects that this combination would have a lower cost per pound of mercury removed than sorbent injection alone. 
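The per-megawatt-hour figure reported for the Charles River Associates analysis follows directly from the annual total and the stated 7,008 operating hours. The sketch below redoes that arithmetic; the second, supplemental-filter figure is derived here from the approximate $75,000 annual increment stated above and is not itself a number published in the analysis.

```python
# Illustrative arithmetic behind the Charles River Associates figures
# above: a 500-megawatt bituminous-fired plant operating at 80 percent
# capacity runs the stated 7,008 hours per year.

PLANT_MW = 500
ANNUAL_HOURS = 7008  # 8,760 hours x 0.80 capacity factor
annual_mwh = PLANT_MW * ANNUAL_HOURS  # 3,504,000 MWh generated per year

# 90 percent mercury removal, sorbent injection with an existing fabric filter.
existing_filter_annual = 999_473
print(round(existing_filter_annual / annual_mwh, 2))  # ~0.29 $/MWh, as reported

# With a supplemental fabric filter, annual operating and maintenance costs
# were about $75,000 higher; the resulting per-MWh figure is derived here,
# not stated in the report.
supplemental_annual = existing_filter_annual + 75_000
print(round(supplemental_annual / annual_mwh, 2))  # ~0.31 $/MWh (derived)
```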
Charles River Associates’ operating and maintenance cost estimates for activated carbon injection alone are lower than the EPA and DOE estimates; however, the Charles River estimates reflect the assumption that plants already had a fabric filter, while EPA and DOE assumed plants already had an electrostatic precipitator.

MJ Bradley & Associates, an engineering and environmental consulting firm, summarized costs for other multipollutant controls that have undergone full-scale testing. One technology, which uses ozone to oxidize nitrogen oxide and mercury, has been estimated to remove over 90 percent of nitrogen oxide and mercury from a plant’s flue gas; it also controls sulfur dioxide. This technology is estimated to cost between $90 and $120 per kilowatt in capital costs and $1.70 to $2.37 per megawatt-hour in operating and maintenance costs. For a 500-megawatt plant operating at 80 percent capacity, this would equate to $45 million to $60 million in capital costs and $6.0 million to $8.3 million in annual operating and maintenance costs. MJ Bradley also estimated the costs of a system that removes sulfur dioxide and mercury and decomposes nitrogen oxide through a multi-stage oxidation, chemical, and filter process. The target mercury removal rate for this process is 85 to 98 percent, which MJ Bradley reports the manufacturer guarantees. The estimated capital cost of this process is between $110 and $140 per kilowatt, or $55 million to $70 million for a 500-megawatt plant. A downstream fabric filter is associated with this process to remove particulate matter, which could add an additional cost.

In considering the cost estimates, it is important to note that plants may identify and choose the most cost-effective option for complying with EPA’s mercury rule. The cost-effectiveness of a given mercury control will vary by facility, depending on site-specific factors, including the type and configuration of controls already installed.
Furthermore, the desired level of mercury control at a plant will affect its control costs and some plants may meet their mercury reduction goals by modifying existing air pollution control equipment, thereby negating the need for additional mercury controls. In cases where plants decide to install mercury controls, the desired control level will affect the cost-effectiveness of the various technologies. For example, sorbent injection with a downstream fabric filter may prove cost effective for facilities seeking a high level of reduction, but less cost effective for plants seeking lower level reductions because of the relatively high capital costs. In the example given above for a 70 percent mercury reduction at plants burning bituminous coal, based on annualized costs, EPA’s estimates suggest it is more cost-effective for both the 100- and 975-megawatt plants to achieve that reduction without installing a supplemental fabric filter; however, DOE’s estimates suggest it is more cost-effective for the 500-megawatt plant to install the supplemental filter when accounting for the loss of revenue and increased disposal costs plants could incur from not being able to sell their fly ash. Fly ash disposal plays a role in determining the most cost effective compliance option because the plants that sell their fly ash and choose to use carbon-based sorbents may lose revenue and face increased disposal costs if they can no longer sell their fly ash. According to EPA, power plants sell about 35 percent of their fly ash for use in other applications, with 15 percent going to uses, such as cement manufacturing, where carbon contamination could pose a problem. The presence of carbon-based sorbent in fly ash may render it unusable for such purposes, particularly as a cement substitute in making concrete. Therefore, in some cases, plants using carbon-based sorbent may not be able to sell their fly ash and instead have to pay for its disposal. 
Plants may mitigate this problem by installing sorbent injection downstream of the electrostatic precipitator. This would, however, require the plants to install a fabric filter to collect the spent sorbent. DOE estimated that this configuration may be a cost-effective method to achieve mercury reductions for plants that wish to continue selling their fly ash, but the high capital costs of installing a fabric filter may render this choice uneconomic for some facilities. However, based on more recent tests, EPA believes that chemically enhanced sorbents can be more efficient at achieving a high level of mercury removal and may not render fly ash unusable for other purposes. Therefore, the use of these sorbents might prevent a plant from having to install a fabric filter and allow it to continue selling fly ash. Regardless of the exact magnitude of costs, 22 of the 40 survey respondents, all of the 14 policy stakeholders we interviewed, EPA, and DOE expect mercury control costs to decrease over time. Stakeholders cited a number of reasons for this belief, including the presence of a mercury rule, the expected development of a market that would lead to competition and increased demand for technologies, and anticipated improvements in technology performance as a result of innovation and experience. According to EPA and DOE officials, the most recent test results of injected sorbent technologies suggest that the cost of using these technologies will be less than these agencies estimated in 2003, stemming from advances in the sorbents. Likewise, EPA’s economic impact analysis of the mercury rule reports that the actual cost of mercury control may be lower than currently projected, since the rule may lead to further development and innovation of these technologies, which would likely lower their cost over time.
In addition to the views of these stakeholders, experience with pollution control requirements under other air quality regulations also suggests that costs may decrease over time. While the factors affecting the cost of mercury control technology may or may not be analogous to those affecting technologies that control other regulated pollutants, an examination of the cost trends for other air pollution controls shows that costs have declined over time. For example, according to EPA, the acid rain sulfur dioxide trading program was shown in recent estimates to cost as much as 83 percent less than originally projected. Furthermore, studies conducted by other researchers demonstrate that costs of air pollution control technologies have declined. For example, research conducted by Carnegie Mellon University found that the capital cost of sulfur dioxide control technology for a coal-fired power plant decreased from approximately $250 to $130 per kilowatt of electricity generating capacity between 1976 and 1995 (1997 dollars). Similarly, case studies analyzed by the Northeast States for Coordinated Air Use Management (NESCAUM) found the total operating and maintenance costs of sulfur dioxide controls decreased about 80 percent between 1982 and 1997. NESCAUM also found a reduction in the capital cost of nitrogen oxide controls, which it attributed to improvements in operational efficiency. Because data on the performance of mercury controls stem from a limited number of tests rather than permanent installations at power plants, data on the long-term performance of these technologies are limited. Furthermore, while the available data show promising results, forecasting when power plants could rely on these technologies to achieve significant mercury reductions—such as by 2008 or later—involves professional judgment.
The judgment of the stakeholders we contacted varied substantially, with control vendors and some researchers expressing optimism about the potential for sorbent technologies to achieve substantial mercury reductions in the near term, while power industry stakeholders, DOE, and EPA highlighted the need for more long-term tests. Current and future DOE tests will enhance knowledge about these controls, especially on their effectiveness in removing mercury and the potential impacts they may have on plant operations. In addition, information from the power plants that plan to install mercury controls as part of settlement agreements or to meet state-level requirements could shed additional light on these issues. A number of factors complicate efforts to estimate the costs of installing mercury controls. For example, available data suggest that site-specific variables will dictate the level of expense that power plant owners and operators will incur should they install one of the available mercury control technologies. While even the current cost estimates for the most advanced of the technologies—sorbent injection—are highly uncertain for individual plants, many of the stakeholders we contacted expect these costs to decline. Further, past experience with other air pollution control regulations suggests that the costs of pollution controls decline over time due to technological improvements, the development of a market, and increased experience using the controls. Recent data already show a similar trend with respect to mercury controls. For example, EPA and DOE have stated that advanced sorbent technologies have the potential to achieve greater mercury removal at lower cost than previously estimated. 
Also, the emissions trading program established under EPA’s mercury rule gives industry flexibility in determining how it will comply with the control targets, enabling plants to choose the most cost-effective compliance option, such as installing controls, switching fuels, or purchasing emissions allowances. Finally, because the power industry must also further reduce its emissions of nitrogen oxide and sulfur dioxide to comply with the interstate rule, the power industry has the opportunity to cost-effectively address emissions of all three pollutants simultaneously. We provided a draft of this report to DOE and EPA for review and comment. DOE reviewed the report and said that it generally agreed with our findings. EPA’s Office of Air and Radiation and Office of Research and Development provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this letter earlier, we plan no further distribution until 15 days from the report date. At that time, we will send copies of the report to the EPA Administrator, DOE Secretary, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.

Congressional requesters asked us to (1) describe the use, availability, and effectiveness of technologies to reduce mercury emissions at power plants; and (2) identify the factors that influence the cost of these technologies and report on available cost estimates.
To respond to these objectives, we surveyed a nonprobability sample of 59 key stakeholders in three groups, including 22 mercury control technology vendors, 21 representatives of the coal-fired power industry, and 16 individual researchers and/or government officials. We supplemented and corroborated, to the extent possible, the survey information through structured interviews with 14 stakeholders who view the reduction of mercury emissions from a policy perspective, including senior staff at EPA’s Office of Policy Analysis and Review and DOE’s Office of Fossil Energy. Finally, we interviewed vendors and researchers of mercury emissions monitoring technology to obtain and analyze information on the availability and reliability of mercury monitoring devices. Our work dealt with (1) technologies or measures that are specifically intended to control mercury emissions and (2) modifications to existing controls for other pollutants (e.g., nitrogen oxides, particulate matter, or sulfur dioxide) that are specifically intended to enhance mercury removal. We did not assess the availability, use, cost, or effectiveness of controls for other pollutants that capture mercury as a side-benefit because EPA had already conducted an extensive analysis of that topic as part of the rule development process. As a result, our work addressed only technologies specifically intended to control mercury. We did not independently test these technologies. Lastly, we focused on technologies that had advanced to the field-test stage rather than on technologies in earlier stages of testing. Most of the test data we reviewed were from full-scale tests, but the field tests of less developed controls, such as some multipollutant controls, were not full-scale. In these cases, the data were obtained from slipstream tests at power plants, where segments, rather than the entire stream, of the flue gas were diverted for testing. 
We relied primarily on surveys to obtain current data and professional judgment on the status of mercury controls. We developed three different surveys, one for each stakeholder group, which requested information about the availability, use, effectiveness, and cost of mercury control technologies. The scope and nature of some questions varied between the three surveys in order to reflect the varying expertise of each stakeholder group. To the extent possible, we structured the questions to facilitate comparisons between the responses of each stakeholder group. We used this format because we expected researchers, government officials, and power industry respondents to possess broad knowledge about a portfolio of mercury controls while technology vendors would have extensive information about a limited number of controls, or those that they produce, develop or sell. The most significant difference between the three surveys was that we asked technology vendors to answer questions only about the control produced, developed, or sold by each vendor, whereas the questions for researchers, government officials, and power industry respondents were not limited to one mercury control. We developed the three surveys with survey specialists between July 2004 and October 2004. We took steps in the design, data collection, and analysis phases of the work to minimize nonsampling and data processing errors. We conducted pretests of the surveys, and staff involved in the evaluation and development of mercury control technologies within EPA’s Office of Research and Development and DOE’s Office of Fossil Energy also reviewed and commented on the three surveys. We made changes to the content and format of the final surveys based on the pretests, comments of EPA and DOE officials, and comments of our internal reviewer. We followed up with those that did not respond promptly to our surveys. 
We also independently verified the entry of all survey responses entered into an analysis database as well as all formulas used in the analyses. We mailed paper copies of the surveys to 59 stakeholders and received 45 surveys from 40 stakeholders (68 percent response rate), which included 14 representatives of coal-fired power plants, 12 researchers and government officials, and 14 technology vendors. Because we asked technology vendors to complete one survey for each mercury control technology that they develop, produce, or sell, the number of surveys exceeded the number of respondents—five of the 14 vendors responding to our survey submitted more than one survey. Upon receiving the surveys and reviewing the questions, four stakeholders (1 power industry representative, 1 vendor, and 2 researchers/government officials) informed us that they were unable to participate. Finally, we contacted each stakeholder who did not return a survey by the deadline several times, either via email, phone, or both. We developed separate nonprobability samples for each of the three groups we surveyed, identifying stakeholders based on the extent of their expertise and involvement with the research, development, and demonstration of mercury control technologies. To compile a list of mercury control technology vendors, we spoke with DOE staff overseeing the mercury technology demonstration program to identify companies that either manufacture a mercury control technology for coal-fired power plants or research these technologies to develop them commercially. Although we excluded from the technology vendors group any company or organization that conducts research solely for evaluative or academic reasons and lacks a significant financial interest in the performance of the technology, we did include these stakeholders in the researcher and government official group. 
Next, we spoke with DOE and mercury technology vendors and reviewed available documents to identify the stage of testing of each company’s product(s), and we included on our list the companies whose product(s) have undergone commercial demonstrations, full-scale field tests, pilot-scale tests, or slipstream tests. We then corroborated the list of mercury control technology vendors with the Institute of Clean Air Companies, the national trade organization for air pollution control vendors, to ensure the completeness of the list of mercury control vendors. Our survey of mercury control technology vendors included a representative from each of the 22 companies we identified as meeting these criteria. We identified an initial list of 21 representatives from the coal-fired power industry to participate in our survey based primarily on a list generated from Platts’ POWERdat database of the power generators who burned the most coal in calendar year 2002, which is the most recent year of available data. We determined that this database was sufficiently reliable for this purpose. We based our selection of stakeholders on the quantity of coal burned because it correlated more closely with mercury emissions than any other available variable. We included a representative from each of the 20 generators that burned the most coal in calendar year 2002, accounting for 60 percent of the coal burned for power generation in that year in the United States. One company from this list declined to participate in our survey. Therefore, we added the next-largest company on the list. This final group of 20 generators accounted for 59 percent of the coal burned for power generation in that year. Additionally, we added one company to our group of generators—resulting in a total of 21 generators surveyed—because it had begun a commercial demonstration of a mercury control technology.
Next, we corroborated our list of generators by asking representatives of the following organizations to identify contacts within the coal-fired power industry who would be knowledgeable of mercury control technologies: (1) three power companies that have actively participated in mercury control technology demonstrations; (2) the Edison Electric Institute, the trade association for electric utilities; and (3) the National Rural Electric Cooperative Association, which represents utilities serving rural communities. The power industry stakeholders identified by these three organizations all corresponded with those we had placed in the group of 21 generators. For the survey targeting researchers and government officials, we included senior agency staff involved in the evaluation and development of mercury control technologies within EPA’s Office of Research and Development and DOE’s National Energy Technology Laboratory, state government officials in states that initiated action to limit mercury emissions from power plants, and experts from companies and nonprofit organizations that do research on mercury control technologies. We coordinated with the State and Territorial Air Pollution Program Administrators/Association of Local Air Pollution Control Officials, the national association of state and local air pollution control agencies, to identify nine states that had initiated actions to reduce mercury emissions from power plants and the state officials that had been involved with research and development of mercury control technologies. After speaking with representatives from these states, we eliminated one of the states because the legislation did not specifically target mercury emissions. We spoke to representatives of the following eight states: Connecticut, Illinois, Iowa, Massachusetts, New Hampshire, New Jersey, North Carolina, and Wisconsin.
We recognized that the technology vendors and power industry respondents might have had concerns about disclosing sensitive or proprietary information. Therefore, although we have included a list of the survey respondents below, this report does not link individual survey responses to any particular technology vendor or representative of the coal-fired power industry. We mailed the survey to stakeholders on October 22, 2004, and asked to receive responses by November 8, 2004. Of the 59 stakeholders we contacted, the following 41 responded to our survey:

American Electric Power Company, Incorporated
Enerfab
Clean Air Technologies (CR Clean Air Technologies)
Hamon Research Cottrell
Illinois Environmental Protection Agency, Bureau of Air
New Hampshire Department of Environmental Services
New Jersey Department of Environmental Protection
North Carolina Division of Air Quality
Scottish Power Plc (known as PacifiCorp in the U.S.)
U.S. Department of Energy, National Energy Technology Laboratory
U.S. EPA, Office of Research and Development, Air Pollution Prevention and Control Division
Wisconsin Department of Natural Resources, Bureau of Air

We supplemented and corroborated, to the extent possible, the survey information with testimonial evidence. This included structured interviews with 14 policy stakeholders familiar with the policy implications of mercury control technology research, including senior staff at EPA’s Office of Policy Analysis and Review and DOE’s Office of Fossil Energy, state and local regulatory organizations, electric utility associations, and environmental organizations. We developed a nonprobability sample for the group of policy stakeholders. We worked with a survey expert to develop a set of structured interview questions about the availability, use, effectiveness, and cost of mercury control technologies. In order to minimize nonsampling error, we took steps to ensure that the questions were unambiguous, balanced, and clearly understandable.
The interview questions were similar to the survey questions, but tailored to reflect the policy expertise of the interview participants. For example, rather than asking interview participants to provide data on mercury technology demonstrations, we sought their views on the implications of mercury technology demonstrations for mercury policies. We conducted pretests of the structured interview, including one with an EPA official in the Office of Policy Analysis and Review. We made changes to the content and format of the final interview questions based on the pretests. We conducted the 14 structured interviews between November 2004 and December 2004 with stakeholders from the following 13 organizations:

Clean Air Task Force
Institute of Clean Air Companies
National Rural Electric Cooperative Association
Northeast States for Coordinated Air Use Management
Regional Air Pollution Control Agency
State and Territorial Air Pollution Program Administrators/Association of Local Air Pollution Control Officials
U.S. Department of Energy, Office of Fossil Energy
U.S. Environmental Protection Agency, Office of Air and Radiation, Office of Policy Analysis and Review
U.S. Environmental Protection Agency, Office of Air and Radiation, Office of Air Quality Planning and Standards

Finally, because of the important role monitoring data play in the regulation of air pollutants, we gathered and analyzed information on the availability and reliability of two kinds of mercury monitoring devices—sorbent traps and continuous emissions monitors—by conducting seven structured interviews with the technology vendors and researchers in the government and private sectors. We developed the list by consulting with EPA’s lead expert on mercury monitoring technology and then comparing it to the list of presenters at DOE’s Mercury Measurements Workshop, which was conducted in July 2004.
Because this list of monitoring technology vendors primarily represented one of the two advanced mercury monitors, we included an organization regarded as a major developer of the other mercury monitoring device. Finally, we also included researchers and government stakeholders with broad knowledge of the mercury monitoring industry. We could not interview all 18 stakeholders we identified for the sorbent trap and continuous emissions monitors because of time constraints. Therefore, we decided to (1) interview four researchers and government officials, (2) interview the major producer of sorbent traps, and (3) interview a random sample of the multiple vendors involved with the eight kinds of continuous emissions monitors. Within this last group, we compiled a list of 13 mercury monitoring vendors, which was then randomized by a senior GAO methodologist. We interviewed the first 3 stakeholders on the randomized list of 13 mercury monitoring vendors in order to include their knowledge and perspectives on the industry. We were not able to reach the sorbent trap producer for an interview. We based the questions for the monitoring interviews on those posed in the mercury control technology surveys, including the same concepts and emphasizing the availability and level of demonstration of monitoring technologies, and again took steps to minimize nonsampling errors. We conducted two pretests of the monitoring interviews. Finally, we corroborated the numerical values used in questions about the accuracy and reliability of mercury monitors with EPA’s mercury monitoring expert in the Office of Research and Development. We made changes to the content and format of the final interview questions based on the pretests and the EPA official’s comments. Lastly, we identified and reviewed governmental and nongovernmental reports estimating the cost of mercury control technologies. 
We identified two government cost reports—one from EPA and one from DOE—and four nongovernmental cost reports. We excluded two of the nongovernmental reports from our analysis because these reports addressed cost issues that were either too limited in scope or were not germane to our research objectives. We then reviewed the results of both government reports and two remaining nongovernmental reports as part of our technology cost analysis. We took several steps to assess the validity and reliability of computer data underlying the cost estimates in the EPA, DOE, and nongovernmental reports which were discussed in our findings, including reviewing the documentation and assumptions underlying EPA’s economic model and assessing the agency’s process for ensuring that the model data are sufficient, competent, and relevant. We determined that these four reports are sufficiently reliable for the purposes of this report. As part of our effort to consider data on mercury control demonstrations and costs, we assessed compliance with internal controls related to the availability of timely, relevant, and reliable information. We also obtained data on mercury emissions. Because the emissions data are used for background purposes only, we did not assess their reliability. We performed our work between May 2004 and May 2005 in accordance with generally accepted government auditing standards. This appendix provides information on technologies that facilities may use to monitor mercury emissions, including background information on monitoring technologies and requirements under EPA’s mercury rule, as well as on the availability and cost of different monitoring technologies. In addition to technologies that control emissions, those that monitor the amount of a pollutant emitted can play an equally important role in the success of an air quality rule’s implementation. 
For example, effective emissions monitoring assists facilities and regulators in assuring compliance with regulations. In some cases, monitoring data can also help facilities better understand the efficiency of their processes and identify ways to optimize their operations. Accurate emissions monitoring is particularly important for trading programs, such as that established by the mercury rule. According to EPA, the most widespread existing requirements for using advanced monitoring technologies stem from EPA’s Acid Rain program. Under the program, power plants have been allowed to buy and sell emissions allowances, but each facility must hold an allowance for each ton of sulfur dioxide it emitted in a given year; furthermore, facilities must continuously monitor their emissions. According to EPA, monitoring ensures that each allowance actually represents the appropriate amount of emissions, and that allowances generated by various sources are equivalent, instilling confidence in the program. Conversely, a study by the National Academy of Public Administration found that the lack of monitoring in other trading programs led to difficulty in ensuring the certainty of emissions reductions. EPA’s mercury rule requires mercury emissions monitoring and quarterly reporting of mercury emissions data. For plants that emit at least 29 pounds of mercury annually, EPA requires continuous emissions monitoring, while sources that emit less than this amount may instead conduct periodic testing—testing their emissions once or twice a year depending on their emissions level. According to EPA, the mercury emissions from sources exempt from continuous monitoring comprise approximately 5 percent of nationwide emissions. EPA estimates that the annual impact in monitoring costs for the entire industry will total $76.4 million. 
EPA expects that two technologies will be available to monitor mercury emissions continuously prior to the rule’s deadline. The rule requires continuous emissions monitoring for most facilities, using either a Continuous Emissions Monitoring System (CEMS) or a sorbent trap monitoring system, while facilities that emit low levels of mercury can conduct periodic monitoring using a testing protocol known as the Ontario-Hydro Method.

CEMS continuously measures pollutants released by a source, such as a coal-fired power plant. Some CEMSs extract a gas sample from a facility’s exhaust and transport it to a separate analyzer, while others allow effluent gas to enter a measurement cell inserted into a stack or duct. This allows for continuous, real-time emissions monitoring. EPA estimates that a unit’s annual CEMS operating, testing, and maintenance cost would be about $87,000, while a unit’s capital cost would be about $70,000.

Sorbent trap monitoring systems collect a mercury sample by passing flue gas through a mercury trapping medium, such as an activated carbon tube. This sample is periodically removed and sent to a lab for analysis. The rule requires that the average measurement of two separate sorbent trap readings be reported. Sorbent trap monitoring allows for continuous monitoring, but is not considered a real-time method. EPA estimates that a unit’s annual sorbent trap operating and testing costs would be about $113,000, while a unit’s capital cost would be about $20,000.

The Ontario-Hydro Method, a periodic testing method, involves manually extracting a sample of flue gas from a coal-fired plant’s stack or duct, usually over a period of a few hours, which is then analyzed in a laboratory. EPA estimates this method would cost about $12,500 a year for two tests and about $7,000 for one test.

All of the stakeholders we asked about the availability of CEMS or sorbent trap systems said that the technologies were available for purchase.
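EPA's per-unit estimates above imply a trade-off between capital and operating costs for the two continuous monitoring options. A minimal sketch of that trade-off follows; the capital and annual figures are EPA's estimates, but the five-year horizon and zero discount rate are illustrative assumptions, not figures from the rule.

```python
# Rough side-by-side of EPA's estimated per-unit monitoring costs.
# Capital and annual operating figures are EPA's estimates; the
# 5-year horizon and zero discount rate are illustrative assumptions.
def total_cost(capital, annual_operating, years=5):
    """Undiscounted total cost of a monitoring option over a horizon."""
    return capital + annual_operating * years

cems_total = total_cost(capital=70_000, annual_operating=87_000)
trap_total = total_cost(capital=20_000, annual_operating=113_000)

print(f"CEMS, 5-year total:         ${cems_total:,}")    # $505,000
print(f"Sorbent trap, 5-year total: ${trap_total:,}")    # $585,000
```

Under these illustrative assumptions, the sorbent trap system's lower capital cost is outweighed within a few years by its higher annual operating cost, which is the trade-off EPA's per-unit figures imply.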
Furthermore, an EPA monitoring technology expert and the vendors we interviewed agreed that there were no technical or manufacturing challenges that would prevent vendors from supplying monitors to coal-fired power plants by 2008. However, some researchers identified factors that could affect vendors’ ability to supply monitors by that date, including whether vendors had sufficient production capacity to meet the industry’s demand for the equipment. All three vendors we interviewed were aware of power plants in other countries that had installed mercury monitoring equipment (including Germany, Japan, and the United Kingdom). Two respondents were aware of power plants in the United States that had permanently installed mercury monitoring equipment. Most researchers considered CEMS and sorbent trap technologies to be accurate and reliable, and the CEMS vendors also characterized their technologies as accurate and reliable. Researchers cited the need for additional testing of certain subcomponents of the continuous monitoring systems. Stakeholders were generally confident that these technologies would be able to meet proposed quality control and assurance standards by 2008, although two researchers expressed concerns that EPA’s proposed standards might be too strict for CEMS to meet. According to EPA, recent field tests have demonstrated that sorbent trap systems can be as accurate as CEMS. The rule requires the implementation of quality assurance procedures for sorbent trap monitoring systems, which EPA says are based on field research and input from parties that commented on the agency’s mercury rule during the public comment period. EPA acknowledges that there may be problems with the technology, such as the possibility of the traps becoming compromised, lost, or broken during transit or analysis, which could result in missing data; however, EPA also believes steps can be taken to minimize these possibilities.
The table below summarizes data about mercury control tests, including the power plant location, duration of continuous testing, coal type, and average mercury removal. We obtained data from DOE’s National Energy Technology Laboratory and from the 40 survey respondents about field tests. The tests that have been partially funded by DOE’s National Energy Technology Laboratory are identified in the table below by an asterisk symbol. This appendix provides more detailed information on stakeholders’ views regarding the availability of the different mercury controls. Please refer back to appendix I for details about our survey methodology. Of the stakeholders that either responded to our survey (40) or participated in an interview (14), a majority (40) believed that at least one technology was currently available for purchase. As shown in table 2, many of the researchers and government officials said that activated carbon injection (8 of 12) and chemically enhanced carbon (7 of 12) are currently available, while less than half of the power industry officials also believe activated carbon injection technology is available (6 of 14). All of the vendors associated with carbon-based sorbent injection, including activated carbon (4) and chemically enhanced carbon (2), described their technology as available. In addition, 13 of the 14 policy stakeholders we interviewed— those who do not participate in technology research but are involved in the development of mercury control policy, including representatives of EPA, DOE, regional and local air pollution agencies, environmental advocacy groups, and the electric utility industry—believe that sorbent technology is currently available for purchase. The survey responses regarding the availability of other mercury controls were more limited and less optimistic than those for sorbent injection. 
While 40 of the 54 stakeholders answered questions about the availability of activated carbon injection, far fewer respondents answered the questions about the availability of other controls. As shown in table 3, the stakeholders who responded to questions about nonsorbent control technologies, such as multipollutant controls, mercury oxidation technologies, and enhancements to existing controls for other pollutants, were more mixed in their views about the availability of these technologies. For example, researchers and government officials expressed a range of views about mercury oxidation technologies—4 believe they are available, 3 do not think they are available, 2 did not know, and 3 chose not to answer this question. Finally, the 14 policy stakeholders we interviewed also expressed mixed views on the availability of mercury controls. Nine described various multipollutant controls as available, 5 viewed mercury oxidation as available, and 8 regarded various enhancements to existing technologies as available. This appendix summarizes the perceptions of survey respondents in the ability of mercury controls to reduce emissions under three scenarios. (Appendix I provides details about our survey methodology.) We asked survey respondents to assess their confidence in the ability of power plants to achieve mercury reductions of 50, 70, and 90 percent by the year 2008 under two different scenarios. The first scenario resembled the cap-and-trade approach recently finalized by EPA in that it asked stakeholders to consider whether the industry could use available technologies to achieve industrywide reductions of 50, 70 or 90 percent by 2008. The second scenario was similar to an alternative approach considered by EPA that would have required each plant to reduce emissions; for this scenario we asked respondents whether each individual plant could use available technologies to achieve the percentage reductions by 2008. 
As shown in tables 4 through 9, the confidence levels varied with the level of reduction required and by stakeholder group. Overall, the technology vendors answering this question expressed the greatest confidence, while the power industry respondents were the least confident. Within each stakeholder group, respondents expressed the greatest confidence overall in achieving a 50 percent reduction by 2008—a reduction that EPA requires under its 2010 cap—and progressively less confidence in the 70 and 90 percent scenarios. For both possible control scenarios—the national limit and facility-specific reductions—a majority of the 38 respondents expressed confidence in achieving the 50 percent reductions (see tables 4 and 5), but many lacked confidence in the feasibility of 90 percent mercury reductions by 2008 (see tables 8 and 9). Respondents expressed mixed opinions about the feasibility of 70 percent reductions by 2008, as shown in tables 6 and 7.

This appendix summarizes estimates of the cost of activated carbon injection reported by EPA and DOE in October and November 2003. Environmental Protection Agency. Using modeling data provided in EPA’s cost report, we selected control cost scenarios that are comparable to those DOE presented in its cost study. These estimates include the cost of fly ash disposal for plants that use sorbent injection without a fabric filter, based on the assumption that the presence of sorbent in fly ash makes it unsuitable for sale. EPA provided capital costs in dollars per unit of generating capacity, and operating and maintenance costs in dollars per unit of electricity generated (per hour) for 100- and 975-megawatt plants operating at 65 percent capacity over the course of a year (5,694 hours). Tables 10 and 11 present the range of capital and operating and maintenance costs for the selected EPA plant scenarios; capital costs are in total dollars while operating and maintenance costs are expressed in dollars per year.
EPA estimated that the capital cost of sorbent injection for a 100-megawatt plant would range from $0.17 to $59.5 per kilowatt of capacity, while operating and maintenance costs for the same plant would range from $0.001 to $2.36 per megawatt-hour. For the 975-megawatt plant, EPA estimated that the capital cost would range from $0.09 to $37.1 per kilowatt, while operating and maintenance costs would range from $0.001 to $2.32 per megawatt-hour. EPA also estimated the total annualized cost of these controls in 2003 dollars, which ranged from $0.005 to $2.64 per megawatt-hour or between $2,847 and $1.5 million per year for a 100-megawatt plant. For a 975-megawatt plant, annualized costs ranged from $0.003 to $2.45 per megawatt-hour or between $16,655 and $13.6 million per year. Capital costs were much higher for scenarios where a fabric filter was installed, while the highest operating and maintenance cost and annualized cost were for achieving a 90 percent mercury reduction for a bituminous coal-fired plant using sorbent injection without installing a fabric filter, due to the amount of sorbent needed to achieve a high mercury removal. At the low end of these costs, EPA assumed that existing equipment is sufficient to achieve a 50 percent reduction in mercury for plants that burn bituminous coal; therefore, costs reflect only that of monitoring mercury emissions and do not include actual sorbent injection costs. While total capital and annual costs for the larger plant were higher than for the smaller plant, the annualized cost in dollars per megawatt-hour was actually lower, since costs were spread out over more units of capacity and electricity generated. Department of Energy. DOE’s analysis of the cost of mercury control technologies was based on field testing conducted by DOE’s National Energy Technology Laboratory.
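As a check on the EPA annualized figures above, the per-megawatt-hour rates convert to the reported annual dollar totals using the 5,694 operating hours per year (65 percent capacity factor) stated in EPA's scenarios. The function name in this sketch is ours, but every number is from EPA's estimates.

```python
# Reproduce EPA's annualized dollar totals from its $/MWh rates,
# using the 5,694 operating hours per year (65 percent capacity
# factor) stated in EPA's plant scenarios.
HOURS_PER_YEAR = 5_694

def annual_dollars(rate_per_mwh, capacity_mw, hours=HOURS_PER_YEAR):
    """Annual cost ($/yr) = rate ($/MWh) x capacity (MW) x hours/yr."""
    return rate_per_mwh * capacity_mw * hours

low_100  = annual_dollars(0.005, 100)   # low end, 100-MW plant: $2,847
high_100 = annual_dollars(2.64, 100)    # high end, 100-MW plant: ~$1.5 million
low_975  = annual_dollars(0.003, 975)   # low end, 975-MW plant: ~$16,655
high_975 = annual_dollars(2.45, 975)    # high end, 975-MW plant: ~$13.6 million
```

The same conversion also shows why the larger plant's per-megawatt-hour cost is lower even though its total annual cost is higher: the dollars are spread over roughly ten times the generation.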
For its estimates, DOE used a hypothetical power plant of 500 megawatts burning bituminous or subbituminous coal and equipped with an electrostatic precipitator or a layout that consists of sorbent injection and a fabric filter retrofitted downstream of an existing electrostatic precipitator. Cost estimates were developed for mercury removal requirements ranging from 50 to 90 percent as shown below in table 12. DOE estimated capital costs between $1.97 and $57.44 per kilowatt. The high end of the capital cost range represented cases where facilities installed a supplemental fabric filter to achieve higher levels of mercury reduction, while the high end of the operating and maintenance costs represented achieving a 90 percent reduction in mercury emissions for a plant burning bituminous coal using sorbent injection without a fabric filter. DOE also provided two sets of annualized cost estimates, one that included a projected impact for the loss of fly ash sales and one that did not. Without a by-product impact, DOE estimated annualized costs to range from $0.37 to $5.72 per megawatt-hour, which equates to about $1.3 million to $20.0 million per year. Estimates with the by-product impact ranged from $1.82 to $8.14 per megawatt-hour, which equates to about $6.4 million to $28.5 million per year. At the high end, these estimates represented the cost of achieving a 90 percent mercury reduction at a bituminous-coal fired plant with sorbent injection, an existing electrostatic precipitator, and no fabric filter. The low-end cost without a by-product impact represented a 50 percent mercury reduction at a bituminous-fired plant using sorbent injection with an electrostatic precipitator, while the low-end cost with a by-product impact was for the same configuration and mercury reduction, but at a subbituminous-fired plant. In addition, DOE’s cost estimates suggest that plants may achieve a high level of mercury control without a fabric filter. 
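DOE's passage above does not state the annual operating hours behind its figures, but they can be back-calculated from the paired $/MWh rates and $/year totals. The roughly 7,000 hours (about an 80 percent capacity factor) that falls out below is our inference from DOE's reported numbers, not a figure DOE states.

```python
# Back out the annual operating hours implied by DOE's 500-MW plant
# estimates, pairing each annualized $/MWh rate with its $/yr total.
# The ~7,000-hour result is an inference from the reported numbers,
# not a figure stated by DOE.
CAPACITY_MW = 500

def implied_hours(annual_total, rate_per_mwh, capacity_mw=CAPACITY_MW):
    """Hours/yr = annual cost ($/yr) / (rate ($/MWh) x capacity (MW))."""
    return annual_total / (rate_per_mwh * capacity_mw)

low_end  = implied_hours(1.3e6, 0.37)    # low end, no by-product impact
high_end = implied_hours(20.0e6, 5.72)   # high end, no by-product impact

print(round(low_end), round(high_end))   # both near 7,000 hours
```

That both endpoints imply nearly the same operating hours suggests DOE used a single capacity-factor assumption across its scenarios, which is useful to keep in mind when comparing its figures with EPA's 65 percent (5,694-hour) assumption.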
While achieving a higher mercury removal rate without a fabric filter would require more sorbent, plants can decide what air pollution control configuration is most cost effective. Furthermore, according to EPA, test results suggest that chemically enhanced sorbent may prove more efficient than activated carbon in achieving high levels of mercury removal at relatively modest injection rates, and thus less expensive to use. According to EPA, tests of these sorbents have achieved mercury removal rates of 40 to 94 percent without a fabric filter, with the highest removal rate achieved during a continuous 30-day test (the longest reported test of these sorbents). Therefore, some facilities seeking to achieve high levels of mercury reduction may not have to incur the substantial cost of adding a fabric filter.

In addition to the contact named above, Kate Cardamone, Christine B. Fishkin, Tim Guinane, Michael Hix, Andrew Huddleston, Judy Pagano, and Janice Poling made key contributions to this report. Nabajyoti Barkakati, Cindy Gilbert, Jon Ludwigson, Stuart Kaufman, Cynthia Norris, Katherine Raheb, Keith Rhodes, and Amy Webbink also made important contributions.
In March 2005, the Environmental Protection Agency (EPA) issued a rule that will limit emissions of mercury--a toxic element that causes neurological problems--from coal-fired power plants, the nation's largest industrial source of mercury emissions. Under the rule, mercury emissions are to be reduced from a baseline of 48 tons per year to 38 tons in 2010 and to 15 tons in 2018. In the rule, EPA set the emissions target for 2010 based on the level of reductions achievable with technologies for controlling other pollutants--which also capture some mercury--because it believed emerging mercury controls had not been adequately demonstrated. EPA and the Department of Energy (DOE) coordinate research on mercury controls. In this context, GAO was asked to (1) describe the use, availability, and effectiveness of technologies to reduce mercury emissions at power plants; and (2) identify the factors that influence the cost of these technologies and report on available cost estimates. In completing our review, GAO did not independently test mercury controls. GAO provided the draft report to DOE and EPA for comment. DOE said that it generally agreed with our findings. EPA provided technical comments, which we incorporated as appropriate. Mercury controls have not been permanently installed at power plants because, prior to the March 2005 mercury rule, federal law had not required this industry to control mercury emissions; however, some technologies are available for purchase and have shown promising results in field tests. Overall, the most extensive tests have been conducted on technologies using sorbents--substances that bind to mercury when injected into a plant's exhaust. Tests of sorbents lasting from several hours to several months have yielded average mercury emission reductions of 30-95 percent, with results varying depending on the type of coal used and other factors, according to DOE and other stakeholders we surveyed. 
Further, the most recent tests have shown that the effectiveness of sorbents in removing mercury has improved over time. Nonetheless, long-term test data are limited because most tests at power plants during normal operations have lasted less than 3 months. The cost of mercury controls largely depends on several site-specific factors, such as the ability of existing air pollution controls to remove mercury. As a result, the available cost estimates vary widely. Based on modeling and data from a limited number of field tests, EPA and DOE have developed preliminary cost estimates for mercury control technologies, focusing on sorbents. For example, DOE estimated that using sorbent injection to achieve a 70-percent reduction in mercury emissions would cost a medium-sized power plant $984,000 in capital costs and $3.4 million in annual operating and maintenance costs. If this plant did not have an existing fabric filter and chose to install one--an option a plant might pursue to increase the efficiency of mercury removal and reduce related costs--capital costs would increase to about $28.3 million, while annual operating and maintenance costs would decrease to about $2.6 million. Most stakeholders generally expect costs to decrease as a market develops for the control technologies and as plants gain more experience using them. Furthermore, EPA officials said that recent tests of chemically enhanced sorbents lead the agency to believe that its earlier cost estimates likely overstated the actual cost power plants would incur.
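One way to compare DOE's two configurations in the 70-percent-reduction example above on a single annual basis is to annualize the capital cost with a capital recovery factor. A sketch under illustrative assumptions (a 7 percent discount rate and a 20-year life, neither of which comes from the report):

```python
# Annualize a capital cost with a capital recovery factor (CRF), then add O&M,
# so the two configurations can be compared on one annual figure.
# ASSUMPTIONS (illustrative, not from the report): 7% discount rate, 20-year life.

def crf(rate: float, years: int) -> float:
    """Capital recovery factor: converts a lump sum into a level annual payment."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def total_annual_cost(capital: float, annual_om: float,
                      rate: float = 0.07, years: int = 20) -> float:
    return crf(rate, years) * capital + annual_om

# Sorbent injection only: $984,000 capital, $3.4 million/yr O&M -> ~$3.5M/yr
no_filter = total_annual_cost(984_000, 3.4e6)

# With a new fabric filter: $28.3 million capital, $2.6 million/yr O&M -> ~$5.3M/yr
with_filter = total_annual_cost(28.3e6, 2.6e6)
```

Under these illustrative assumptions, the injection-only option has the lower total annualized cost; adding the fabric filter pays off only if its operating savings outweigh its annualized capital charge over the plant's remaining life.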
The South Florida ecosystem encompasses a broad range of natural, urban, and agricultural areas surrounding the remnant Everglades. Before human intervention, freshwater in the ecosystem flowed south from Lake Okeechobee to Florida Bay in a broad, slow-moving sheet, creating the mix of wetlands that form the ecosystem. These wetlands, interspersed with dry areas, created habitat for abundant wildlife, fish, and birds. The South Florida ecosystem is also home to 6.5 million people and supports a large agricultural, tourist, and industrial economy. To facilitate development in the area, in 1948, Congress authorized the U.S. Army Corps of Engineers to build the Central and Southern Florida Project—a system of more than 1,700 miles of canals and levees and 16 major pump stations—to prevent flooding and intrusion of saltwater into freshwater aquifers on the Atlantic coast. The engineering changes that resulted from the project, and subsequent agricultural, industrial, and urban development, reduced the Everglades ecosystem to about half its original size, causing detrimental effects to fish, bird, and other wildlife habitats and to water quality. Figure 1 shows the historic and current flows of the Everglades ecosystem as well as the proposed restored flow. Efforts to reverse the detrimental effects of development on the ecosystem led to the formal establishment of the Task Force, authorized by the Water Resources Development Act (WRDA) of 1996. The Task Force, charged with coordinating and facilitating the restoration of the ecosystem, established three overall goals:

- Get the water right: restore more natural hydrologic functions to the ecosystem while providing adequate water supplies and flood control. The goal is to deliver the right amount of water, of the right quality, to the right places at the right times.

- Restore, protect, and preserve the natural system: restore lost and altered habitats and change current land use patterns. Growth and development have displaced and disconnected natural habitats, and the spread of invasive species has caused sharp declines in native plant and animal populations.

- Foster the compatibility of the built and natural systems: find development patterns that are complementary to ecosystem restoration and to a restored natural system.

Figure 2 shows the relationship of the agencies participating in restoration, the Task Force, and the three restoration goals. Because of the complexity of the ecosystem and efforts underway to restore it, and the urgency to begin the long-term ecosystem restoration effort, not all of the scientific information that is needed is available to make restoration decisions. As a result, scientists will continually need to develop information and restoration decision makers will continually need to review it. According to the Task Force, scientists participating in restoration are expected to identify and determine what information is needed to fill gaps in scientific knowledge critical to meeting restoration objectives and provide managers with updated scientific information for critical restoration decisions. Generally, decisions about restoration projects and plans have been—and will continue to be—made by the agencies participating in the restoration initiative. To provide agency managers and the Task Force with updated scientific information, the Task Force has endorsed adaptive management, a process that requires key tools, such as models, continued research, and monitoring plans. Federal and state agencies spent $576 million from fiscal years 1993 through 2002 to conduct mission-related scientific research, monitoring, and assessment in support of the restoration of the South Florida ecosystem.
Eight federal departments and agencies spent $273 million for scientific activities, with the Department of the Interior spending $139 million (about half) of the funds. The level of federal expenditures, which increased by over 50 percent in 1997, has since remained relatively constant. The South Florida Water Management District—the state agency most heavily involved in scientific activities for restoration—spent $303 million from 1993 through 2002. The District’s expenditures have increased steadily since 1993, with significant increases in 2000 and 2002. Figure 3 shows the total federal and state expenditures for scientific activities related to restoration over the last decade. Eight federal agencies are involved in scientific activities for the restoration: the Department of the Interior’s U.S. Geological Survey, National Park Service, Fish and Wildlife Service, and Bureau of Indian Affairs; the Department of Commerce’s National Oceanic and Atmospheric Administration; the Department of Agriculture’s Agricultural Research Service; the U.S. Army Corps of Engineers; and the Environmental Protection Agency. Within the Department of the Interior, four agencies spent $139 million on scientific activities. The U.S. Geological Survey spent over half of the Interior funding, or $77 million, primarily on its Place-Based Studies Program, which provides information, data, and models to other agencies to support decisions for ecosystem restoration and management. The National Park Service spent about $48 million for the Critical Ecosystem Studies Initiative (CESI), a program begun in 1997 to accelerate research to provide scientific information for the restoration initiative.
The National Park Service used CESI funding to support research (1) to characterize the ecosystem’s predrainage and current conditions and (2) to identify indicators for monitoring the success of restoration in Everglades National Park, other parks, and public lands and to develop models and tools to assess the effects of water projects on these natural lands. Of the remaining Interior funding, the Fish and Wildlife Service and the Bureau of Indian Affairs spent $10 million and $3 million, respectively. Four agencies spent the other federal funds—$134 million. The Corps of Engineers and the National Oceanic and Atmospheric Administration spent approximately $37 million each, primarily on research activities. Two other federal agencies—the Agricultural Research Service and the Environmental Protection Agency—spent the remaining $60 million in federal funds. In addition to the $273 million spent by federal agencies, the State of Florida’s South Florida Water Management District provided $303 million for such activities from 1993 to 2002. The District spent much of its funding on scientific activities related to water projects in line with its major responsibility to manage and operate the Central and Southern Florida Project and water resources in the ecosystem. With these federal and state expenditures, scientists have made some progress in developing scientific information and adaptive management tools. In particular, scientists now better understand the historic and current hydrological conditions in the ecosystem and developed models that allow them to forecast the effects of water management alternatives on the ecosystem. Scientists also made significant progress in developing information on the sources, transformations, and fate of mercury—a contaminant that affects water quality and the health of birds, animals, and humans—in the South Florida ecosystem. 
Specifically, scientists determined that atmospheric sources account for greater than 95 percent of the mercury that is added to the ecosystem. In addition, scientists made progress in developing (1) a method that uses a natural predator to control Melaleuca, an invasive species, and (2) techniques to reduce high levels of nutrients—primarily phosphorus—in the ecosystem. While scientists made progress in developing scientific information, they also identified significant gaps in scientific information and adaptive management tools that, if not addressed in the near future, will hinder the overall success of the restoration effort. We reviewed 10 critical restoration projects and plans and discussed the scientific information needs remaining for these projects with scientists and project managers. On the basis of our review, we identified three types of gaps in scientific information: (1) gaps that threaten systemwide restoration if they are not addressed; (2) gaps that threaten the success of particular restoration projects if they are not addressed; and (3) gaps in information and tools that will prevent restoration officials from using adaptive management to pursue restoration goals. An example of a gap that could hinder systemwide restoration is information on contaminants, such as fertilizers and pesticides. Scientists are concerned that the heavy use of fertilizers and pesticides—which are transported by water and soil and are deposited in sediments—near natural areas in South Florida increases the discharge of chemical compounds into these areas. Contaminants are absorbed by organisms such as aquatic insects, other invertebrates, and fish that live in the water and sediment, affecting the survival and reproduction of these organisms and those that feed on them. 
Scientists need information on the amount of contaminants that could be discharged into the environment, the amounts that persist in water and sediment, and the risks faced by organisms living in areas with contaminants—even low levels of contaminants on a long-term basis. If this information is not available, scientists cannot determine whether contaminants harm fish and other organisms or whether the redistribution of water will introduce potentially harmful contaminants to parts of the ecosystem that are relatively undisturbed. An example of a gap that could hinder the progress of a specific project is information needed to complete the Modified Water Delivery project, which has been ongoing for many years and has been delayed primarily because of land acquisition conflicts. The Modified Water Delivery project and a related project in the Comprehensive Everglades Restoration Plan are expected, among other purposes, to increase the amount of water running through the eastern part of Everglades National Park and restore the “ridge and slough” habitat. However, scientists identified the need for continued work to understand the role of flowing water in the creation of ridge and slough habitat. If the information is not developed, the project designs may be delayed or inadequate, forcing scientists and project managers to spend time redesigning projects or making unnecessary modifications to those already built. An example of a gap in key tools needed for adaptive management is the lack of mathematical models that would allow scientists to simulate aspects of the ecosystem and better understand how the ecosystem responds to restoration actions. Scientists identified the need for several important models, including models for Florida Bay, Biscayne Bay, and systemwide vegetation.
Without such tools, the process of adaptive management will be hindered because scientists and managers will be less able to monitor and assess key indicators of restoration and evaluate the effects created by particular restoration actions. The Water Resources Development Act of 1996 requires the Task Force to coordinate scientific research for South Florida restoration; however, the Task Force has not established an effective means to do so, diminishing assurance that key scientific information will be developed and available to fill gaps and support restoration decisions. In 1997, the Task Force created the Science Coordination Team (SCT) to coordinate the science activities of the many agencies participating in restoration. The SCT’s main responsibilities are planning scientific activities for restoration, ensuring the development of a monitoring plan, synthesizing scientific information, and conducting science conferences and workshops on major issues such as invasive species and sustainable agriculture. As the restoration has proceeded, other groups have been created to manage scientific activities and information for particular programs or issues, but these groups are more narrowly focused than the SCT. These groups and a more detailed discussion of their individual purposes appear in appendix I. Although the Task Force created the SCT as a science coordination group, it established the group with several organizational limitations, contributing to the SCT’s inability to accomplish several important functions. Specifically, the Task Force did not:

- Provide specific planning requirements, including requirements for a science plan or comprehensive monitoring plan. A science plan would (1) facilitate coordination of the multiple agency science plans and programs, (2) identify key gaps in scientific information and tools, (3) prioritize scientific activities needed to fill such gaps, and (4) recommend agencies with expertise to fund and conduct work to fill these gaps. In addition, a comprehensive monitoring plan would support the evaluation of restoration activities. This plan would identify measures and indicators of a restored ecosystem—for all three goals of restoration—and would provide scientists with a key tool to implement adaptive management.

- Establish processes that (1) provide management input for science planning and (2) identify and prioritize scientific issues for the SCT to address in its synthesis reports. Scientists and managers have both noted the need for an effective process that allows the Task Force to identify significant restoration management issues or questions that scientific activities need to address. In addition, a process used to select issues for synthesis reports needs to be transparent to members of the SCT and the Task Force and needs to facilitate the provision of a credible list of issues that the SCT needs to address in its synthesis reports. One way that other scientific groups involved in restoration efforts, such as the Chesapeake Bay effort, address transparency and credibility is to use an advisory board to provide an independent review of the scientific plans, reports, and issues.

- Provide resources for carrying out its responsibilities. Only two agencies—the U.S. Geological Survey and the South Florida Water Management District—have allocated some staff time for SCT duties. In comparison, leaders of other large ecosystem restoration efforts—the San Francisco Bay and Chesapeake Bay area efforts—have recognized that significant resources are required to coordinate science for such efforts. These scientists and managers stated that their coordination groups have full-time leadership (an executive director or chief scientist), several full-time staff to coordinate agencies’ science efforts and develop plans and reports, and administrative staff to support functions.
To improve the coordination of scientific activities for the South Florida ecosystem restoration initiative, we recommended in our report—released today—that the Secretary of the Interior, as chair of the Task Force, take several actions to strengthen the SCT. First, the plans and documents to be produced by the SCT should be specified, along with time frames for completing them. Second, a process should be established to provide Task Force input into planning for scientific activities. Third, a process—such as independent advisory board review—should be established to prioritize the issues requiring synthesis of scientific information. Finally, an assessment of the SCT’s resource needs should be made and sufficient staff resources should be allocated to SCT efforts. In commenting on a draft of our report, the Department of the Interior agreed with the premises of our report that scientific activities for restoration need to be better coordinated and the SCT’s responsibilities need to be clarified. However, Interior noted that the Task Force itself will ultimately need to agree on the actions necessary to strengthen the SCT. Although Interior agreed to coordinate the comments of the Task Force agencies, it could not do so because this would require the public disclosure of the draft report. Mr. Chairman, this concludes my formal statement. If you or other Members of the Subcommittee have any questions, I will be pleased to answer them. For further information on this testimony, please contact Barry T. Hill at (202) 512-3841. Individuals making key contributions to this testimony included Susan Iott, Chet Janik, Beverly Peterson, and Shelby Stephan. The South Florida Ecosystem Restoration Task Force (Task Force) and participating agencies have created several groups with responsibilities for various scientific activities. 
One of these teams—the Science Coordination Team (SCT) created by the Task Force—is the only group responsible for coordinating restoration science activities that relate to all three of the Task Force’s restoration goals (see fig. 4). Other teams that have been created with responsibility for scientific activities include the Restoration Coordination and Verification (RECOVER) program teams, the Multi-Species Ecosystem Recovery Implementation Team, the Noxious Exotic Weed Task Team, and the Committee on Restoration of the Greater Everglades Ecosystem (CROGEE). As shown in figure 4, each of these teams is responsible for scientific activities related to specific aspects of restoration.
Restoration of the South Florida ecosystem is a complex, long-term federal and state undertaking that requires the development of extensive scientific information. GAO was asked to report on the funds spent on scientific activities for restoration, the gaps that exist in scientific information, and the extent to which scientific activities are being coordinated. From fiscal years 1993 through 2002, eight federal agencies and one state agency collectively spent $576 million to conduct mission-related scientific research, monitoring, and assessment in support of the restoration of the South Florida ecosystem. With this funding, which was almost evenly split between the federal agencies and the state agency, scientists have made progress in developing information--including information on the past, present, and future flow of water in the ecosystem--for restoration. While some scientific information has been obtained and understanding of the ecosystem improved, key gaps remain in scientific information needed for restoration. If not addressed quickly, these gaps could hinder the success of restoration. One particularly important gap is the lack of information regarding the amount and risk of contaminants, such as fertilizers and pesticides, in water and sediment throughout the ecosystem. The South Florida Ecosystem Restoration Task Force--comprised of federal, state, local, and tribal entities--is responsible for coordinating the South Florida ecosystem restoration initiative. The Task Force is also responsible for coordinating scientific activities for restoration, but has yet to establish an effective means of doing so. In 1997, it created the Science Coordination Team (SCT) to coordinate the science activities of the many agencies participating in restoration. However, the Task Force did not give the SCT clear direction to carry out its responsibilities in support of the Task Force and restoration. 
Furthermore, unlike the full-time science coordinating bodies created for other restoration efforts, the SCT functions as a voluntary group with no full-time and few part-time staff. Without an effective means to coordinate restoration, the Task Force cannot ensure that restoration decisions are based on sound scientific information.
Under NEPA, federal agencies generally are required to evaluate the potential environmental effects of proposed federal actions such as permitting, funding, approving, or carrying out federal-aid highway projects. These evaluations allow decision makers to fully consider significant environmental impacts, develop and evaluate project alternatives, and consider mitigating adverse impacts before they decide whether to approve a proposed project. The NEPA process also provides a forum for ensuring that all applicable federal and state resource protection laws and requirements are addressed during the project development process, such as applicable provisions of the federal Clean Water Act and Endangered Species Act. NEPA’s implementing regulations require coordination as a means to avoid duplication and increase efficiency. For example, they require all federal agencies to “cooperate with state and local agencies to the fullest extent possible to reduce duplication.” This coordination can include, among other things, seeking input, and in some cases approvals, from the public as well as federal and state agencies responsible for natural resources, environmental protection, and historic preservation, and obtaining approval of the environmental evaluation by the lead federal agency prior to making a decision, which in some cases results in the issuance of a record of decision (ROD). In addition, FHWA has statutory and regulatory requirements for conducting NEPA reviews of federal-aid highway projects, which include additional highway-specific requirements and put additional emphasis on an interagency decision-making process. For instance, FHWA regulations require that each project alternative considered in the review connect logical end points (e.g., involve the development of a highway that links with existing roads) and be of sufficient length to address environmental matters broadly.
FHWA regulations and policy also include provisions to coordinate with other federal agencies with jurisdiction, expertise, or interest in the project, as well as non-federal actors, including state agencies and project sponsors. In addition, FHWA’s statutory authority includes a mechanism to limit the time frames available to challenge decisions in federal court as part of the judicial review process, as well as other provisions, such as permission to fund assistance to specific groups to improve interagency coordination. Figure 1 illustrates six principles of FHWA NEPA review from its Toolkit. According to CEQ and FHWA, establishing the purpose and need for the federal action takes into account the project sponsor’s purpose and need for a project and is essential in establishing a baseline for the development of the range of reasonable project alternatives required in environmental reviews. It also assists with the identification and eventual selection of a preferred alternative. For FHWA NEPA reviews for federal-aid highway projects, the purpose and need for a proposed project might take into account the status of the project (e.g., the project’s history, funding, and schedules, as well as information on agencies involved); a discussion of highway or bridge capacity, the project’s relationship to the larger transportation system, and traffic demand; as well as social demands or economic development. In addition to the procedural requirements of NEPA, the decision to fund a federal-aid highway project also must comply with substantive environmental and natural resource protection laws, including applicable state laws.
Federal environmental laws commonly applied to a proposed highway project include the Clean Water Act (Section 404); the Endangered Species Act of 1973 (principally Section 7); Section 106 of the National Historic Preservation Act; and Section 4(f) of the Department of Transportation Act (protecting publicly owned parks, recreation areas, wildlife and waterfowl refuges, and public or private historic sites). Section 4(f) applies to DOT projects and Section 106 applies to projects with federal funding and some projects that require federal permits. Other federal natural resource protection laws—FHWA has identified over 40—can apply to federal and state or local projects, depending on characteristics of the project. For example, the Wild and Scenic Rivers Act protects designated and potential wild, scenic, and recreational rivers. State environmental laws, such as laws related to the growth-inducing effects of agency actions, may also apply to a proposed highway project. States may also have laws that protect certain resources, such as those designating protected or endangered species or those protecting tribal or other cultural resources. For federal-aid highway projects, reviews are typically conducted by state DOT officials—or other project sponsors—who carry out analyses and coordinate with FHWA. FHWA generally serves as the federal lead agency for the NEPA process and approves the environmental impact documentation. Under NEPA, the level of review required depends on the potential significance of the environmental effects of the project. Projects that have the potential for a significant effect: An Environmental Impact Statement (EIS) must be prepared for a project that has the potential for a significant effect on the environment.
Both CEQ and FHWA regulations identify EIS requirements related to public involvement (such as public participation in scoping, response to substantive public comment on a draft EIS, and public hearings when appropriate); interagency participation; consideration of project impacts (such as impacts on water quality or wildlife habitat); development and evaluation of alternatives; and the mitigation of adverse project impacts. Projects requiring an EIS are likely to be complex and expensive, as we have noted in prior work. The median time to complete a highway project EIS was over 7 years in 2013, according to FHWA data, and the EIS itself may cost several million dollars, as reported by FHWA. Projects that may or may not have a significant effect: When project effects are uncertain, project sponsors (such as state DOTs or local highway departments) must prepare an environmental assessment (EA) to determine whether the project may have a potentially significant impact on the human environment. An EA briefly provides evidence and analysis sufficient to determine whether to prepare an EIS or a finding of no significant impact (FONSI). A FONSI presents the reasons why the agency has concluded that no significant environmental impacts will occur if the project is implemented. FHWA regulations governing EAs require early scoping and coordination activities, as well as making the EA publicly available. EA review activities have been estimated to take from 14 to 41 months, according to reports from FHWA and AASHTO. Projects that normally will not have a significant effect: These projects, by their very nature (e.g., the project fits within a category of activities that the agency has determined normally do not have the potential for significant environmental impacts), require limited review under NEPA to ensure—by considering any extraordinary circumstances—that a proposed project does not raise the potential for significant effects. 
Agencies promulgate categorical exclusions (CE) for such projects in their NEPA implementing procedures. The subsequent use of a categorical exclusion for a proposed project, as described in CEQ guidance, can reduce paperwork and delay by speeding the review process. CEs typically take much less time to prepare than EAs or EISs. Environmental review activities for categorically excluded projects have been estimated to take an average of 6 to 8 months to complete, according to FHWA, and could take as long as an average of 22 months to complete, according to a report prepared for AASHTO. Federal-aid highway projects that are generally processed as CEs include resurfacing, constructing bicycle lanes, installing noise barriers, and landscaping projects. Figure 2 illustrates FHWA’s NEPA decision-making process, including the three main types of highway project environmental review documentation. For federal-aid highway projects, FHWA has found that the vast majority of projects (96 percent in 2009) qualify for environmental review as CEs, and only 1 percent require EIS reviews, as shown in figure 3 below. However, EIS projects, because of their high costs, account for a greater share of federal-aid funds than their numbers might suggest. According to CEQ and our analysis, 18 states have adopted SEPAs that require environmental reviews for highway projects (see fig. 4). The majority of SEPAs were modeled on NEPA and require state and sometimes local public agencies to assess the impacts of projects (or other actions) affecting the quality of the environment within the state. Among other things, SEPAs may expand requirements for environmental reviews to projects (e.g., state, local, or private projects) that are not required to have such reviews under federal law.
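The three documentation tiers described above (EIS, EA/FONSI, and CE) amount to a simple decision rule. The sketch below is a deliberate simplification of that logic; real determinations involve agency judgment under CEQ and FHWA regulations, not a lookup:

```python
# Simplified sketch of FHWA's three-tier NEPA documentation logic (EIS, EA, CE).
# Real determinations rest on agency judgment and regulation, not a lookup table.

def nepa_review_level(fits_ce_category: bool,
                      extraordinary_circumstances: bool,
                      significant_impact: str) -> str:
    """significant_impact is 'yes', 'no', or 'uncertain'."""
    # A project in a CE category with no extraordinary circumstances
    # gets only the limited categorical-exclusion review.
    if fits_ce_category and not extraordinary_circumstances:
        return "CE"
    # A potentially significant effect requires a full EIS.
    if significant_impact == "yes":
        return "EIS"
    # Uncertain effects call for an EA, which ends in a FONSI or an EIS.
    return "EA -> FONSI or EIS"

# Resurfacing projects, for example, are generally processed as CEs:
print(nepa_review_level(True, False, "no"))  # CE
```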
Three factors—project funding sources, project characteristics, and rules allowing state adoption of federal review documents—generally determine whether a highway project needs a federal environmental review or a state environmental review, or both. Federal-aid highway projects are generally subject to environmental review under NEPA and the environmental provisions of title 23, with FHWA serving as the federal lead agency for the review. By contrast, when a project is funded solely through state or local funds, it rarely requires an FHWA NEPA review, although action by another federal agency may require an environmental review. To some extent, states can influence whether or not FHWA NEPA reviews are required by their selection of funding for a particular project. By using—or avoiding—federal-aid highway funds, states can determine whether their projects are subject to a FHWA NEPA review and even whether certain other federal environmental laws apply. For example, officials in some states told us that federal-aid highway funding was sometimes requested for a project requiring permits from other federal agencies specifically so that FHWA would serve as the federal lead agency for the NEPA review, rather than have a federal permitting agency (e.g., the U.S. Army Corps of Engineers (Corps)) serve in that capacity. Officials in two states cited their positive working relationships with FHWA, or the lack of resources at other federal agencies, to explain such decisions. As another example, California officials explained that not having federal funding or not needing a federal permit—either of which would trigger NEPA—requires them to comply with more burdensome requirements of the Endangered Species Act. They therefore would have an incentive to include federal-aid highway funds to ensure NEPA would apply. Some state officials, however, identified reasons why they may seek not to use federal-aid highway funding for certain projects. 
For example, state officials in Massachusetts, Minnesota, and North Carolina told us that they chose not to use federal-aid highway funding on some projects mainly to avoid certain federal review requirements, notably Section 4(f) or Section 106, which can increase project costs or require additional time for reviews to meet federal requirements. Officials in three other states (California, Maryland, and New York) focused their federal funding on certain—often large—projects, resulting in fewer FHWA NEPA reviews overall. Some state officials told us that as a practical matter, there may not always be a choice to not use federal funding, even if they might otherwise choose to do so, because limited state funding may not cover the full cost for some large projects. Although only using state funds may preclude the need to meet certain federal requirements, other federal environmental requirements, such as compliance with the Clean Water Act, largely remain for state-funded projects. SEPAs vary with respect to which project characteristics trigger an environmental review as well as what type of review is required and the scope and extent of that review. These characteristics can include thresholds related to project costs, project length, and expected service impacts such as the volume of traffic affected, among other things. These requirements contrast with NEPA, which generally focuses on the potential for significant environmental impacts. For example, in Virginia, reviews are required for state projects costing $500,000 or more. Other dollar-value thresholds are built into the SEPA requirements of the District of Columbia, Georgia, and New Jersey. In Massachusetts, environmental reviews are required for the construction of new roadways 2 or more miles in length. Likewise, Minnesota requires an EIS-type review for new road projects four or more lanes in width and 2 or more miles in length. Finally, in certain cases, states require particular environmental reviews for expected service impacts. For example, EA-type reviews are required for the construction of new roads over 1 mile in length that will function as collector roadways in Minnesota. In January 2014, however, pursuant to MAP-21, FHWA adopted a regulation allowing a project that receives less than $5,000,000 of federal funds to be treated as a CE. Almost all states with SEPAs responding to our survey (17 of 18) allow for the partial or full adoption of analyses or documentation produced in conducting federal reviews to meet state requirements for highway projects, while Massachusetts requires that a separate state review be completed. Figure 5 illustrates the number of states allowing for full adoption, in which the NEPA review fulfills the SEPA requirements; those allowing for the adoption of the federal review with additional state analyses or documentation (i.e., partial adoption); and those requiring that state requirements be met separately from the FHWA NEPA review. Several factors may affect whether a NEPA document can be accepted as a SEPA document in those states that allow for full or partial adoption. For example, state DOT officials reported that their ability to use a NEPA review to meet SEPA requirements varies based on the project sponsor, the type of review, and whether a similar type of environmental review document is required to satisfy federal and state environmental review requirements. Finally, several states carry out and adopt federal analyses or documentation for state projects even in the absence of federal funding or permitting.
Maryland DOT officials told us that in practice they adopt federal FHWA NEPA reviews for almost all highway projects—even those with only state funding—given staff familiarity with the federal requirements and public expectations, as well as the potential for funding sources to change between the preliminary engineering and construction phases of a project. Washington DOT officials also mentioned that using and adopting NEPA could be advantageous because they felt more certain about how FHWA’s NEPA requirements would be interpreted by courts should a legal challenge to the environmental review process be subsequently filed. While the majority of states with SEPAs allow for the full adoption of federal NEPA reviews, some do not (see fig. 5). When separate federal and state reviews are required, the processes are often carried out concurrently, with joint planning processes, research and studies, and public hearings, as well as the use of blended documents. Both Montana and New York reported having integrated processes for state and federal reviews, for example. Likewise, officials with California’s DOT (Caltrans) stated that Caltrans has a long-standing practice of combining NEPA and the state’s SEPA processes to make the delivery of transportation projects more efficient. For example, Caltrans’s guidance for a blended or joint NEPA/SEPA EIS/EIS-type review describes use of a special chapter in the joint document to address required California-specific mitigation information. Finally, Hawaii’s environmental review statute requires coordination of state and federal reviews when both apply. The extent to which state SEPAs, like NEPA, require coordination is discussed in the next section of this report. 
When a project sponsor requests that a state review be adopted to satisfy federal requirements, FHWA conducts a legal sufficiency review to ensure that the analysis and documentation satisfy FHWA’s NEPA requirements, according to an official with FHWA’s Office of Project Development and Environmental Review. This legal review may happen when, for example, federal funding is added to an ongoing state project or when project requirements change, and a federal permit that was not originally required must now be obtained. State officials in several states we spoke with pointed to the potential for duplicative effort when ongoing state projects are subsequently “federalized” in this manner. According to the FHWA official we spoke with, a case-by-case assessment is necessary because each situation is different when projects have been federalized. To avoid the risk of having to start the federal environmental impact assessment late in the project development process, FHWA encourages project sponsors to follow FHWA’s NEPA process from the beginning of project development. For this reason, officials in five states—Hawaii, Maryland, North Carolina, Washington, and Wisconsin—told us they preferred to review projects under FHWA NEPA, even in the absence of federal funding. A majority of states we surveyed reported having state review document types that are similar to those used for NEPA reviews (see fig. 6). Most states (16 of 18) reported requiring documents that are similar to a NEPA EIS or a NEPA EA, although fewer states had similar documents for FONSI/EAs or CEs, and the level of analysis required varies by state, as discussed in more detail below. While most states reported having an EA-type document, some state officials told us that their documentation requirements for that process can differ from the documentation requirements for an FHWA EA.
FHWA’s EAs are to include brief discussions of the need for the proposed project, project alternatives, the environmental impacts of the proposed project and alternatives, and a listing of the agencies and persons consulted. Officials in six of nine states where we conducted interviews told us they use a checklist for their EA-type reviews. For example, Washington DOT’s checklist asks for narrative responses to 12 pages of questions on topics like water, plants, and historic and cultural preservation. Instructions direct the project sponsor—often the state DOT—to answer the questions briefly with the best description possible and to say “does not apply” if that is the case. As a result, these reviews may look more like a documented federal CE than a federal EA. Further, in some instances, these checklist reviews can involve less analysis than a federal EA. A Washington DOT official told us that the checklist allows impacts to not be assessed if they are deemed “not applicable.” By comparison, a federal EA requires more detailed documentation of findings. In Massachusetts, the state DOT prepares a 22-page checklist called an environmental notification form for projects meeting set criteria. This document has attached plans and is publicly circulated. Depending on whether the project exceeds certain thresholds, the document may require responses to questions, according to Massachusetts DOT officials. By contrast, New York requires more specific analysis as its regulations require the state DOT to prepare a negative declaration that can be supported by a NEPA FONSI. This document, a determination of no significant effect, must identify all the relevant areas of environmental concern and show why the project impact, if any, is not significant. 
To compare state environmental review requirements to FHWA’s NEPA requirements for federal-aid highway projects, we reviewed relevant state statutes and regulations for each of the 18 states that we identified as having a SEPA required for highway projects and compared those statutes and regulations with NEPA and FHWA regulations. In addition, we surveyed state DOTs in each of those 18 states about the degree of similarity between state requirements and federal requirements identified in FHWA’s NEPA Toolkit for five of FHWA’s six NEPA principles: assessment of project impacts, development and evaluation of project alternatives, mitigation of adverse project impacts, interagency coordination, and public involvement. Seventeen of the 18 surveyed states provided responses to this section of our survey. We used our legal analysis and interviews with state officials to supplement and confirm survey responses, as well as to provide illustrative examples of how state requirements or practices compare with NEPA and FHWA requirements. To identify which states have requirements that were “generally similar” to FHWA’s NEPA requirements overall, we determined which states in our survey reported having environmental review requirements that were similar or somewhat similar to 42 individual requirements for FHWA NEPA reviews under the five NEPA principles. Based on our legal analysis and survey responses, for each of the five NEPA principles the majority of states have requirements that are generally similar to FHWA’s NEPA requirements overall. More specifically, as shown in figure 7, for each of the five areas, survey results indicated that requirements in 10 or more of the 17 state SEPAs were generally similar to FHWA’s NEPA requirements overall. Some similarity between state and federal requirements is to be expected since a majority of SEPAs were modeled on NEPA, as we discussed above.
Further, some states may employ more stringent review processes in practice than state statutes and regulations require in order to satisfy public expectations or for other reasons. Based on our legal analysis and survey responses, we found that a number of states have SEPA requirements that are generally less stringent, however. In fact, survey results suggest that the divergence between state and FHWA’s NEPA requirements may be greatest for the NEPA principle of alternatives analysis, where 7 (of 17) states had requirements they characterized as generally less stringent. Overall, no state reported having requirements more stringent than FHWA NEPA for more than 4 (of 42) individual requirements within the five NEPA principles included in our analysis. We discuss each of these NEPA principles—and the related requirements identified in the Toolkit—in more detail below. In some cases, the divergence between state and federal requirements is more pronounced depending on the type of review being conducted. According to survey responses, state requirements for documentation and analyses are more likely to mirror FHWA’s NEPA requirements for higher level, more complex SEPA reviews (e.g., EIS-type reviews) than for less complex reviews. This difference reflects, in part, the differences among various document types mentioned above, including the use of checklists for EA-type assessments in many states. Potential environmental impacts for the vast majority of projects at both the state and federal level are evaluated using CE or EA requirements, as only a small proportion of projects typically requires an EIS review, according to FHWA and state officials we interviewed. Where state and federal laws diverge, there is the potential for meaningful differences in how significant impacts are assessed or mitigated, which project alternative is selected, the level of interagency coordination, and opportunities for the public to affect or challenge decisions, among other things. 
Most state requirements include some consideration of impacts, development and evaluation of alternatives, and mitigation for the analyses that inform environmental review documents, but the degree of similarity to FHWA’s NEPA requirements varies, and states generally lack protections comparable to federal parkland and historic preservation protections. For each of the 5 individual requirements related to the consideration of project impacts, from 10 to 13 (of 17) states reported in our survey that their requirements for analyzing the environmental impacts of a proposed highway project are similar to FHWA’s NEPA requirements, although other states reported having less stringent requirements. Figure 8 illustrates this variation among states for each of the five requirements associated with the consideration of project impacts. Impacts analysis provides decision makers with the information necessary to determine whether a proposed action will produce significant adverse impacts in certain identified areas, including impacts that are short-term, long-term, and cumulative. In our survey, 10 (of 17) states reported similarities with FHWA’s NEPA requirements to consider cultural or social impacts and to assess impacts relating to social and economic justice, but officials we interviewed in 3 states identified consideration of impacts related to social and environmental justice as a key difference between state requirements and FHWA’s NEPA requirements. (See fig. 8.) These requirements assess potential effects on certain minority or low-income populations in FHWA NEPA reviews, among other things. 
For example, Caltrans officials explained that usually they are not required to look at social or economic impacts unless these impacts are triggered by a physical impact, which differs somewhat from federal requirements to address these impacts separately. Similarly, requirements in Wisconsin and Washington for assessing social justice impacts are less stringent than federal requirements, according to state DOT officials, although in practice DOT procedures for analysis in both states align with what is required by FHWA and NEPA. Variations in state substantive environmental statutes and regulations (e.g., those managing growth or protecting certain species) can affect the determination of significant impacts. For example, North Carolina officials described substantive state environmental statutes and regulations that are more stringent than federal protections, such as higher permit standards to protect trout waters and stricter navigation requirements for public use of waterways. These statutes and regulations affect what must be considered during impacts analysis. Such requirements, while not part of SEPAs themselves, can affect the evaluation of project impacts when they are included in the SEPA or NEPA review processes. According to our legal analysis, Georgia regulations are less stringent because although they require consideration of the cumulative impacts of other proposed government actions, they do not address the actions of nongovernmental entities, for example. For each of the five individual requirements related to the development and evaluation of project alternatives, from 9 to 11 (of 17) states reported in our survey that their respective requirements are similar or more stringent, although notable differences between state and FHWA’s NEPA requirements affect both the assessment of alternatives and the selection of the preferred alternative in some states.
Figure 9 illustrates this variation among states for each of the five requirements associated with the development and evaluation of project alternatives. FHWA regulations require consideration and objective evaluation of all reasonable project alternatives to avoid any indication of bias toward a particular alternative, including the “no action” alternative. In our survey, 10 (of 17) states reported similarities with FHWA’s NEPA requirements for the identification of a range of reasonable alternatives. (See fig. 9.) In our legal analysis, we found that the District of Columbia requires that EIS-type reviews include a discussion of reasonable alternatives to the proposed governmental action, including the option to take no action. However, about a third of the SEPA states reported less stringent standards for identifying alternatives in our survey, and we found in our legal analysis that just over half of states with SEPAs (10 of 18) require assessment of the “no action” alternative. In Minnesota, for example, only the preferred alternative is assessed for EA-type reviews, which, as state DOT officials agreed, is less stringent than FHWA’s NEPA requirements as it may preclude the consideration of some alternatives, although the state does require analysis of the no-action alternative for EIS-type reviews. In addition, we found that some states do not have detailed requirements for the consideration of alternatives, and two states—Virginia and Indiana—do not include requirements for the identification of alternatives in their laws or regulations. FHWA policy provides that federal-aid highway decisions should be made “in the best overall public interest based upon a balanced consideration of the need for safe and efficient transportation; of the social, economic, and environmental impacts of the proposed transportation improvement; and of national, state, and local environmental protection goals.” 23 C.F.R. § 771.105(b).
Some state officials reported that their requirements for selecting a preferred alternative are similar to FHWA’s NEPA requirements, and that in practice state officials select the least environmentally damaging practical alternative as the preferred alternative, although the selection process differs from the FHWA NEPA process. The extent to which states have requirements to consider mitigation of environmental impacts as part of their environmental review process that are similar to FHWA’s NEPA requirements often varies by the type of review, and 4 of the 17 states reported in our survey that they do not have mitigation requirements similar to FHWA’s NEPA requirements for any level of review. Figure 10 illustrates this variation among states for each of the seven requirements associated with considering mitigation of environmental impacts. Mitigation is intended to avoid or minimize any possible adverse environmental effects of an action where practicable. FHWA’s NEPA regulations require consideration of mitigation regardless of whether the impacts of a proposed project are found to be significant, and the regulations require implementation of mitigation measures if doing so represents a reasonable public expenditure. Based on our legal analysis of state requirements for mitigation, we found that some state requirements are similar to FHWA’s NEPA requirements, but may include different terms (potentially altering the thresholds for the consideration of and implementation of mitigation) or use different processes for the evaluation or identification of mitigation. For example, Hawaii’s laws require mitigation to be considered as part of its alternatives analysis, but this consideration is only required when mitigation measures are proposed. Moreover, an official in FHWA’s Hawaii Division Office told us that environmental mitigation measures, once considered, are not binding under Hawaii’s SEPA. As mentioned above, different sections of the Endangered Species Act may apply, depending on whether the project involves federal authorizations or funding.
Some state requirements were also less stringent with regard to the consideration of project impacts on resources that were eligible for protection, but were not yet on the federal registry of protected resources. Conversely, specific laws in some states provide protection that may be more stringent than what is required under federal law for certain resources. For example, Hawaii has an historic protection requirement for certain cultural resources that goes beyond the requirements in Section 106. Having less stringent or no state requirements for parkland protection and historic preservation may affect whether a project is determined to have significant impacts and therefore whether, for those states that require mitigation, those impacts are considered and mitigated. In addition, less stringent state requirements can provide an incentive to avoid using federal funds for a project, according to state officials in 3 of the 9 states where we conducted in-depth interviews. For each of the 13 individual requirements related to interagency coordination of environmental reviews, from 8 to 12 (of 17) states reported in our survey that their respective requirements are similar or more stringent. Figure 12 illustrates the variation among states for each of the 13 requirements associated with interagency coordination for environmental reviews. Under NEPA, FHWA, as the lead federal agency, is required to coordinate the timing and scope of environmental reviews to develop consensus among a wide range of stakeholders with diverse interests. These coordination requirements are intended to make the review process more efficient, eliminate duplication, and reduce delays, by including tribes, businesses, transportation or environmental interest groups, resource and regulatory agencies, neighborhoods, and affected populations, among others, in the environmental review process.
As part of state requirements for interagency coordination, some states encourage cooperation and consultation with federal or state agencies, among others, but state requirements vary in how similar they are to FHWA’s NEPA requirements. (See fig. 12.) Specifically, state requirements regarding the coordination of outreach to specific populations—such as underserved or minority groups—are often less stringent than FHWA’s NEPA requirements. In fact, 9 (of 17) state DOTs we surveyed reported that for EAs, their state requirements for coordinating outreach to specific populations are less stringent than FHWA’s NEPA requirements, with 8 (of 17) state DOTs reporting less stringent requirements for EIS-type reviews. In states reporting less stringent requirements, there may be less involvement from minority or underserved populations, affecting how and whether potentially affected populations are involved in the review of proposed projects. In our legal analysis, we found that a few states had requirements corresponding to the “single-process review” in FHWA’s NEPA requirements, which promotes efficiency and avoids delays by including, insofar as practical, the completion of all environmental permits, approvals, reviews, or studies as part of the NEPA process, including those required by other federal agencies, such as permits from the Corps under the Clean Water Act. Some state officials we interviewed told us that their state encouraged, but did not require, cooperation and consultation with federal or state agencies during the environmental review process. For example, while not requiring the completion of all permits or reviews, Washington’s SEPA requires that environmental reviews include a list of all licenses (e.g., permits) that will be needed for the project.
State DOT officials in several states told us that in practice they employ systems that may go beyond minimum state requirements to coordinate with federal and state agencies, including FHWA, for the review of highway projects. These officials described various state efforts to develop consensus among stakeholders, ranging from regularly scheduled meetings to the use of state clearinghouses to ensure timely stakeholder receipt of documentation for comment. For example, Wisconsin DOT officials have developed an agreement with the state’s Department of Natural Resources to meet and coordinate on impacts analyses prior to issuing a draft EIS-type document. Also, in North Carolina, regulations empower the state agency responsible for compliance with the state’s SEPA to seek information from federal as well as local and special units of government. According to North Carolina officials, sometimes a formal interagency process is used, even in the absence of a federal NEPA review, because it allows for better coordination. State environmental review requirements for public involvement are generally similar to FHWA’s NEPA requirements overall, although public involvement requirements for EA-type reviews varied. Figure 13 illustrates the variation among states for each of the 5 EA-type review requirements and 7 EIS-type review requirements associated with public involvement. FHWA’s NEPA requirements allow for robust public involvement in the NEPA process, requiring reasonable notice of and an opportunity to participate in public hearings, where appropriate, and an adequate and meaningful opportunity to submit comments. State responses to our survey indicated that public involvement requirements vary, ranging from states that have no requirement to allow public involvement, to others that may have more stringent requirements than FHWA’s NEPA rules. 
Moreover, we found as part of our legal analysis that requirements allowing public participation for state EIS-type reviews are more likely to parallel FHWA’s NEPA requirements than do such requirements for state EA-type reviews. Conversely, public involvement requirements for EA-type reviews in Wisconsin may exceed FHWA’s NEPA requirements in some circumstances as the state’s SEPA allows for additional hearings by request, while New York officials told us that they are not required to conduct a public hearing for either EA-type or EIS-type reviews. In several other states, hearings for EIS-type reviews are not automatic. Individual state requirements for notification and circulation of draft documents for comment also vary, as do requirements for public hearings. (See fig. 13.) While only 3 (of 17) states surveyed reported less stringent requirements for disseminating draft EIS-type documents than required by FHWA NEPA, 7 (of 17) states surveyed reported less stringent requirements for draft EA-type documents. By contrast, Massachusetts state officials reported having more stringent requirements for providing and circulating draft EA-type documents because the state posts environmental review documents on a public website and requires a written response to all comments by the lead agency. Some states have less stringent requirements governing public involvement, particularly regarding public hearings for EA-type reviews, for which 10 (of 17) states surveyed reported less stringent requirements. In our legal analysis, we found that while Indiana requires EIS-type reviews to be made publicly available, it does not require the transportation agency to seek or respond to public comments on draft versions of these documents, and Virginia’s law contains no specific public notice and comment procedures for environmental review documents. States with less stringent formal public participation requirements may in practice align with FHWA.
Officials in three states told us that they match the higher standard of FHWA’s NEPA public involvement requirements for state-only reviews to meet public expectations, even if less was required by state law. According to state officials in North Carolina, for example, this could mean holding a public hearing and addressing comments for a state-only EA when neither step is required by state law. According to New York state DOT officials, they conduct public hearings for EA- and EIS-type reviews, even though state law is less rigorous than NEPA and does not require hearings. Additionally, some state officials we interviewed reported practices that served to encourage public involvement, such as Washington’s on-line registry of ongoing and completed reviews, which allows citizens or groups to search for projects by location. Officials in 4 (of 18) states expressly identified instances of federal–state duplication in environmental review processes. For purposes of this report, we focused on duplication of effort arising from the interaction of state and federal review processes. In those instances where state officials identified duplication, it resulted either from supplemental state requirements or from the misalignment of federal and state environmental review documents, according to state officials. Officials in Washington reported duplication from both of these causes. Two of the four states reporting potential federal–state duplication in our survey, Maryland and Washington, include a SEPA EA-checklist in addition to their federal review documents. State officials in both states explained that completing the checklist may be duplicative because it includes information that is similar to the information in the FHWA review, but noted that doing so is not burdensome in terms of time or resources.
More specifically, in Washington, state officials said that the checklist serves a “due diligence” role for the state’s SEPA, while Maryland officials said the checklist is used to scope EIS-type reviews during the beginning of the review process. Three states we surveyed, including Washington, reported that the lack of alignment between required state and federal document types for the same project could result in additional effort. For example, both Massachusetts and Minnesota sometimes use a different state review type for some projects than would be required for the FHWA NEPA process, depending on characteristics of the project. Consequently, in Minnesota, some projects are reviewed with a state EA-type process, while being categorically excluded by FHWA. In these situations, parallel efforts may be required to satisfy the different requirements—which may be more stringent for one of the required reviews—precluding the use of a blended process or combined document to coordinate similar processes. Washington has recently completed a rulemaking to align federal and state CEs to avoid this type of duplication. Ten (of 18) states responding to our survey reported that there was no duplication in either the procedural steps or substantive tasks required by state or federal environmental review requirements. In seven of these states, adopting a NEPA review fulfills SEPA requirements, and state officials pointed to this adoption as the reason for reporting no duplication in their survey responses. Connecticut and Hawaii allow for the adoption of the federal NEPA review as long as the review meets certain state SEPA requirements, according to state officials. Montana and New York reported that their processes are not duplicative because they have integrated or blended processes to meet both sets of requirements concurrently.
According to state and federal officials we interviewed, several other reasons contribute to minimal duplication between state and federal processes. The development of required state and federal environmental review documents is typically carried out by the same state officials (or other project sponsors), who can use analyses for different purposes without replicating effort when federal and state requirements are similar. In addition, state and federal review processes frequently require, or encourage, coordination, as mentioned above, and officials pointed to this coordination as a reason for the lack of duplication. For example, officials in North Carolina described meetings held every 2 months involving state and federal officials from the transportation and resource agencies that are typically involved in environmental review. At these meetings, officials are able to coordinate state and federal efforts, among other things. We previously found that FHWA does not collect information on the cost of environmental reviews, and during the course of this review, FHWA officials at the headquarters and division levels confirmed that such data are not collected. Likewise, state officials we surveyed could not provide information on the cost to states of any federal–state duplication, and, as mentioned above, officials in those states identifying instances of federal–state duplication from the use of SEPA checklists described the additional effort as negligible, although they were unable to quantify any additional costs. During the course of our review, we identified other examples of potential duplication or overlap, but these did not result from the interaction of federal and state environmental review requirements. In addition to the four states reporting federal–state duplication, four additional states reported other examples of potential duplication or overlap.
For example, state DOT officials in Maryland noted that there may be potential duplication resulting from the need to keep federal reviews up to date for projects taking a number of years, either because initial reviews have expired or because environmental conditions or requirements have changed. In our survey, three states reported potential duplication, or overlap, when federal permitting agencies (e.g., the Corps) carried out additional analyses because they did not accept the FHWA-approved NEPA review. CEQ officials noted, however, that while the goal may be a single NEPA review and approval process, the level of detail required by one agency (e.g., FHWA) to review a proposed decision under NEPA may differ from what is required by another agency to issue a permit (e.g., the Corps or the Coast Guard). Officials with Caltrans reported potential duplication in our survey but did not provide examples of the cause or type of duplication. When we interviewed Caltrans officials, they explained that while additional analyses may be required for the state's SEPA, there is no duplication of effort caused by the interaction of the state and federal requirements given the agency's blended review process. Separately, officials with the California State Association of Counties contacted us regarding potential duplication in the development of state and federal reviews at the local level. Under the state's SEPA, local governments prepare both state and FHWA NEPA environmental reviews for local projects, but they can approve only the SEPA reviews; FHWA or Caltrans has approval authority for FHWA NEPA reviews. County officials stated that they are not able to prepare a blended document given the different reviewers, a situation that results in potential duplication, adding time and cost to projects.
We spoke with officials with the National Association of Counties, and they stated that this concern has not been raised by county officials outside of California. All 18 states we surveyed reported having agreements with FHWA or other federal agencies to improve coordination or make environmental review processes more efficient. Nearly all of these states (16 of 18) provided examples of programmatic agreements, such as those allowing state officials to review and approve CE determinations for FHWA, or other efforts to improve interagency coordination for conducting reviews and obtaining permits. Many programmatic agreements and other improvement efforts were developed as part of FHWA's Every Day Counts initiative, an effort to identify and deploy innovations aimed at shortening project delivery, enhancing the safety of roadways, and protecting the environment. Examples of FHWA- or state-led improvement efforts identified by state officials we surveyed and interviewed include the following: Two states in our survey have developed an interagency process to plan and review highway projects under the auspices of the Section 404 merger process to reduce inefficiencies in assessment and permitting under the Clean Water Act. For example, officials in North Carolina told us that the state has been working to make environmental reviews more efficient through its Section 404 merger process since the 1990s; through this process, the state DOT coordinates with FHWA, the Corps, the U.S. EPA, and several other state and federal agencies, including the state's Department of Environment and Natural Resources. Three states in our survey reported having liaison positions with other state or federal agencies to alleviate resource challenges and to improve interagency coordination. For example, Texas funds a liaison position at the U.S. Fish and Wildlife Service.
Massachusetts also funds such positions with the Corps, in collaboration with FHWA, and with the Massachusetts Departments of Environmental Protection and Fisheries and Wildlife. According to Massachusetts DOT officials, having a liaison with the Corps has facilitated coordination, reduced the time needed to review permit applications, and allowed for the use of more proactive protection for some endangered species, including turtles. Officials in two other states, California and Wisconsin, also pointed to improved coordination by funding positions in coordinating agencies. Eleven (of 18) states in our survey reported having programmatic agreements under Section 106 (assessing impacts on historic properties) with FHWA and other federal agencies. According to FHWA, Section 106 programmatic agreements are one way to expedite the environmental review process while protecting and enhancing the environment. These agreements authorize state DOTs to conduct all or some Section 106 reviews on behalf of FHWA, when such reviews are required. Individual states have efforts to improve processes as well. For example, officials with the California Office of Planning and Research worked with CEQ to develop a handbook, released in 2014, on the interaction of state environmental review requirements and NEPA to smooth and better coordinate the dual reviews that are often required. According to a state official, many of the challenges in coordinating the two processes stemmed from a lack of understanding of the other requirements, and the state worked with CEQ to develop a guide to explain key differences and to define terminology. Eleven (of 18) states responding to our survey reported benefits from efforts to improve coordination or to make state and federal processes more efficient, most notably decreased time frames. Other benefits included increased public involvement, increased agency engagement, and decreased costs. State officials were unable to quantify these benefits.
State officials also pointed to increased certainty in project timelines, costs, and processes, as well as improved coordination with other agencies and tribes. We provided a draft of this report to the U.S. Department of Transportation (DOT) and the Council on Environmental Quality (CEQ) for review and comment. DOT and CEQ provided technical corrections about federal and state environmental review requirements, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, and the Chair of the Council on Environmental Quality. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-2834 or [email protected] or at (202) 512-6417 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. As discussed in the body of this report, we identified state statutes, regulations, and orders in the 18 states where review of highway projects is required under a state environmental policy act (SEPA), and we compared those requirements to the federal requirements for federal-aid highway projects under the National Environmental Policy Act (NEPA) and implementing regulations issued by the Council on Environmental Quality (CEQ) and the Federal Highway Administration (FHWA). We focused our comparison on 12 key elements characterizing NEPA programs, which we developed in consultation with CEQ and FHWA. This appendix summarizes the results of our review of state legal requirements, not state practices, as discussed in the body of this report, and the examples are illustrative and are not intended to describe all aspects of each state’s SEPA program. 
The key purposes of NEPA include: declaring a national policy which will encourage productive and enjoyable harmony between man and his environment; promoting efforts which will prevent or eliminate damage to the environment and biosphere and stimulate the health and welfare of man; and enriching understanding of the ecological systems and natural resources important to the nation. 42 U.S.C. § 4321. A majority of SEPAs establish objectives that are at least somewhat similar to these NEPA goals, although a few states have more limited policy goals. Many states' SEPAs establish a state policy that encourages harmony between humans and their environment or that enriches understanding of the natural environment. In addition, SEPAs generally establish a state policy requiring that environmental concerns be evaluated in connection with state-funded or otherwise state-supported projects, although several do not. Several SEPAs establish a policy of avoiding or mitigating environmental damage. Others explicitly refer to the desirability of informing and involving the public in environmental decision making, and some SEPA purpose-and-policy statements specifically refer to management of natural resources, waste disposal, or maintenance of the public health. A few SEPAs address local functions such as land use management and zoning. Finally, a few SEPAs have other declared objectives such as strengthening the state economy (Georgia, see GA. CODE ANN. § 12-16-2(1)) or supporting the right to use and enjoy private property (Montana, see MONT. CODE ANN. § 75-1-102(2)). NEPA generally requires review of "every recommendation or report on proposals for legislation and other major Federal actions significantly affecting the quality of the human environment." 42 U.S.C. § 4332(2)(C). Our review looked at how NEPA applies to projects authorized under Title 23 of the U.S.
Code, consisting primarily of federal-aid highway projects where FHWA provides funds to state or local governments and serves as the federal lead agency for the NEPA review. FHWA's review also includes supplemental environmental requirements contained in Title 23. When a project is funded solely with state or local funds, it rarely requires an FHWA NEPA review. Federal permitting requirements also can trigger a NEPA review, however, even when only state or local funds are used, such as when U.S. Army Corps of Engineers (Corps) or U.S. Coast Guard (Coast Guard) approval is required due to the presence of a wetland or navigable waterway. In those cases, the Corps or Coast Guard could serve as the federal lead agency. SEPAs vary with respect to which state or local "actions" trigger review. The California environmental assessment statute, for example, generally applies to any activity of any public agency that may have a substantial environmental impact, including highway projects. "Public agencies" include state agencies, boards and commissions, as well as local agencies including counties, cities, regional agencies, public districts, redevelopment agencies, and other political subdivisions. CAL. PUB. RES. CODE § 21062. "Projects" include activities undertaken directly, financed in whole or in part, or requiring approval by a government agency and are generally any activity subject to the state statute. CAL. PUB. RES. CODE § 21065. Certain projects in California have been excluded from review, however. See, e.g., CAL. PUB. RES. CODE §§ 21080.01-21080.14. By comparison, Massachusetts' SEPA covers all projects undertaken or financially assisted by a government agency, including any authority of any political subdivision. MASS. GEN. LAWS, ch. 30, §§ 61, 62. Minnesota requires an environmental impact study where there is potential for significant effects from major government actions, defined to include local and municipal agencies. MINN. STAT. § 116D.04, Subd. 1a(e).
Some states' SEPAs do not apply to locally managed and funded projects, for example Georgia (GA. CODE ANN. § 12-16-3(5)) and Maryland (MD. CODE ANN., NAT. RES. § 1-301(e)). Maryland's SEPA also focuses primarily on providing environmental assessments to the state legislature. MD. CODE ANN., NAT. RES. § 1-301(d). One state's SEPA—North Carolina's—does not apply to local units of government unless they elect to be covered. See N.C. GEN. STAT. § 113A-8(a). Finally, SEPAs vary with respect to which project characteristics trigger an environmental review, as well as what type of review is required, and can include thresholds related to project costs and physical length, project use, and geographic area, among other things. These thresholds differ from the federal triggers for the type of review, which generally focus on the potential for significant environmental impacts rather than the scale or size of the project. For example, in Virginia, reviews are required for state projects costing $500,000 or more. VA. CODE ANN. § 10.1-1188(A). Other dollar-value thresholds are built into the SEPA requirements of the District of Columbia, Georgia, and New Jersey. Some states, such as Massachusetts, require environmental impact reviews for the construction of new roadways 2 or more miles in length. 301 MASS. CODE REGS. § 11.03(6)(a). Likewise, Minnesota requires an Environmental Impact Statement (EIS) for new road projects four or more lanes in width and 2 or more miles in length. MINN. R. 4410.4400, Subp. 16. Minnesota also requires certain types of reviews for new roads over 1 mile in length that will function as collector roadways. MINN. R. 4410.4300, Subp. 22. New York's SEPA does not apply to projects within the jurisdiction of the Adirondack Park Agency, N.Y. COMP. CODES R. & REGS., tit.
17, § 15.2(l)(3), and no EIS is required under the District of Columbia's SEPA if the project is located in what is known as the Central Employment Area, an area including but not limited to federal government facilities. D.C. CODE § 8-109.06(a)(7). Finally, some states, such as Indiana, do not extend coverage to state licensed or permitted projects, see, e.g., IND. CODE § 13-12-4-8, while Texas's SEPA applies only to certain transportation projects. Under NEPA, an EIS must be prepared for a project that has the potential for a "significant" effect on the environment. "Categorical exclusions" (CEs) apply to projects fitting within a category of activities previously determined not to have the potential for significant environmental impacts. When project effects are uncertain, an "environmental assessment" (EA) is prepared to determine whether the project may have a potentially significant impact on the human environment. An EA briefly provides evidence and analysis sufficient to determine whether to prepare an EIS or a "finding of no significant impact" (FONSI). A FONSI presents the reasons why the agency has concluded that no significant environmental impacts will occur if the project is implemented. A majority of SEPA states have adopted processes that provide for analyses that are generally comparable to the federal approach. Connecticut, for example, requires an EIS-type report if there are effects that "could have a major impact on the state's land, water, air, historic structure and landmarks, existing housing, or other environmental resources, or could serve short term to the disadvantage of long term environmental goals." CONN. GEN. STAT. § 22a-1c. Wisconsin's law, like NEPA, requires an EIS for "major actions significantly affecting the quality of the human environment," see WIS.
STAT. § 1.11(2)(c), and its implementing regulations identify four types of analysis and specify numerous types of transportation projects for which each type of analysis must be performed. Texas's environmental review process for highway projects is by definition similar to NEPA because Texas regulations defer to FHWA's procedures whenever there would otherwise be any inconsistency between Texas's and FHWA's processes. See 43 TEX. ADMIN. CODE § 2.84(f). The Massachusetts SEPA treats any damage to the environment as significant, excluding only that which is found to be "insignificant." MASS. GEN. LAWS, ch. 30, § 61. Finally, several SEPA states leave the decision whether to prepare an EIS and the extent of any EIS largely to the discretion of state project management officials. Virginia determines environmental effects using what it calls a "Preliminary Environmental Inventory," a computer-generated summary of environmental features derived from state databases and submitted to resource agencies. The agency project manager receives this information and may or may not prepare an EIS. NEPA and its implementing regulations require consideration of the significance of a project's direct, indirect, and cumulative effects. Direct effects are those "caused by the action and occur at the same time and place." 40 C.F.R. § 1508.8(a). Indirect effects are the secondary consequences on local or regional social, economic, or natural conditions or resources which could result from additional activities (such as associated investments and changed patterns of social and economic activities) induced or stimulated by the proposed action, both in the short term and in the long term. 40 C.F.R. § 1508.8(b). Cumulative impacts are the impacts on the human and physical environment which result from the incremental impact of the proposed action when added to other past, present, or reasonably foreseeable future actions. 40 C.F.R. § 1508.7.
Positive as well as negative impacts, and long-term as well as short-term impacts, must be considered. For federal-aid highway projects, Title 23 and FHWA also require consideration of potential project impacts on certain types of public parklands and historic sites, see 23 U.S.C. § 138, 49 U.S.C. § 303 (so-called "section 4(f) requirements"). Like NEPA, several SEPA states and jurisdictions, such as Massachusetts (MASS. GEN. LAWS, ch. 30, §§ 61, 62A), Minnesota (MINN. R. 4410.1700, Subp. 7), New York (N.Y. COMP. CODES R. & REGS., tit. 6, § 617.7), and Puerto Rico (P.R. REGS. JCL REG. 7948, Rule 109 DD), require broad consideration of a project's impacts. Requirements or practices in a number of states differ from NEPA requirements, however. For example, about half of the SEPA states limit consideration of indirect or cumulative impacts, the presence of which may increase the need for an EIS (their presence alone does not require an EIS under NEPA). In Connecticut, for example, cumulative effects do not need to be considered if they are not caused by the lead agency or the proposed project. See CONN. AGENCIES REGS. § 22a-1a-3(b) (cumulative impacts result from the incremental impact of the action when added to other past, present, or reasonably foreseeable future actions "to be undertaken by the sponsoring agency"). See also GA. CODE ANN. § 12-16-8(3); 326 IND. ADMIN. CODE § 16-2.1-6(5); MONT. ADMIN. R. 18.2.238(1). In addition, several states only consider environmental justice or economic impacts if they have a direct impact on physical conditions within the area affected by the project. For example, while the California statute requires consideration of "growth-inducing impacts," the law also states that "evidence of social or economic impacts" can only be shown by evidence establishing "a physical impact on the environment." CAL. PUB. RES. CODE § 21082.2(c).
Nearly half of SEPA states (the District of Columbia, Georgia, Hawaii, Indiana, Maryland, New Jersey, Virginia, and Wisconsin) have statutes, executive orders, and regulations that include little or only general discussion of indirect or cumulative impacts. NEPA requires an EIS to contain a detailed statement regarding "alternatives to the proposed action." 42 U.S.C. § 4332(2)(C)(iii). The agency must rigorously explore and objectively evaluate all "reasonable" alternatives to the proposed action, including a "no action" alternative, in response to a specified underlying purpose and need. 40 C.F.R. §§ 1502.13, 1502.14; 23 C.F.R. §§ 771.123(c), 771.125. See generally Biodiversity Conservation Alliance v. Jiron, 762 F.3d 1036 (10th Cir. 2014). NEPA does not specifically require agencies to choose the most environmentally protective alternative, or indeed any specific alternative. FHWA policy provides that federal-aid highway decisions should be made "in the best overall public interest based upon a balanced consideration of the need for safe and efficient transportation; of the social, economic, and environmental impacts of the proposed transportation improvement; and of national, State, and local environmental protection goals." 23 C.F.R. § 771.105(b). Most SEPAs also require the relevant agency (typically the state transportation agency) to analyze the environmental impacts of alternatives to the proposed project, in addition to impacts of the proposed project itself, and most require inclusion of a no-action alternative. Many SEPAs do not specify in detail how alternatives should be evaluated, although some states specify the types and characteristics of the alternatives that must be considered or not considered. For example, Minnesota regulations require the agency to consider alternative sites, technologies, and modified designs or layouts in preparing EISs. MINN. R. 4410.2300G.
Many states, like NEPA, also require consideration only of reasonable or feasible alternatives. A few states favor selection of a particular alternative or prohibit selection of certain options. The California legislature, for example, has declared that "it is the policy of the state that public agencies should not approve projects as proposed if there are feasible alternatives . . . available which would substantially lessen the significant environmental effects of such projects." CAL. PUB. RES. CODE § 21002. The District of Columbia prohibits selection of an alternative that would substantially endanger public health, safety, or welfare, unless those effects can be avoided or mitigated. D.C. CODE § 8-109.04. Minnesota requires selection of any "feasible and prudent alternative consistent with the reasonable requirements of the public health, safety, and welfare." MINN. STAT. ANN. § 116D.04, subd. 6. And in Wisconsin, the agency must select the alternative that is in the best overall public interest, determined by a balanced consideration of several factors including the findings of the EIS and the need for a safe and efficient transportation system. WIS. ADMIN. CODE TRANS. § 400.06(3). NEPA and its regulations require agencies to consider mitigation of adverse environmental impacts in some circumstances, but do not specifically require agencies to carry out mitigation. Mitigation is defined to include: (a) avoiding the impact altogether by not taking a certain action or parts of an action; (b) minimizing impacts by limiting the degree or magnitude of the action and its implementation; (c) rectifying the impact by repairing, rehabilitating, or restoring the affected environment; (d) reducing or eliminating the impact over time by preservation and maintenance operations during the life of the action; and (e) compensating for the impact by replacing or providing substitute resources or environments. 42 U.S.C. § 4332(2)(C)(ii); 40 C.F.R. § 1508.20.
FHWA requirements, by contrast, require reasonable mitigation measures to be taken, which are eligible for federal funding. 23 C.F.R. § 771.105(d). Likewise, many SEPAs require that a project's environmental review documents identify mitigation measures that could lessen the environmental effects of a project. For example, in Wisconsin, a project's Record of Decision (ROD) must indicate that all practicable means to avoid or mitigate environmental harm have been adopted or, if not adopted, include a statement explaining why. WIS. ADMIN. CODE TRANS. § 400.04(23). The ROD also must identify "mitigation measures selected" or the "reason for rejection of suggested reasonable mitigation measures." Other state SEPA mitigation requirements vary. For example: The California legislature has declared that "it is the policy of the state that public agencies should not approve projects as proposed if there are . . . feasible mitigation measures available which would substantially lessen the significant environmental effects of such projects." CAL. PUB. RES. CODE § 21002. See also CAL. PUB. RES. CODE § 21100(b)(5); CAL. CODE REGS., tit. 14, § 15041 (authority to mitigate); cf. CAL. CODE REGS., tit. 14, § 15004(b) (prohibiting actions that would adversely affect or limit the viability of mitigation measures). Massachusetts and New York have similar requirements. The District of Columbia does not authorize approval of a project that would have a significant environmental effect unless mitigation measures are available that would reasonably eliminate the adverse effects. In particular, if the EIS identifies an adverse effect that would substantially endanger the public, the District government must disapprove the action unless the applicant proposes mitigating measures to avoid the danger. D.C. CODE ANN. § 8-109.04. Some states link or combine identification of mitigation measures with identification of alternatives.
Hawaii's law, for example, requires mitigation to be considered as part of its alternatives analysis, but only if mitigation actions are proposed. HAW. REV. STAT. § 343-2 (EIS must include "measures proposed to minimize adverse effects"). Moreover, Hawaii officials told us that environmental mitigation actions, once considered, are not binding. Montana's statute, by comparison, defines its alternatives analysis to include mitigation, see MONT. CODE ANN. §§ 75-1-102(2), 75-1-220(1), and EAs can be used where the action is one that might normally require an EIS if the effects which might otherwise be deemed significant appear to be capable of mitigation by making design changes, imposing enforceable government controls or stipulations, or both. MONT. ADMIN. R. 18.2.237(4). To achieve efficiencies and to minimize duplication, CEQ's and FHWA's regulations require all federal agencies to collaborate with each other, and with state and local governments, to the fullest extent possible. Collaboration begins with consultation with other relevant federal and state agencies, Indian tribes, and the public; includes early identification of stakeholders, project scoping, and project planning; and extends through development of draft and final environmental impact documentation. The regulations reflect the federal government's policy to encourage collaboration of all interested parties from the outset on projects that may require environmental impact analyses, including involvement of state agencies and other federal agencies. 23 U.S.C. § 139 codifies and expands the CEQ regulatory practices as statutory mandates for federal-aid highway projects, designating the Department of Transportation (DOT) as the federal lead agency and requiring the Secretary to administer the NEPA process, including optional establishment of a schedule for completion of the environmental review process.
The Secretary is also encouraged by statute to facilitate use of programmatic approaches through which states may be authorized to resolve issues that would otherwise require federal action. About half of SEPA states have policies that specifically promote or require collaboration. For example: California generally requires collaboration among lead, responsible, and trustee agencies assisted by the Governor's Office of Planning and Research. CAL. PUB. RES. CODE §§ 21080.1, 21080.3, 21080.4. In this regard, the California legislature has recognized the importance of processes such as tiering to avoid duplicative analysis of environmental effects. CAL. PUB. RES. CODE § 21093. Minnesota requires responsible governmental units to collaborate to the extent practicable to avoid duplication of effort between state and federal environmental reviews and between environmental reviews and environmental permitting. MINN. STAT. § 116D.04, Subd. 2a(g). Other states with formal cooperation policies include Connecticut (CONN. GEN. STAT. § 22a-1; see also CONN. GEN. STAT. § 22a-1a); the District of Columbia (D.C. CODE § 8-109.07); New York (N.Y. COMP. CODES R. & REGS., tit. 6, §§ 617.3(d), 617.6; N.Y. ENVTL. CONSERV. LAW § 8-0111.1); and Montana (through its rules, which include a section on cooperation with federal agencies, MONT. ADMIN. R. 18.2.250). The law in some states is unclear regarding how broadly state agencies are required to cooperate with federal agencies, including federal resource agencies. For example, although the Hawaii SEPA lists cooperation and coordination as important government objectives, see HAW. REV. STAT. § 343-1, the regulations refer specifically to the importance of cooperation and coordination between the state accepting authority or approving agency and other state authorities or agencies only in determining the applicability of requirements for supplemental environmental statements (see HAW. ADMIN.
CODE § 11-200-27) and in avoiding duplication with NEPA requirements. The North Carolina regulations authorize but do not require state agencies to seek information from federal as well as local and special units of government. 1 N.C. ADMIN. CODE § 25.0210. Puerto Rico requires consultation with federal and state agencies prior to submitting the environmental document but does make the recommendations of federal agencies within their areas of jurisdiction binding. P.R. REG. 7948, Rule 118 E. Finally, New York requires state cooperation with federal agencies. N.Y. ENVTL. CONSERV. LAW § 8-0111.1. Washington establishes as policy that the Department of Ecology is to "utilize, to the fullest extent possible, the services, facilities, and information (including statistical information) of public and private agencies, organizations, and individuals, in order to avoid duplication of effort and expense, overlap, or conflict with similar activities authorized by law and performed by established agencies." WASH. REV. CODE § 43.21C.110(2)(b). Wisconsin incorporates CEQ's (but not FHWA's) processes (see WIS. ADMIN. CODE TRANS. § 400.06(b), App., citing 40 C.F.R. § 1500.5). Title 23 and CEQ regulations require the lead federal agency to coordinate the timing and scope of its reviews with cooperating agencies. 23 U.S.C. § 139(g); 40 C.F.R. § 1501.7(a)(6). Generally, for Title 23 funded projects, the lead federal agency is a modal administration within the Department of Transportation. See 23 U.S.C. § 139(c). See also 40 C.F.R. §§ 1501.1(b) (early and cooperative interagency consultation); 1501.2(d)(2) (requiring federal agency consultation with state, local, and tribal authorities and private persons and organizations); 1501.5 (lead agencies); 1501.6 (cooperating agencies).
Some states’ SEPAs also provide for robust coordination. For example: California generally requires not only collaboration (as discussed in Element 7 above) but also requires coordination among lead, responsible and trustee agencies assisted by the Office of Planning and Research. See CAL. CODE REGS., tit. 14, § 15082(c). And when a proposed project is of sufficient statewide, regional, or area-wide environmental significance, California uses a clearinghouse process to facilitate and coordinate review of draft Environmental Impact Reports and other environmental documentation. CAL. PUB. RES. CODE §§ 21083(d) (review of draft EIRs, negative declarations, or mitigated negative declarations); CAL. PUB. RES. CODE §§ 21082.1(c)(4), 21091. See also CAL. CODE REGS., tit. 14, §§ 15004(b) (timing), 15006 (reducing delay and paperwork), 15083 (consulting), 15063(c)(2) (mitigating effects to facilitate documented CE or negative declaration). Hawaii requires that “[t]he [Office of Environmental Quality Control within the state Department of Health] and agencies shall cooperate with federal agencies to the fullest extent possible to reduce duplication between federal and state requirements.” Haw. Rev. Stat. § 343-5(h). In Massachusetts, the SEPA review process “is intended to involve any interested Agency or Person as well as the Proponent and each Participating Agency.” Code of Mass. Regs., tit. 301, § 11.01(b). In Minnesota, to the extent practicable, responsible governmental units must avoid duplication and ensure coordination between state and federal environmental review and between environmental review and environmental permitting. Minn. Stat. § 116D.04, Subd. 2a(g). In other states, the laws require little or no formal attention to coordination in applying their SEPA requirements. 
For example: The Maryland MDOT regulations only require that the lead agency describe “the coordination and liaison relationship established in developing the proposal,” with the content of the description largely up to the agency. There is no clear requirement defining the responsibilities that the lead agency assumes. 11 CODE OF MD. REGS. § 01.08.03(B)(9). See also 11 CODE OF MD. REGS. § 01.08.04(A)(3) (“The timing and type of community and public agency involvement in this analysis will be determined on a case-by-case basis . . . .”). Virginia provides specifically for coordination between VDOT and state resource agencies only. VA. CODE ANN. § 10.1-1191. Coordination is not mentioned in the New Jersey executive order, N.J. GOV. KEAN, EXEC. ORDER No. 215, or in guidance. Reflecting the requirements for federal coordination and collaboration discussed in Elements 7 and 8 above, federal environmental reviews of Title 23-funded highway projects also must include “completion of any environmental permit, approval, review, or study required for a project under any Federal law other than NEPA.” 23 U.S.C. § 139(a)(3)(B). This requirement reflects a policy that the NEPA process and the permitting processes should not be treated as separate and distinct processes but as one. Most SEPA states do not require or conduct single-process reviews as such. As discussed in Element 8 above, the Minnesota statute requires that the responsible governmental unit shall, to the extent practicable, avoid duplication and ensure coordination between state and federal environmental review and between environmental review and environmental permitting. MINN. STAT. § 116D.04, Subp. 2a(g). North Carolina and Washington require a list of all licenses that the project is known to require, see 1 N.C. ADMIN. CODE § 25.0603(2), WASH. ADMIN. 
CODE § 197-11-440(2)(d), and Washington has made significant efforts to integrate its Growth Management Act and Model Toxics Control Act processes with its SEPA processes. In Puerto Rico, the responsibility for issuing construction permits is centralized in a Permit Management Office. See P.R. LAWS ANN., tit. 12, § 8001a(c). This office assesses compliance with Puerto Rico’s Environmental Public Policy Act. See id. Finally, as noted in Element 3 above, Texas regulations defer to FHWA’s procedures whenever there would otherwise be any inconsistency between Texas’ and FHWA’s processes. To avoid duplication that might otherwise result, a number of SEPAs authorize or encourage preparation of documentation that meets both federal NEPA and state SEPA requirements, or use of information, documentation, or analyses developed for the NEPA review. The SEPA procedures vary from state to state, ranging from provisions allowing use of some or all of the paperwork prepared to meet federal requirements to full adoption of the federal process and results so that no separate state funds are required. These issues arise in the context of three basic scenarios: About half of SEPA states are authorized to forgo the SEPA process and determination entirely if the proposed project is covered by a completed NEPA review. For example, in Georgia, an agency is deemed to have complied with the requirements of the SEPA if the proposed government action requires and has received federal approval of an environmental document prepared in accordance with NEPA. See GA. CODE ANN. § 12-16-7. In Indiana, if any state agency is required by NEPA to file a federal EIS, it is not also required to file an EIS with the state government. See IND. CODE § 13-12-4-10. Some SEPA states are authorized to use NEPA documentation to meet their SEPA requirements but state officials must make some kind of independent decision under state law. 
For example, in Minnesota, if a federal EIS will be or has been prepared for a project, the state may use the draft or final federal EIS as the draft state EIS if the federal EIS addresses the scoped issues and satisfies the state content standards for an EIS. See Minn. R. 4410.3900, Subp. 3; Minn. R. 4410.2300. In Montana, implementation of NEPA and the Montana SEPA are separate and distinct federal and state functions, but state agencies are required to coordinate with other state and federal agencies in the preparation of a single environmental review that is legally sufficient for both NEPA and MEPA. MONT. ADMIN. R. 18.2.250(c). Some SEPA states allow preparation of a single set of documentation meeting both the NEPA and additional SEPA requirements; the state must only prepare separate findings—in effect, a separate Record of Decision or similar documentation. For example, in California, the SEPA and regulations mandate use of NEPA EISs and other documentation in lieu of state Environmental Impact Reports and meeting other state requirements whenever possible. See CAL. PUB. RES. CODE §§ 21083.5-21083.7; CAL. CODE REGS., tit. 14, §§ 15063, 15082, 15110, 15127, 15220-15528. The state SEPA does not, however, dispense with the need to meet its state-specific requirements. In New York, the SEPA requires cooperation between state and federal agencies in creating an environmental review and exempts a project from additional state review if a federal NEPA review is conducted. See N.Y. ENVTL. CONSERV. LAW § 8-0111(1)-(2); N.Y. COMP. CODES R. & REGS., tit. 17, § 15.10. If the proposed action is subject to NEPA, the statute is interpreted to require that NYDOT must comply with the federal requirements, which then excuses further statutory obligations. See N.Y. COMP. CODES R. & REGS. tit. 17, § 15.6; see also N.Y. ENVTL. CONSERV. LAW § 8-0111(1)-(2) (single combined document prepared along with state and federal report). 
In Washington, state documentation is not required if federal documentation already has been prepared for the same project, see WASH. REV. CODE § 43.21C.150, but the statute does not waive the requirement for a state decision concerning the adequacy of any prior NEPA review, even if that decision is based on NEPA documentation. Id.; see also WASH. ADMIN. CODE §§ 197-11-600, 197-11-630. The federal NEPA process requires the opportunity for robust public participation. At the least, the public must be notified of the proposed project and given an opportunity to comment on it. See, e.g., 40 C.F.R. §§ 1501.4(b), (e) (EAs); 1502.19 (EISs); 1506.6 (NEPA implementation generally). There are also specific public participation requirements in Title 23 for federal-aid highway projects. See, e.g., 23 U.S.C. §§ 128, 139; 23 C.F.R. §§ 771.111(h)(1)-(2), 771.113(a)(2). Moreover, 23 U.S.C. § 128 requires that an opportunity for a public hearing be provided to consider the impact of each federal-aid highway project on the environment. Similarly, almost all SEPAs or their implementing regulations provide for some degree of public participation. Three SEPA states do not require agencies to consider public input at all: Indiana (only authorized comments must be considered, 327 IND. ADMIN. CODE §§ 11-1-4, 11-3-3), New Jersey (N.J. GOV. KEAN, EXEC. ORDER No. 215, and guidance), and Virginia (generally, comments are invited from interested agencies, planning district commissions, and localities (VA Dept. of Env. Qual., Procedure Manual, July 2013)). The opportunities for public participation in other states vary. For example: In New York, public notice must be provided for all determinations that a project will have no significant effect; that a project may have a significant effect; that a draft or final EIS has been completed; and any subsequent notice of a negative declaration. N.Y. COMP. CODES R. & REGS. tit. 17, § 15.10. 
Public notice of hearings also must be given, although this may be combined with a notice of completion of a draft EIS, and public comments must be permitted for draft and final EISs. N.Y. ENVTL. CONSERV. LAW § 8-0109(4); N.Y. COMP. CODES R. & REGS., tit. 17, §§ 15.6(d), 15.10(d). The regulations also provide for consideration of public comments on scoping and other matters. N.Y. COMP. CODES R. & REGS., tit. 6, §§ 617.8(e), 617.7. Washington State provides many opportunities for public participation and comment. See, e.g., WASH. REV. CODE § 43.21C.110(1); WASH. ADMIN. CODE §§ 197-11-340(2)(c), 197-11-350(5), 197-11-355, 197-11-400(4), 197-11-405, 197-11-408, 197-11-455(6)-(8), 197-11-502, 197-11-560. See also WASH. ADMIN. CODE § 197-11-410(1)(d). In Wisconsin, it is WisDOT’s policy that “public involvement, interagency coordination and consultation, and a systematic interdisciplinary approach to analysis of the issues shall be essential parts of the environmental process for proposed actions.” WIS. ADMIN. CODE TRANS. § 400.06(4). As part of the scoping process, the Wisconsin regulations “establish a schedule for document preparation and for opportunities for public involvement.” WIS. ADMIN. CODE TRANS. § 400.09(4)(c). Public comment must be allowed on EISs and EAs, see WIS. ADMIN. CODE TRANS. § 400.11(3)-(5), but not on FONSIs and Environmental Reports. See WIS. ADMIN. CODE TRANS. § 400.11(6)-(7). In California, the importance of public participation in the SEPA process is specifically recognized in the regulations. CAL. CODE REGS., tit. 14, § 15002(a)(1), (4). Key environmental review documents are classified as public documents. Id., § 15002(f), (j) (“Under [the California law], an agency must solicit and respond to comments from the public and from other agencies concerned with the project.”); see also id. § 15022(a)(5) (duty of California public agencies to consult with the public regarding environmental effects). 
A number of SEPAs limit public participation to commenting on specific aspects of the process. For example, the Massachusetts statute and regulations provide notice-and-comment procedures at critical points in the process. See MASS. GEN. LAWS, ch. 30, § 62C; CODE OF MASS. REGS., tit. 301, § 11.15. Several jurisdictions, such as Georgia, Connecticut, and the District of Columbia, authorize public hearings but base the decision to hold them on the number of requests received. See GA. CODE ANN. § 12-16-5; CONN. GEN. STAT. § 22a-1d; D.C. CODE § 8-109.03 (“If 25 registered voters in an affected single member district request a public hearing on an EIS or supplemental EIS or there is significant public interest, the Mayor, board, commission, or authority shall conduct a public hearing.”). Montana provides for notice of EIS scoping to “affected federal, state, and local government agencies, Indian tribes, the applicant, if any, and interested persons or groups,” but not the general public. MONT. ADMIN. R. 18.2.241(2)(a). The opportunity to obtain meaningful review of agency action by a court is an important protection against arbitrary, capricious, or otherwise unlawful agency decision making. At the federal level, judicial review of NEPA decisions issued by lead agencies and other involved federal agencies occurs under the federal Administrative Procedure Act, 5 U.S.C. §§ 701-706 (APA). Not everyone who is dissatisfied with a NEPA decision may challenge it in court; only those who suffer specific and sufficient injury as a result of the decision, and thus have “standing,” may file suit. For federal-aid highway projects, a lawsuit generally must be filed within 150 days after publication of a notice in the Federal Register announcing a final approval, permit, or license. See 23 U.S.C. § 139(l)(1). 
Similarly, while some of the 18 states with SEPAs required for highway projects provide for court review of decisions in their SEPA legislation, most do so either in their general administrative agency procedure legislation (typically based on the Model State Administrative Procedure Act) or in specialized legislation. Most states also have followed federal law by limiting who may challenge the state agency decision, how the agency’s action will be reviewed, and what the scope of that review will be. In particular, as applied to state and federal-aid highway projects, SEPA laws based on the Model Act establish five key prerequisites to challenge a state agency decision: The challenger must suffer particularized harm. For example, individuals owning property that might be acquired or adjacent to the project area, or organizations representing such persons (including environmental groups), likely could bring suit. The agency decision must be final, that is, the challenger generally must have exhausted any dispute resolution process available at the agency. The court challenge generally must be based only on the evidence presented to the lead agency and the issues already raised to the agency (the administrative record). The challenge generally must allege that the agency’s decision was arbitrary and capricious, contrary to state or federal law, or not supported by the evidence before the agency. The remedy being requested generally must be limited to an order directing the agency to take a certain action, rather than seeking monetary damages. Model State Admin. P. Act (1961), § 15; Model State Admin. P. Act (1981), §§ 5-101 to 5-205. See, e.g., IND. CODE § 4-21.5-5-5; MINN. STAT. § 116D.04, Subd. 10; N.C. GEN. STAT. § 150B-45. Washington provides only 21 days for such challenges. WASH. REV. CODE § 43.21C.080(2)(a). 
The Moving Ahead for Progress in the 21st Century Act (MAP-21) contains a mandate requiring that GAO review state laws and procedures for conducting environmental reviews with regard to projects funded under title 23 of the United States Code (primarily federal-aid highway projects). This report addresses: (1) the factors that determine whether federal or state environmental reviews are required for highway projects, and how the types of federal and state environmental review documents compare; (2) how state environmental review requirements and practices compare with federal requirements for assessing federal-aid highway projects; and (3) the extent of any duplication in federal and state reviews, including frequency and cost, in states with environmental review requirements for highway projects. We identified 18 states with state environmental policy acts (SEPA) required for highway projects for inclusion in our review (see table 1). In these states, statutes or regulations require some assessment of potential environmental effects from highway projects that may mirror requirements under the National Environmental Policy Act (NEPA) for highway projects. The list of states with SEPAs derives largely from the 18 states identified by the Council on Environmental Quality (CEQ) as having SEPAs, including New Jersey, which has an executive order that requires environmental reviews. In addition, during the course of our work, we learned that Texas does not have a general state-level SEPA but does have a state statute and regulations that apply to transportation projects, and we confirmed with CEQ officials that we should include Texas in our scope. By contrast, we excluded South Dakota because its SEPA provides the option of preparing an environmental impact statement, but does not require one, and South Dakota Department of Transportation officials told us that they do not conduct environmental reviews under the state law. Pub. L. No. 91-190 (1970), codified at 42 U.S.C. 
§§ 4321-4347. For each of our objectives, we reviewed relevant publications, including our prior reports on NEPA and highway projects. We obtained documents and analysis from federal agencies related to NEPA reviews for federal-aid highway projects, including CEQ, the Congressional Research Service (CRS), and the Federal Highway Administration (FHWA), including FHWA’s Environmental Review Toolkit for NEPA and Transportation Decisionmaking (FHWA’s NEPA Toolkit), which provides guidance on FHWA’s NEPA environmental review process for state department of transportation (state DOT) officials. In addition, we interviewed officials with FHWA, CEQ, and CRS. We also interviewed two academics who authored a treatise on environmental review requirements and were cited by CEQ as having expertise on NEPA and SEPAs—Professor Daniel Mandelker at Washington University and Arianne Aughey—and representatives from two professional associations with expertise in federal or state environmental review requirements or state practices—the American Association of State Highway and Transportation Officials (AASHTO) and the National Association of Counties. To respond to the first two objectives, we conducted a legal analysis and a survey, which included all 18 states that we identified as having SEPAs required for highway projects. Our legal analysis compared key elements of SEPAs and related state regulations with key elements of NEPA and FHWA regulations for federal-aid highway projects. Our comparison of NEPA and FHWA regulations with requirements in state statutes, regulations, and executive orders focused on the key NEPA statutory and regulatory requirements and did not systematically examine court decisions or legislative history. We identified the key NEPA elements by reviewing relevant federal statutes and regulations in consultation with CEQ and FHWA. 
Specifically, we started with the statutory language of NEPA, which requires agencies to prepare, for major federal actions significantly affecting the quality of the human environment, a detailed statement on, among other things: (1) the environmental impact of the proposed action, (2) any adverse environmental effects which cannot be avoided should the proposal be implemented, and (3) alternatives to the proposed action. NEPA, CEQ regulations, and—for federal-aid highway projects—FHWA regulations then specify detailed environmental review processes, which include requirements for interagency coordination, avoiding duplication, and participation by state and local governments as well as the general public, among other things. In addition, the regulations provide for more efficient methods of environmental review and a process for determining when use of these methods is appropriate. Based on these statutory and regulatory requirements, and in consultation with CEQ and FHWA, we distilled 12 key elements of environmental review: 1. Policy and purpose of the environmental review requirements; 2. Types of projects covered; 3. Level of detail (“depth”) of environmental impact evaluation; 4. Level of significance (“breadth”) addressed in environmental impact evaluation; 5. Consideration of alternatives; 6. Consideration of mitigation; 7. Requirement for collaboration to enhance efficiency and avoid duplication; 8. Requirement for agency coordination; 9. Requirement for single-process review; 10. State adoption of federal NEPA reviews; 11. Opportunity for public participation; and 12. Opportunity for judicial review of agency decisions. In addition to this legal analysis, we conducted a survey of state DOTs in all the 18 states with SEPAs required for highway projects to compare in more detail state environmental review requirements with FHWA’s NEPA requirements for reviews of federal-aid highway projects. 
Using FHWA’s NEPA Toolkit as a guide, we developed survey questions to gather information comparing states’ environmental review requirements and practices with FHWA’s NEPA requirements. These requirements align with five of FHWA’s six principles of the NEPA process and reflect key elements in our legal analyses: consideration of the social, economic, and environmental impacts of a proposed action or project; development and evaluation of a range of reasonable alternatives to the proposed project, based on the applicant’s defined purpose and need for the project; mitigation of project effects by means of avoidance, minimization, and compensation; interagency coordination and consultation; and public involvement, including opportunities to participate and comment. For each of these principles, we developed questions to assess the extent to which state requirements were less stringent than, similar to, or more stringent than FHWA’s NEPA requirements. Before administering the survey, we conducted pretests with state DOT officials in Maryland, Washington, and North Carolina to ensure that respondents interpreted our questions in the way we intended. That is, we verified that the questions were clear and unambiguous and that we used appropriate terminology in the survey, to ensure that respondents had the necessary information and ability to respond to the questions. Where necessary, we revised the screening tool to improve the survey instrument in response to feedback from the pretests and internal GAO review. We divided the final screening-tool questions into four parts: Part I: Contact Information; Part II: Documentation; Part III: Duplication and Coordination; and Part IV: Environmental Review Requirements. We administered the survey by emailing an electronic form to state DOT officials in all 18 states with SEPAs required for highway projects. Two states provided clarifications or supplementary information along with their survey responses. 
To improve the accuracy and completeness of the data, we used the clarifying information provided by agency officials to update responses where necessary. Because this effort was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question or sources of information that are unavailable to respondents can introduce unwanted variability into the survey results. We took steps to minimize such nonsampling errors in developing the survey tool— including using a social science survey specialist to help design and pretest the survey. We also minimized the nonsampling errors when analyzing the data, including using an independent analyst to review all computer programming related to the survey. Finally, there were a few instances where state DOTs should have indicated one response to a question and instead provided two. In these cases we followed up with the state DOT officials to clarify their response. To identify which states had requirements that were “generally similar” to FHWA’s NEPA requirements overall, we determined which states in our survey reported having environmental review requirements that we found to be similar or somewhat similar to 42 individual requirements for FHWA NEPA reviews. We characterized state survey responses as being similar overall if 75 percent or more of the questions about individual state requirements under a NEPA principle were marked as “similar” or “more stringent” than FHWA’s NEPA requirements. If 50 to 74 percent of the requirements were marked as “similar” or “more stringent,” we characterized state survey responses as somewhat similar. If 51 to 74 percent of the requirements were marked as “less stringent” or “not applicable,” we characterized state survey responses as somewhat less stringent. 
If 75 percent or more of the requirements were marked as “less stringent” or “not applicable,” we characterized state survey responses as less stringent. Then we determined that states where officials indicated that at least half of their responses for each requirement within the principle were similar to or more stringent than FHWA’s NEPA requirements were “generally similar” to federal requirements. For example, regarding the consideration of impacts, we determined that state requirements were “generally similar” overall if the state DOT officials reported that 3 of the 5 individual requirements were at least similar (if not identical) to FHWA’s NEPA requirements: (1) identification of impacts, as well as assessment of (2) cumulative effects, (3) context, (4) cultural or historical impacts, and (5) social or environmental justice. In addition to these analyses, we interviewed state DOT officials and officials with state natural resource agencies in 9 of the 18 states with SEPAs. (See table 2.) We selected SEPA states for additional interviews and site visits based on four criteria: robustness (or lack thereof), number of active EIS reviews, “uniqueness,” and, in some cases, proximity to GAO offices. Our findings for these 9 states are not generalizable to the other 9 states with SEPAs but provide examples of varying state requirements and practices. To respond to the third objective, we included questions about duplication in our survey of state DOTs, as well as interviewing state officials and FHWA officials at the division offices in those states we selected for additional interviews. We have defined duplication as occurring when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries, in accordance with GAO’s body of work on duplication in the federal government. 
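The categorization thresholds described above amount to a simple classification rule over each state's per-requirement survey responses. As an illustrative sketch only (not GAO's actual analysis code; the function name and response labels are assumptions for illustration):

```python
def characterize(responses):
    """Illustrative sketch of the survey-categorization thresholds described
    in the text. `responses` is a list of per-requirement answers, each one of:
    "similar", "more stringent", "less stringent", or "not applicable".
    """
    total = len(responses)
    # Count responses marked "similar" or "more stringent" than FHWA's NEPA requirements.
    at_least_similar = sum(r in ("similar", "more stringent") for r in responses)
    pct = 100 * at_least_similar / total
    if pct >= 75:
        return "similar"
    if pct >= 50:
        return "somewhat similar"
    # Remaining responses are "less stringent" or "not applicable":
    # 51-74 percent of them yields "somewhat less stringent" (i.e., 26-49
    # percent similar); 75 percent or more yields "less stringent".
    if pct > 25:
        return "somewhat less stringent"
    return "less stringent"
```

For example, a state whose officials marked 3 of 4 requirements as "similar" would be characterized as "similar" (75 percent), while one marking 3 of 10 as "similar" would be "somewhat less stringent" (70 percent less stringent or not applicable).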
This report focuses on duplication that might occur between state and federal processes for environmental review of highway projects where there is duplication of effort. In the context of environmental review requirements, such duplication could occur if states were required to carry out two separate—but similar—analyses to satisfy federal and state requirements, for example, but not if the same analysis could be used to satisfy both state and federal documentary (i.e., procedural) requirements. In our interviews with state and FHWA officials, we inquired about duplication within and among highway projects, as well as duplication that may occur across time within a project. We also asked FHWA and state officials about the cost of any potential duplication (and how such cost might be measured) and the frequency of any potential duplication. Finally, we asked state officials about efforts to make the environmental review process more efficient in our survey, as well as the potential benefits of any such efforts. We conducted this performance audit from May 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individuals named above, Susan Zimmerman (Assistant Director), Richard Calhoon, Heather Halliwell, Bert Japikse, Delwen Jones, Molly Laster, Hannah Laufe, Gerald B. Leverich, III, Jaclyn Nelson, Joshua Ormond, Richard P. Johnson, and Elizabeth Wood made key contributions to this report.
Under NEPA, federal agencies evaluate the potential environmental impacts of proposed projects. FHWA has developed a process for NEPA reviews for federal-aid highway projects, such as roads or bridges. According to the Council on Environmental Quality (CEQ) and GAO analysis, 18 states have SEPAs that also require the review of environmental impacts of a variety of actions for highway projects. The Moving Ahead for Progress in the 21st Century Act (MAP-21) required GAO to examine state environmental reviews for highway projects, including whether they duplicate federal environmental reviews for federal-aid highway projects. This report focuses solely on environmental reviews of highway projects in states with SEPAs and addresses 1) factors determining whether federal or state environmental reviews are required; 2) how state and federal review requirements compare; and 3) the extent of any duplication in federal and state reviews, including frequency and cost. GAO reviewed FHWA and CEQ documents and interviewed officials of these federal agencies; analyzed state laws and regulations; surveyed the 18 states with SEPAs required for highway projects; and interviewed selected state agencies within 9 of those states based on the number of FHWA NEPA reviews underway and other factors. This report has no recommendations. The U.S. Department of Transportation and CEQ provided technical corrections about federal and state environmental review requirements, which GAO incorporated as appropriate. Three factors—project funding sources and project characteristics, and whether a state allows the adoption of federal review documents—generally determine whether a highway project needs a federal environmental review under the National Environmental Policy Act (NEPA) or a state environmental review under state law, or both. 
Projects without federal highway funding usually do not require a Federal Highway Administration (FHWA) NEPA review, but NEPA reviews of highway projects may still be required to obtain federal permits. Thresholds for environmental review vary under state environmental policy acts (SEPA) and may include project cost or length, whereas NEPA focuses on the potential for significant environmental impacts. Eighteen states have SEPAs required for highway projects, and 17 of these allow for the partial or full adoption of FHWA analyses or documentation to meet state environmental review requirements, according to GAO's survey of these states. State environmental review requirements are generally similar to the FHWA NEPA process—including consideration of impacts, development and evaluation of project alternatives, mitigation of adverse project impacts, interagency coordination, and public involvement—although differences in specific requirements may affect key environmental decisions. For example, for the consideration of environmental impacts of a proposed highway project, a majority of states responding to GAO's survey indicated that their requirements are similar to FHWA's NEPA requirements overall. However, officials in 7 states GAO surveyed reported that their SEPA requirements related to social and environmental justice impacts are less stringent than FHWA's NEPA requirements. In addition, while state public involvement requirements are generally similar to FHWA's NEPA requirements overall, individual requirements vary, ranging from states that have no requirements to allow public involvement to others that may have more stringent requirements than FHWA's. Officials in 3 states told GAO that in practice they match FHWA's NEPA public involvement requirements for state-only reviews to meet public expectations, even if state law requires less. 
Further, in the absence of required federal NEPA reviews, certain federal laws related to protection of parklands and historic preservation may not apply to a project, potentially affecting whether a project is determined to have significant impacts and whether those impacts are mitigated. Officials in 4 of the 18 states in GAO's survey identified instances of potential federal–state duplication in environmental review processes, stemming either from supplemental state requirements or from the lack of alignment between required federal and state review documents. By contrast, 10 of the states in GAO's survey reported that there was no duplication in environmental reviews. Generally, state officials explained that little duplication of effort occurs because state and federal reviews are conducted concurrently, allowing officials to satisfy both sets of requirements with the same analyses rather than replicating work. Further, 7 of the 10 states reporting no duplication allow for the adoption of a NEPA review to fulfill SEPA requirements. Finally, 4 states pointed to potential duplication or overlap that did not stem from the interaction of state and federal requirements, such as the rework necessary to keep environmental reviews up to date.
In December 1994, the Secretary of Housing and Urban Development announced a plan to reinvent the Department to transition it from a “lumbering bureaucracy to a streamlined partner with state and local governments.” With the streamlining, the Secretary expects HUD to reduce its staffing from about 11,900 to 7,500 by the year 2000. In March 1995, the Secretary laid out the envisioned changes for HUD in a plan entitled, HUD Reinvention: From Blueprint To Action. The plan was subsequently updated in January 1996. The HUD reinvention plan acknowledges that FHA is behind the times technologically and increasingly ill-equipped to manage its business. The plan notes that FHA needs to streamline operations and acquire state-of-the-art technology and information systems to transform itself into a results-oriented, financially accountable operation. One of the mandates of the plan is to reduce FHA staffing from about 6,000 to 2,500. As part of the downsizing, FHA’s Office of Single Family Housing is planning to reduce its staff from a 1994 level of 2,700 to 1,150 by the year 2000. The mission of FHA’s Office of Single Family Housing is to expand and maintain affordable home ownership opportunities for those who are unserved or underserved by the private market. Single family housing carries out its mission by insuring private lenders against losses on single family home loans. FHA’s insurance operations target borrowers such as first-time home buyers, low-income and moderate-income buyers with little cash for down payments, residents of inner cities and rural areas with inadequate access to credit, minority and immigrant borrowers, and middle-income families in high cost areas. At the end of fiscal year 1995, FHA had insurance outstanding valued at about $350 billion on mortgages for 6.5 million single family homes.
FHA processed an average of about 1 million applications for mortgage insurance and disposed of properties acquired from borrower defaults on over 50,000 loans annually during fiscal years 1994 and 1995. Single family housing operations consist primarily of four functions: loan processing, quality assurance, loss mitigation and loan servicing, and real property maintenance and disposition. The following summarizes these basic functions.

Loan processing: FHA records data on loans originated by FHA-approved lenders, issues insurance certificates, and conducts underwriting reviews of loan documentation. FHA-approved lenders perform the underwriting tasks necessary to determine whether loans meet FHA’s insurance guidelines.

Quality assurance: FHA reviews selected loans to ensure that approved lenders are originating loans in accordance with FHA’s guidelines.

Loss mitigation and loan servicing: FHA attempts to resolve delinquencies to minimize losses that can result if borrowers default on loans. FHA’s loss mitigation efforts have generally involved (1) placing delinquent loans in the mortgage assignment program, which offered reduced or suspended payments for up to 3 years to allow borrowers to recover from temporary hardships, (2) offering alternative default resolution actions such as refinancing, or (3) using preforeclosure sales of homes. FHA services loans that are in the mortgage assignment program, which includes collecting monthly payments, paying property taxes, and maintaining accounting records.

Property maintenance and disposition: FHA acquires properties from voluntary conveyances by borrowers or foreclosures. FHA inspects and secures the properties, performs necessary repairs, and sells the properties.

The functions performed by FHA generally parallel those performed by other organizations in the single family mortgage industry, such as Fannie Mae, Freddie Mac, and large private mortgage insurance corporations.
However, the functions FHA performs differ from those of the other organizations because of differing business objectives. Fannie Mae and Freddie Mac are government-sponsored, privately owned enterprises that purchase mortgages from lenders and (1) hold them as investments in their portfolios or (2) sell securities that are backed by mortgage pools. Therefore, in addition to the functions FHA performs, Fannie Mae and Freddie Mac also establish purchase prices for mortgages, negotiate purchase contracts, and market mortgage-backed securities. Private mortgage insurers perform the same functions as FHA and perform loan underwriting for a significant portion of the loans they insure. Similar to FHA, private mortgage insurers also accept loans underwritten by lenders to whom they have delegated the authority to initiate insurance. FHA performs some functions that are unique in the mortgage industry. For example, FHA sells houses from its real property inventory to interested nonprofit organizations, states, and local governments, and FHA works with local community development officials in their efforts to increase home ownership opportunities. Because of its mission, FHA accepts higher levels of risk on many of the mortgages it insures. FHA covers 100 percent of losses on the mortgages it insures, whereas Fannie Mae and Freddie Mac share losses with mortgage insurers and private insurers share losses with mortgage lenders. FHA also insures higher risk mortgages because it accepts higher loan-to-value and borrower debt-to-income ratios than the private mortgage insurers. In addition, FHA has a higher proportional volume of defaults that it must manage and a higher volume of real property maintenance and disposition activities. Historically, FHA’s single family housing operations have had significant management control problems in originating insured loans, resolving delinquencies, managing assigned mortgages, and managing property maintenance and disposition activities. 
Information system weaknesses have been cited as a contributing factor for many of FHA’s management control weaknesses. For example, independent audit reports have cited FHA systems that collect delinquency data and track default resolution actions as inadequate to support oversight responsibilities and as factors contributing to inadequate loss mitigation efforts. Similarly, FHA’s information systems have not adequately supported the tracking and monitoring of collection and foreclosure actions on loans in the mortgage assignment program. In addition, the lack of information system support for controlling and accounting for properties assigned to real estate brokers for property disposition was cited as a major cause of the highly publicized HUD scandals in the 1980s. According to HUD’s Federal Managers’ Financial Integrity Act (FMFIA) compliance reports and independent auditors’ reports for fiscal years 1994 and 1995, FHA has corrected system weaknesses in the mortgage assignment and property disposition areas but is still developing systems to support delinquency monitoring and resolution. FHA plans to use its existing information technology capabilities to facilitate some streamlining and staff reduction initiatives, while other initiatives will require new information technology applications. FHA plans to achieve the majority of the single family housing staff reductions by reducing its field staff performing loan processing from about 600 to 310, loss mitigation and loan servicing from 600 to 90, and real property maintenance and disposition from 750 to 75. FHA also plans to reduce its single family housing headquarters staff from about 200 to 85. Some of these reductions will be offset by increases in field staff performing quality assurance, marketing and outreach, legal, and administrative support functions. While the staff levels for each function are not final, single family housing officials expect to reach the projected 1,150 staffing target. 
The planned reduction of loan processing staff is to be facilitated by expanding the use of existing electronic data transfer capabilities to enable the reduction of data entry by FHA staff and the consolidation of operations into fewer locations. New information system support will be needed for FHA’s planned changes to loss mitigation and disposition operations. To reduce its loan processing staff, FHA is (1) expanding the use of its electronic data transfer capabilities so that fewer staff are needed to enter data into systems from paper documents and (2) using its information systems to support the consolidation of operations from 81 offices to 5 offices. FHA established its electronic data transfer capabilities for loan processing and made them available to lenders in 1991. In fiscal year 1995, lenders submitted about 35 percent of loan data electronically. To take further advantage of this capability, FHA plans to ask lenders to increase the use of electronic transfers to deliver loan data. In 1994, FHA began consolidating loan processing operations into fewer offices to increase efficiency. According to officials responsible for single family loan processing operations, variations in workloads have resulted in idle time for loan processing staff at some field offices, while staff at other field offices have been overloaded and processing has been backlogged. Consolidating the work to fewer locations helps eliminate the variations in workload and increase efficiency of operations, thus reducing the number of staff needed to perform the work. FHA is consolidating into its Denver office the loan processing workload that had been performed in 17 field offices. The Denver office is using 42 staff for the loan processing work that was performed by an estimated 96 staff before the consolidation. 
FHA officials in charge of the pilot attributed the increased efficiency to consolidating the work at one site and increasing the use of electronic data transfer to submit loan data to FHA. According to Denver project officials and documentation, FHA persuaded lenders to increase their use of electronic data transfer from less than 40 percent of all submissions before the consolidation to about 90 percent after consolidation. They also said that loans submitted electronically can be processed in about one-third the time it takes to process loans submitted in paper form. When lenders electronically transfer the loan data, the loan processing staff only need to check that data against the paper forms submitted by the lender. If the data are not submitted electronically, the staff have to enter the loan data from the paper forms into FHA’s information systems. With the initial consolidation and processing changes, the Denver office loan processing times were reduced from 5 to 8 days to an average of 2 days, according to FHA. At the time of our review, the Denver consolidation was substantially complete. In April 1996, FHA announced that it would start consolidating loan processing operations in 32 field offices in eastern states into 2 offices—Philadelphia and Atlanta. FHA plans to complete these consolidations in 1997 and start consolidating the remaining offices in 1998. For loss mitigation, FHA plans to phase out its staff-intensive mortgage assignment program—which is expected to reduce loan servicing staff from about 600 to 40 staff—and implement a new default resolution program that is to be performed by 50 staff. The new processes are to be supported by a new system that will employ electronic data transfers for lender reporting of actions to resolve mortgage payment delinquencies. 
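The processing gain described above comes largely from automated handling of electronic submissions: when loan data arrive electronically, a system can validate each record and route only exception cases to staff, rather than having staff key and check every paper form. The following is a minimal illustrative sketch of that triage pattern; the field names, required-field list, and loan limit are hypothetical, not FHA's actual data elements or statutory figure.

```python
# Illustrative sketch only -- not FHA's actual system. Field names and the
# loan limit below are invented for illustration.

HYPOTHETICAL_LOAN_LIMIT = 155_250  # placeholder figure, not an actual FHA limit

REQUIRED_FIELDS = ("case_number", "loan_amount", "borrower_name")

def edit_check(loan):
    """Return a list of exceptions for one loan record; empty means it passes."""
    exceptions = []
    for field in REQUIRED_FIELDS:
        if not loan.get(field):
            exceptions.append("missing " + field)
    if (loan.get("loan_amount") or 0) > HYPOTHETICAL_LOAN_LIMIT:
        exceptions.append("loan amount exceeds statutory limit")
    return exceptions

def triage(submissions):
    """Split electronic submissions into clean records and exception cases."""
    clean, exceptions = [], []
    for loan in submissions:
        problems = edit_check(loan)
        if problems:
            exceptions.append((loan, problems))
        else:
            clean.append(loan)
    return clean, exceptions
```

Under this pattern, staff time shifts from entering data off paper forms to resolving the typically small exception queue, which is consistent with the processing-time reductions reported at the Denver site.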
FHA’s mortgage assignment program was cited in independent auditors’ reports on FHA financial statements and HUD’s FMFIA compliance reports for fiscal years 1994 and 1995 as a material management control weakness because of extensive losses from uncollected payments. Under the mortgage assignment program, FHA (1) pays the lender for defaulted loans, (2) offers the borrowers reduced or suspended payments for up to 3 years to help them overcome temporary hardships, and (3) services the loans while they are in this program. In October 1995, we reported that while the program helps borrowers avoid immediate foreclosure, in the longer term about 52 percent of the borrowers eventually lose their homes through foreclosure. We also reported that FHA’s losses will total about $1.5 billion more than they would have in the absence of the program. As part of HUD’s fiscal year 1996 appropriations act, the Congress included a provision directing FHA to stop accepting delinquent loans into the mortgage assignment program and providing FHA with increased flexibility to use loss mitigation alternatives. In addition to not accepting loans into the program, FHA is selling mortgages from the program portfolio to reduce the workload associated with servicing them. FHA’s current processes and system have also been labor intensive because lenders report delinquency data on paper documents that require manual handling and data entry, and the automated system is capable of producing only reports that list data for each lender and does not summarize data concerning the timeliness of actions, alternatives selected, and results of resolution actions. To improve efficiency, FHA modified its system to accommodate electronic data transfers of the delinquency data from lenders and issued instructions that require all lenders to submit delinquency reports electronically by the end of 1997. 
FHA also plans to develop a new system to track and analyze lenders’ use of available loss mitigation alternatives to resolve mortgage delinquencies. FHA is considering using one or more of three alternatives to replace the current property maintenance and disposition operations and reduce staff. These alternative approaches include (1) using contractors to maintain and dispose of properties, (2) forming and using joint ventures with other organizations (which is similar to using contractors, but the partner will have an investment in the venture) to maintain and dispose of properties, and (3) selling the defaulted mortgages rather than acquiring the properties. According to FHA officials responsible for property maintenance and disposition, they will need new information technology support to track and manage the new operations regardless of the choice made. FHA is testing the use of contractors to perform property maintenance and disposition activities for three field offices and has contracted for feasibility studies of the other two alternatives. FHA plans to complete its analyses of the studies in mid-1997 and decide which of the alternative approaches it will use. FHA’s planned information technology initiatives are similar to those undertaken by other mortgage industry organizations to increase productivity. Additional efficiency and effectiveness improvements may be possible if FHA incorporates other information systems capabilities used by the organizations. The mortgage industry organizations we visited have been using electronic data transfer extensively to eliminate or reduce the manual processes associated with the receipt and processing of data from paper documents. For example, Fannie Mae and Freddie Mac have had lenders submit loan data electronically for more than 2 years. 
These organizations have also consolidated their loan processing, loss mitigation, and property disposition operations to increase efficiency and improve consistency of operations and management controls. As a result of the shift to electronic data transfer and consolidation of operations, officials of these organizations stated that they achieved productivity improvements ranging up to 250 percent for the loan processing function. FHA may be able to achieve greater efficiency and effectiveness if it adopts the automated capabilities that are used by the other mortgage industry organizations. These capabilities include (1) the ability to electronically analyze loan data to ensure that loans meet their underwriting guidelines and (2) the use of computer models to automatically focus quality assurance activities on areas with the most vulnerability, select the most promising default prevention alternatives for delinquent loans, and analyze repair and marketing data to identify options that will minimize losses and provide the greatest returns on property repair and disposition activities. In addition, officials of the mortgage industry organizations we visited told us that they achieved further staff efficiencies through extensive use of graphical user interfaces, integration with other systems, and telecommunications to facilitate data acquisition and correspondence. According to information provided by Freddie Mac and Fannie Mae officials, these organizations are able to process similar loan volumes with about 20 percent of the staff planned for FHA loan processing operations because (1) all essential data for delegated loan underwriting are submitted electronically rather than in paper form, (2) their systems electronically perform all edit checks and comparisons against underwriting criteria, and (3) their systems use mortgage scoring models to automatically identify loans with the greatest risk of default for underwriting and other quality assurance purposes. 
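A mortgage scoring model of the kind described above can be sketched as a function that assigns each loan an estimated default risk so that underwriting and quality assurance effort is concentrated on the riskiest cases. The weights and thresholds below are invented for illustration and are not drawn from any insurer's actual model.

```python
# Hypothetical rule-based mortgage score -- a simplified sketch of how a
# scoring model can rank loans for review. Weights and cutoffs are invented.

def risk_score(loan_to_value, debt_to_income, first_time_buyer):
    """Higher score indicates higher estimated default risk (illustrative scale)."""
    score = 0.0
    score += max(0.0, loan_to_value - 0.80) * 2.0   # penalty above 80% LTV
    score += max(0.0, debt_to_income - 0.36) * 1.5  # penalty above 36% DTI
    if first_time_buyer:
        score += 0.05                               # small illustrative adjustment
    return round(score, 3)

def flag_for_review(loans, top_n=2):
    """Return the top_n riskiest loans; each loan is (case_id, ltv, dti, first_time)."""
    return sorted(loans, key=lambda loan: risk_score(*loan[1:]), reverse=True)[:top_n]
```

Production scoring systems are statistical models estimated from historical loan performance; the point here is only the workflow effect, in which the system ranks every loan automatically and staff review only those flagged.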
Conversely, FHA requires lenders to submit paper files that staff use to check data submitted electronically, enter data not submitted electronically, and perform compliance checks. According to loan processing staff at the Denver pilot site, working with the paper documents consumes over 90 percent of the processing time. The remaining time is used to deal with exceptions, such as notifying lenders of missing or incorrect data. Since Freddie Mac’s and Fannie Mae’s systems have automated edits and compliance checks, their staffs need to work only with exception cases. Freddie Mac’s and Fannie Mae’s systems also use mortgage scoring models to electronically perform underwriting reviews that FHA performs manually with the paper documents in the loan files. Freddie Mac, Fannie Mae, and private mortgage insurers use other models in their systems that have increased staff productivity. These include models that electronically analyze data to help them select (1) the most promising default prevention alternatives for delinquent loans and (2) the repair and marketing options that minimize losses and provide the greatest returns. For example, officials of one organization stated that by using a model to determine whether repairs would increase sales proceeds, they realized $40 million of returns on $15 million of repair investments last year. Officials of another organization said their models have helped to reduce real property disposition losses by about $13,000 for each home. Officials from Fannie Mae, Freddie Mac, and the private mortgage insurers also cited efficiency improvements through the use of graphical user interfaces, integration with other systems so that needed data are readily available, and telecommunications to facilitate the transfer of data from other databases and the transmission of business correspondence.
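The repair model cited above ($40 million of returns on $15 million of repair investments) can be illustrated as a simple expected-value comparison: invest in a repair only when the expected lift in sale proceeds exceeds the repair cost. This is a simplified sketch of the decision logic, not the organization's actual model, and all dollar figures in the example are invented.

```python
# Illustrative repair-decision sketch: repair only when the expected lift in
# sale proceeds exceeds the repair cost. Values here are invented examples.

def repair_decision(as_is_value, repaired_value, repair_cost):
    """True when the expected value lift from repairing exceeds the repair cost."""
    return (repaired_value - as_is_value) > repair_cost

def portfolio_return(properties):
    """Net gain across a portfolio when only worthwhile repairs are made.

    Each property is (as_is_value, repaired_value, repair_cost).
    """
    gain = 0.0
    for as_is, repaired, cost in properties:
        if repair_decision(as_is, repaired, cost):
            gain += (repaired - as_is) - cost
    return gain
```

Applied across a real property inventory, this kind of screen directs repair dollars to the properties where they raise disposition proceeds the most, which is the effect the organizations described.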
In the real property maintenance and disposition function, for example, one organization reported a 50-percent increase in the productivity of workers when the new system was implemented. According to officials, the new system’s graphical user interfaces enabled workers to quickly, easily, and electronically extract data from other systems, analyze investment options, and prepare and send correspondence by facsimile or electronic mail. The Deputy Assistant Secretary for Single Family Housing told us that FHA (1) recognizes the potential for using information technology to further improve the efficiency and effectiveness of operations and (2) intends to incorporate the best available technologies and move to a paperless work environment. However, the official added that FHA faces several challenges in making these information system improvements. For example, FHA officials stated they must deal with budget and procurement limits and the lack of skilled managers and technical staff that are necessary to quickly develop and implement the needed information systems. In this regard, as part of its efforts to improve operations, FHA officials told us that they are considering using the expertise of other organizations. For example, FHA recently entered into an agreement with Freddie Mac to use a modified version of Freddie Mac’s mortgage scoring system for loan origination. This system helps speed the lenders’ loan origination process and reduce their costs by using mortgage scoring models to more efficiently and effectively analyze risks associated with borrower credit and loan characteristics. The system is being modified for FHA’s underwriting criteria and historical experience with insured mortgages. Freddie Mac and FHA are testing the system to determine if lenders can achieve similar benefits for FHA mortgages without adversely affecting applicants who would otherwise qualify for FHA insurance. 
FHA has also established a process for approving lenders’ use of other automated loan origination systems. A strong system of management controls and adequate information and financial management systems are key ingredients in helping federal officials to manage operations and control risks. For many years, single family housing has had significant management control problems in its loan origination, delinquency resolution, and property disposition activities. Information system weaknesses have been cited in FMFIA compliance reports and independent audit reports as contributing factors for the last two management control weaknesses. FHA has been taking corrective actions to address these control weaknesses as part of its ongoing efforts to improve management controls. Some of these actions include the use of information technology. Because FHA is still in the planning stages for its streamlining initiatives, sufficient information is not available at this time to assess the impact that streamlining actions will have on management controls. Appendix II describes the status of efforts to address control weaknesses. Office of Single Family Housing officials recognize that FHA needs to invest in information technology to achieve the efficiency and effectiveness of leading mortgage organizations. In making future decisions on technology acquisitions, the agency can incorporate the technology investment framework established by the new Information Technology Management Reform Act of 1996 (ITMRA), which is based on industry best practices. Some of FHA’s information technology needs are described in single family housing’s 1995 Information Strategy Plan. 
The plan discusses FHA’s current information technology environment and shortfalls and proposes investments to provide improved management controls, expanded capabilities to analyze existing data for evaluating performance and setting policy, and expanded capabilities to automate all critical functions with state-of-the-art technology. The plan was developed using a widely accepted approach to identify needed information technology improvements, including (1) an analysis of the goals and objectives specified in the Office of Single Family Housing’s Business Strategy Plan and (2) a survey of information systems users to identify weaknesses and opportunities to automate tasks and enhance efficiency or effectiveness. In formulating the streamlining plans, the Deputy Assistant Secretary for Single Family Housing and the directors of some program areas contacted officials of Freddie Mac, Fannie Mae, and selected private insurers to discuss how their operations differ from FHA’s. These streamlining efforts include planning operational changes and information technology applications. The efforts have not included data collection and analysis to enable benchmarking comparisons of system support in terms of costs and performance or calculation of the benefits, costs, and potential return on investment for the information technology investments. As FHA continues its planning effort and begins sorting through its investment alternatives, effective implementation of the recently enacted ITMRA could help FHA maximize the value of its investments. Although the act was not in effect at the time FHA selected and began implementing its current initiatives, the act provides an analytical framework that will be helpful as FHA continues to streamline its operations and make improvements using information technology.
The act specifies that where comparable processes and organizations exist in the public or private sectors, the agency head is to quantitatively benchmark agency process performance against such processes in terms of cost, speed, productivity, and quality of outputs and outcomes. ITMRA also requires agency heads to (1) analyze mission-related processes before making investments and (2) implement a process for maximizing the value and assessing and managing the risks of their information technology investments. The process, among other things, is to provide for the use of minimum investment selection criteria, including risk-adjusted return on investment, and specific quantitative and qualitative criteria for comparing and prioritizing alternative information systems projects. In addition to the act, the Office of Management and Budget’s information technology investment guide, issued in November 1995, establishes key elements of the investment process for agencies to follow in selecting, controlling, and evaluating their information technology investments. According to HUD’s Office of Information Technology, the Department plans to have its Technology Investment Board ensure that the investment provisions of ITMRA are implemented. HUD established the Board in fiscal year 1994 to evaluate, rank, and select proposed information technology investments for all HUD components, including FHA. The Board’s charter has been recently revised to charge it with following ITMRA capital planning and performance-based management requirements, including determining whether the functions supported by the proposed investments should be performed by the private sector or another agency. HUD plans to incorporate ITMRA investment requirements, including quantified benefit and risk management criteria, into its strategic investment process. FHA is planning to streamline its single family housing operations to increase efficiency and meet mandated staff reductions. 
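The risk-adjusted return-on-investment criterion described above can be illustrated by discounting a project's expected benefits by its estimated probability of failure and ranking candidate projects on the result. This is one simplified convention for illustration, not a formula prescribed by ITMRA or OMB.

```python
# Illustrative sketch of ranking proposed IT investments by risk-adjusted ROI.
# Discounting benefits by a failure probability is a simplified convention.

def risk_adjusted_roi(benefit, cost, risk):
    """ROI with benefits discounted by risk, the estimated probability of failure (0 to 1)."""
    expected_benefit = benefit * (1.0 - risk)
    return (expected_benefit - cost) / cost

def prioritize(projects):
    """Rank candidate projects, highest risk-adjusted ROI first.

    Each project is a dict with "benefit", "cost", and "risk" keys.
    """
    return sorted(
        projects,
        key=lambda p: risk_adjusted_roi(p["benefit"], p["cost"], p["risk"]),
        reverse=True,
    )
```

Note how a project with larger nominal benefits can rank below a safer one once risk is factored in, which is the behavior the act's selection criteria are intended to produce.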
Information technology figures prominently in the plans to support and enable the operational changes that are being contemplated. Thus far, the planned actions are consistent with, but are not as extensive as, efficiency improvement actions taken by leading mortgage industry organizations. However, the streamlining efforts are still in the early stages and, as these efforts continue, FHA will be making decisions on specific operational changes, information technology applications, and management controls that will determine the efficiency and effectiveness of operations and the achievement of staff reduction goals. In doing so, it can use the recently enacted Information Technology Management Reform Act of 1996 to establish an effective framework for making these information technology decisions. On September 13, 1996, we discussed a draft of this report with officials from FHA’s Office of Single Family Housing. In general, the officials agreed with the facts and conclusions. FHA officials suggested some clarifications to our report, and we have incorporated the suggested changes where appropriate. We are sending copies of this report to Ranking Minority Members of your Subcommittees; interested congressional committees; the Secretary of Housing and Urban Development; the Assistant Secretary for Housing-Federal Housing Commissioner; the Director, Office of Management and Budget; and other interested parties. We will also make copies of this report available to others on request. Please call me at (202) 512-6240 if you or your staffs have further questions. Major contributors to this report are listed in appendix III. 
As requested by the Chairs of the Subcommittee on Government Management, Information and Technology and Subcommittee on Human Resources and Intergovernmental Affairs of the House Committee on Government Reform and Oversight, our objectives were to determine (1) how FHA plans to use information technology to support the streamlining of single family housing operations and reduce staff, (2) whether FHA’s planned initiatives are similar to those undertaken by leading mortgage organizations to increase productivity, and (3) what FHA is doing to ensure that information technology initiatives will maintain or improve management controls over single family housing operations. To determine how FHA plans to use information technology to streamline single family housing operations, we identified specific reinvention initiatives planned to reduce staff, obtained an explanation of how information technology will be used for each of the reinvention initiatives, and determined the basis for the estimated staff reductions from the new uses of information technology or innovative practices. We obtained and reviewed HUD’s plan entitled, HUD Reinvention: From Blueprint To Action and the January 1996 update to the plan. To identify how FHA plans to use information technology in its streamlining initiatives, we (1) obtained a briefing from the Deputy Assistant Secretary for Single Family Housing, (2) interviewed officials in each functional area, and (3) obtained and analyzed documentation on planned streamlining initiatives, including single family housing’s July 1995 Business Strategy Plan and September 1995 Information Strategy Plan. We also reviewed provisions in HUD’s fiscal year 1996 appropriations authorization that allowed changes to FHA’s mortgage assignment program and loss mitigation operations. In addition, we reviewed and analyzed proposed regulations and instructions to lenders on new operating procedures. 
As part of our work to determine what information is available showing that the planned information technology initiatives can help achieve the projected staff reductions and efficiencies, we analyzed information from FHA’s pilot test of consolidated loan processing operations, identified information technology applications and systems used by other mortgage industry organizations, and compared FHA’s reinvented processes and systems to those of the other mortgage organizations. For the consolidated loan processing operations that were pilot tested in FHA’s Denver field office, we interviewed FHA officials, reviewed documentation on operating procedures and workload data, and observed processes and systems in operation. We interviewed officials at Fannie Mae, Freddie Mac, and the two largest private mortgage insurers in the United States—Mortgage Guaranty Insurance Corporation and GE Capital Mortgage Insurance—observed operations, and obtained documentation on processes and systems on their single family mortgage operations. We did not verify data provided by officials of these organizations concerning staff numbers, workload, productivity, and savings produced by information technology investments. We analyzed and performed general comparisons of FHA’s planned operating procedures, information systems, and staffing levels to those of the other mortgage organizations. The comparisons were performed to identify major differences and did not include detailed analyses of work processes. To ascertain what FHA has done to ensure that information technology initiatives will maintain or improve management controls over single family housing operations, we reviewed plans for proposed operations and systems to determine how they specifically address reported control weaknesses.
To identify reported control weaknesses, we reviewed and analyzed HUD’s Federal Managers’ Financial Integrity Act (FMFIA) compliance reports for fiscal years 1994 and 1995, independent auditors’ reports on FHA financial statements for fiscal years 1994 and 1995, and the HUD Inspector General’s reports on single family housing operations. We also interviewed FHA officials to obtain their views on how information technology initiatives will address management control weaknesses. We visited FHA’s Office of Single Family Housing in Washington, D.C.; FHA’s field office in Denver, Colorado; Fannie Mae in Washington, D.C.; Freddie Mac in McLean, Virginia; Mortgage Guaranty Insurance Corporation in Milwaukee, Wisconsin; and GE Capital Mortgage Insurance in Raleigh, North Carolina, and Memphis, Tennessee. We performed our work between December 1995 and August 1996 in accordance with generally accepted government auditing standards. We requested comments from the Secretary of Housing and Urban Development or his designee. On September 13, 1996, we discussed the facts and conclusions in our report with cognizant HUD officials. Their comments are discussed in the “Agency Comments” section of this report. HUD has experienced long-standing deficiencies in its internal controls and information and financial management systems. Specifically, the Office of Single Family Housing has had significant management control weaknesses in loan origination, delinquency resolution, and property disposition. While planned single family housing initiatives may help resolve management control weaknesses, insufficient information is available to assess them because detailed operating procedures and system designs have not yet been developed. In 1992, we reported inadequate oversight of loan origination and underwriting activities as a material management control weakness. 
The problems included fraudulent activities of borrowers, real estate agents, and lenders; approval of loans exceeding the statutory loan limit; inadequate assessment of applicants’ repayment ability; and inflated appraisals. FHA experienced high losses in the single family mortgage program because of improper loan origination activities. HUD’s FMFIA compliance report and independent auditor’s report for fiscal year 1995 discuss FHA’s actions to correct the loan origination and underwriting management control weakness. According to the FMFIA report, the control weakness has been corrected but not yet validated. The corrective actions include standardizing the monitoring of lenders’ loan underwriting practices, and establishing a mechanism to follow up and track sanctions imposed on lenders that do not adhere to FHA underwriting requirements. FHA is also planning to expand staff in the Quality Assurance Division to enhance loan origination oversight as part of its streamlining efforts. FHA plans also include a proposal for a data warehouse system to make data available on lenders to support the underwriting and quality assurance operations. In its fiscal year 1995 report, the independent auditor recommended that FHA continue and accelerate these initiatives to address the control weaknesses. Because FHA’s initiatives to correct its loan origination weaknesses—including the design of the data warehouse system—are still being planned, sufficient information is not available to assess the impact on management controls. In HUD’s FMFIA compliance reports and independent auditors’ reports for fiscal years 1994 and 1995, default monitoring and loss prevention are identified as material management control weaknesses. The FMFIA reports stated that FHA did not emphasize working with borrowers to cure defaults and delinquencies and many lenders did not report on the default status of borrowers. 
Contributing to the management control weaknesses is an inadequate information system to collect delinquency data and track default resolution actions. The lack of management controls has resulted in high default and foreclosure rates and a large inventory of defaulted loans. Industry experience indicates that effective monitoring of delinquent mortgages and early intervention helps those borrowers experiencing financial hardships and helps reduce losses. To correct the default monitoring and loss prevention management control weaknesses, FHA is (1) assessing penalties against lenders who are negligent in reporting defaulted mortgage loans and (2) enhancing the Single Family Default Monitoring System to track lender and servicer use of mitigation tools and provide default rates and other information for evaluating and providing feedback to lenders and servicers. Coupled with these actions, FHA established the Office of Loss Mitigation in 1995 and is implementing new loss mitigation alternatives. In assessing FHA’s efforts to improve loss mitigation operations, the independent auditor’s report on FHA’s fiscal year 1995 financial statements stated that use of the new loss mitigation alternatives should help FHA to reduce claims and losses. However, the report stated that FHA currently does not have the appropriate tools to monitor the use of the loss mitigation programs and their costs. According to an official responsible for loss mitigation operations, FHA is developing the detailed operating procedures and the design and requirements for new systems to support these operations. Since these plans have not yet been developed, it is too early to assess whether the actions will strengthen management controls. In 1992, we reported the disposition of single family foreclosed properties as a material management control weakness that resulted in financial losses. These losses and problems were part of the highly publicized HUD scandals. 
Among the factors contributing to the management control weakness were (1) inadequate oversight of property management, collection of sales proceeds, and services provided by third parties and (2) inadequate information system support of the disposition process. In HUD’s fiscal year 1994 FMFIA compliance report, the property disposition material weakness was listed as corrected. The corrective actions included implementation of an information system to manage the property disposition process. This issue was not identified as a control weakness in the independent auditor’s report for fiscal year 1994. Although the control weakness is now considered to be corrected, it is important to continue adequate management control over this area after it is streamlined. As discussed earlier, FHA is considering which one or more of three streamlining alternatives it will use to perform real property maintenance and disposition and foreclosed mortgage disposition activities. FHA’s decision will affect the management controls and information systems support requirements. Until decisions are made and detailed plans are prepared, sufficient information is not available to assess how the changes will affect management controls.

Bennet Severson, Senior Evaluator
Joe Sikich, Information Systems Analyst
Pursuant to a congressional request, GAO reviewed the Federal Housing Administration's (FHA) streamlining plans, focusing on: (1) FHA plans to use information technology to support the streamlining of single-family housing operations and reduce staff; (2) similarities between FHA initiatives and those undertaken by leading mortgage organizations to increase productivity; and (3) FHA efforts to ensure that technology initiatives will maintain or improve management controls over single-family housing operations. GAO found that: (1) FHA plans to use existing information technology capabilities to facilitate some streamlining and staff reduction initiatives, while other initiatives will require new information technology applications; (2) FHA plans to reduce single family housing staff from its 1994 level of about 2,700 to 1,150 in the year 2000 by: (a) expanding the use of existing electronic data transfer capabilities and using information systems to support the consolidation of loan processing operations from 81 offices to 5 offices; (b) implementing new loss mitigation processes that will be supported with a new information system; and (c) using information technology to support new processes associated with conducting real property maintenance and disposition operations or selling defaulted mortgage notes rather than foreclosing on properties; (3) FHA plans to incorporate information technology initiatives that are similar to, but not as extensive as, those used by other mortgage industry organizations to improve productivity; (4) further improvements may be achieved if FHA adopts other automated capabilities used by these organizations; (5) some of FHA's planned changes may help resolve management control weaknesses or maintain adequate controls for loan origination, loss mitigation, and property disposition; (6) however, GAO was unable to assess the impact of the planned changes because FHA has not yet made all of the decisions, developed the detailed operating 
procedures, or identified the information systems requirements that will be needed to implement the planned initiatives and management controls; (7) FHA officials recognize that additional information technology investments are needed to achieve the efficiency and effectiveness of other mortgage organizations; (8) however, they added that they must deal with budget and procurement limits and technical skills shortfalls to make needed improvements; (9) in this regard, FHA is considering using the expertise of other organizations; (10) in making future technology acquisitions, FHA can take advantage of the recently enacted Information Technology Management Reform Act of 1996, which establishes a framework for information technology decisionmaking and implementation based on best industry practices.
Critical infrastructures are systems and assets, whether physical or virtual, so vital to our nation that their incapacity or destruction would have a debilitating impact on national security, economic well-being, public health or safety, or any combination of these. Critical infrastructure includes, among other things, banking and financial institutions, telecommunications networks, and energy production and transmission facilities, most of which are owned by the private sector. As these critical infrastructures have become increasingly dependent on computer systems and networks, the interconnectivity between information systems, the Internet, and other infrastructures creates opportunities for attackers to disrupt critical systems, with potentially harmful effects. Because the private sector owns most of the nation’s critical infrastructures, forming effective partnerships between the public and private sectors is vital to successfully protect cyber-reliant critical assets from a multitude of threats, including terrorists, criminals, and hostile nations. Federal law and policy have established roles and responsibilities for federal agencies to work with the private sector and other entities in enhancing the cyber and physical security of critical public and private infrastructures. These policies stress the importance of coordination between the government and the private sector to protect the nation’s computer-reliant critical infrastructure. In addition, they establish the Department of Homeland Security (DHS) as the focal point for the security of cyberspace—including analysis, warning, information sharing, vulnerability reduction, mitigation efforts, and recovery efforts for public and private critical infrastructure and information systems. 
Federal policy also establishes critical infrastructure sectors, assigns federal agencies to each sector (known as sector lead agencies), and encourages private sector involvement. Table 1 shows the 18 critical infrastructure sectors and the lead agencies assigned to each sector. In May 1998, Presidential Decision Directive 63 (PDD-63) established critical infrastructure protection as a national goal and presented a strategy for cooperative efforts by the government and the private sector to protect the physical and cyber-based systems essential to the minimum operations of the economy and the government. Among other things, this directive encouraged the development of information sharing and analysis centers (ISAC) to serve as mechanisms for gathering, analyzing, and disseminating information on cyber infrastructure threats and vulnerabilities to and from owners and operators of the sectors and the federal government. For example, the Financial Services, Electricity Sector, IT, and Communications ISACs represent sectors or subcomponents of sectors. The Homeland Security Act of 2002 created the Department of Homeland Security. Among other things, DHS was assigned with the following critical infrastructure protection responsibilities: (1) developing a comprehensive national plan for securing the key resources and critical infrastructures of the United States, (2) recommending measures to protect those key resources and critical infrastructures in coordination with other groups, and (3) disseminating, as appropriate, information to assist in the deterrence, prevention, and preemption of or response to terrorist attacks. In 2003, the National Strategy to Secure Cyberspace was issued, which assigned DHS multiple leadership roles and responsibilities in protecting the nation’s cyber critical infrastructure. 
These include (1) developing a comprehensive national plan for critical infrastructure protection; (2) developing and enhancing national cyber analysis and warning capabilities; (3) providing and coordinating incident response and recovery planning, including conducting incident response exercises; (4) identifying, assessing, and supporting efforts to reduce cyber threats and vulnerabilities, including those associated with infrastructure control systems; and (5) strengthening international cyberspace security. PDD-63 was superseded in December 2003 when Homeland Security Presidential Directive 7 (HSPD-7) was issued. HSPD-7 defined additional responsibilities for DHS, sector-specific agencies, and other departments and agencies. The directive instructs sector-specific agencies to identify, prioritize, and coordinate the protection of critical infrastructures to prevent, deter, and mitigate the effects of attacks. It also makes DHS responsible for, among other things, coordinating national critical infrastructure protection efforts and establishing uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities within and across sectors. As part of its implementation of the cyberspace strategy and other requirements to establish cyber analysis and warning capabilities for the nation, DHS established the United States Computer Emergency Readiness Team (US-CERT) to help protect the nation’s information infrastructure. US-CERT is the focal point for the government’s interaction with federal and private-sector entities 24 hours a day, 7 days a week, and provides cyber-related analysis, warning, information-sharing, major incident response, and national-level recovery efforts. Threats to systems supporting critical infrastructure are evolving and growing. 
In February 2011, the Director of National Intelligence testified that, in the past year, there had been a dramatic increase in malicious cyber activity targeting U.S. computers and networks, including a more than tripling of the volume of malicious software since 2009. Different types of cyber threats from numerous sources may adversely affect computers, software, networks, organizations, entire industries, or the Internet itself. Cyber threats can be unintentional or intentional. Unintentional threats can be caused by software upgrades or maintenance procedures that inadvertently disrupt systems. Intentional threats include both targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, disgruntled employees, foreign nations engaged in espionage and information warfare, and terrorists. The potential impact of these threats is amplified by the connectivity between information systems, the Internet, and other infrastructures, creating opportunities for attackers to disrupt telecommunications, electrical power, and other critical services. For example, in May 2008, we reported that the Tennessee Valley Authority’s (TVA) corporate network contained security weaknesses that could lead to the disruption of control systems networks and devices connected to that network. We made 19 recommendations to improve the implementation of information security program activities for the control systems governing TVA’s critical infrastructures and 73 recommendations to address specific weaknesses in security controls. TVA concurred with the recommendations and has taken steps to implement them. As government, private sector, and personal activities continue to move to networked operations, the threat will continue to grow. Recent reports of cyber attacks illustrate that the cyber-based attacks on cyber-reliant critical infrastructures could have a debilitating impact on national and economic security. 
In June 2011, a major bank reported that hackers broke into its systems and gained access to the personal information of hundreds of thousands of customers. Through the bank’s online banking system, the attackers were able to view certain private customer information. In March 2011, according to the Deputy Secretary of Defense, a cyber attack on a defense company’s network captured 24,000 files containing Defense Department information. He added that nations typically launch such attacks, but there is a growing risk of terrorist groups and rogue states developing similar capabilities. In March 2011, a security company reported that it had suffered a sophisticated cyber attack that removed information about its two-factor authentication tool. According to the company, the extracted information did not enable successful direct attacks on any of its customers; however, the information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack. In February 2011, media reports stated that computer hackers broke into and stole proprietary information worth millions of dollars from the networks of six U.S. and European energy companies. In July 2010, a sophisticated computer attack, known as Stuxnet, was discovered. It targeted control systems used to operate industrial processes in the energy, nuclear, and other critical sectors. It is designed to exploit a combination of vulnerabilities to gain access to its target and modify code to change the process. In January 2010, it was reported that at least 30 technology companies—most in Silicon Valley, California—were victims of intrusions. The cyber attackers infected computers with hidden programs allowing unauthorized access to files that may have included the companies’ computer security systems, crucial corporate data, and software source code. 
Over the past 2 years, the federal government has taken a number of steps aimed at addressing cyber threats to critical infrastructure. In early 2009, the President initiated a review of the nation’s cyberspace policy that specifically assessed the missions and activities associated with the nation’s information and communication infrastructure and issued the results in May of that year. The review resulted in 24 near- and mid-term recommendations to address organizational and policy changes to improve the current U.S. approach to cybersecurity. These included, among other things, that the President appoint a cybersecurity policy official for coordinating the nation’s cybersecurity policies and activities. In December 2009, the President appointed a Special Assistant to the President and Cybersecurity Coordinator to serve in this role and act as the central coordinator for the nation’s cybersecurity policies and activities. Among other things, this official is to chair the primary policy coordination body within the Executive Office of the President responsible for directing and overseeing issues related to achieving a reliable global information and communications infrastructure. Also in 2009, DHS issued an updated version of its National Infrastructure Protection Plan (NIPP). The NIPP is intended to provide the framework for a coordinated national approach to addressing the full range of physical, cyber, and human threats and vulnerabilities that pose risks to the nation’s critical infrastructures. The NIPP relies on a sector partnership model as the primary means of coordinating government and private-sector critical infrastructure protection efforts. Under this model, each sector has both a government council and a private sector council to address sector-specific planning and coordination. 
The government and private-sector councils are to work in tandem to create the context, framework, and support for the coordination and information-sharing activities required to implement and sustain each sector’s infrastructure protection efforts. The council framework allows for the involvement of representatives from all levels of government and the private sector, to facilitate collaboration and information-sharing in order to assess events accurately, formulate risk assessments, and determine appropriate protective measures. The establishment of private-sector councils is encouraged under the NIPP model, and these councils are to be the principal entities for coordinating with the government on a wide range of critical infrastructure protection (CIP) activities and issues. Using the NIPP partnership model, the private and public sectors coordinate to manage the risks related to cyber CIP by, among other things, sharing information, providing resources, and conducting exercises. In October 2009, DHS established its National Cybersecurity and Communications Integration Center (NCCIC) to coordinate national response efforts and work directly with federal, state, local, tribal, and territorial governments and private-sector partners. The NCCIC integrates the functions of the National Cyber Security Center, US-CERT, the National Coordinating Center for Telecommunications, and the Industrial Control Systems CERT into a single coordination and integration center and co-locates other essential public and private sector cybersecurity partners. In September 2010, DHS issued an interim version of its national cyber incident response plan. The purpose of the plan is to establish the strategic framework for organizational roles, responsibilities, and actions to prepare for, respond to, and begin to coordinate recovery from a cyber incident. 
It aims to tie various policies and doctrine together into a single tailored, strategic, cyber-specific plan designed to assist with operational execution, planning, and preparedness activities and to guide short-term recovery efforts. DHS has also coordinated several cyber attack simulation exercises to strengthen public and private incident response capabilities. In September 2010, DHS conducted the third of its Cyber Storm exercises, which are large-scale simulations of multiple concurrent cyber attacks. (DHS previously conducted Cyber Storm exercises in 2006 and 2008.) The third Cyber Storm exercise was undertaken to test the National Cyber Incident Response Plan, and its participants included representatives from federal departments and agencies, states, ISACs, foreign countries, and the private sector. Despite the actions taken by several successive administrations and the executive branch agencies, significant challenges remain in enhancing the protection of cyber-reliant critical infrastructures.

- Implementing actions recommended by the president’s cybersecurity policy review. In October 2010, we reported that of the 24 near- and mid-term recommendations made by the presidentially initiated policy review to improve the current U.S. approach to cybersecurity, only 2 had been implemented and 22 were partially implemented. Officials from key agencies involved in these efforts (e.g., DHS, the Department of Defense, and the Office of Management and Budget) stated that progress had been slower than expected because agencies lacked assigned roles and responsibilities and because several of the mid-term recommendations would require action over multiple years. We recommended that the national Cybersecurity Coordinator designate roles and responsibilities for each recommendation and develop milestones and plans, including measures, to show agencies’ progress and performance. 
- Updating the national strategy for securing the information and communications infrastructure. In March 2009, we testified on the needed improvements to the nation’s cybersecurity strategy. In preparation for that testimony, we convened a panel of experts that included former federal officials, academics, and private-sector executives. The panel highlighted 12 key improvements that, in its view, were essential to improving the strategy and our national cybersecurity posture, including (1) the development of a national strategy that clearly articulates objectives, goals, and priorities; (2) focusing more actions on prioritizing assets and functions, assessing vulnerabilities, and reducing vulnerabilities than on developing plans; and (3) bolstering public-private partnerships through an improved value proposition and use of incentives.

- Reassessing the cyber sector-specific planning approach to critical infrastructure protection. In September 2009, we reported that, among other things, sector-specific agencies had yet to update their respective sector-specific plans to fully address key DHS cyber security criteria. In addition, most agencies had not updated the actions and reported progress in implementing them as called for by DHS guidance. We noted that these shortfalls were evidence that the sector planning process has not been effective and thus leaves the nation in the position of not knowing precisely where it stands in securing cyber critical infrastructures. We recommended that DHS (1) assess whether existing sector-specific planning processes should continue to be the nation’s approach to securing cyber and other critical infrastructure and consider whether other options would provide more effective results and (2) collaborate with the sectors to develop plans that fully address cyber security requirements. DHS concurred with the recommendations and has taken action to address them. 
For example, the department reported that it undertook a study in 2009 that determined that the existing sector-specific planning process, in conjunction with other related efforts planned and underway, should continue to be the nation’s approach. In addition, at about this time, the department met and worked with sector officials to update sector plans with the goal of fully addressing cyber-related requirements.

- Strengthening the public-private partnerships for securing cyber-critical infrastructure. The expectations of private sector stakeholders are not being met by their federal partners in areas related to sharing information about cyber-based threats to critical infrastructure. In July 2010, we reported that federal partners, such as DHS, were taking steps that may address the key expectations of the private sector, including developing new information-sharing arrangements. We also reported that public sector stakeholders believed that improvements could be made to the partnership, including improving private sector sharing of sensitive information. We recommended, among other things, that the national Cybersecurity Coordinator and DHS work with their federal and private-sector partners to enhance information-sharing efforts, including leveraging a central focal point for sharing information among the private sector, civilian government, law enforcement, the military, and the intelligence community. DHS concurred with this recommendation and officials stated that they have made progress in addressing the recommendation. We will be determining the extent of that progress as part of our audit follow-up efforts.

- Enhancing cyber analysis and warning capabilities. DHS’s US-CERT has not fully addressed 15 key attributes of cyber analysis and warning capabilities that we identified. 
As a result, we recommended in July 2008 that the department address shortfalls associated with the 15 attributes in order to fully establish a national cyber analysis and warning capability as envisioned in the national strategy. DHS agreed in large part with our recommendations and has reported that it is taking steps to implement them. We are currently working with DHS officials to determine the status of their efforts to address these recommendations.

- Addressing global cybersecurity and governance. Based on our review, the U.S. government faces a number of challenges in formulating and implementing a coherent approach to global aspects of cyberspace, including, among other things, providing top-level leadership, developing a comprehensive strategy, and ensuring cyberspace-related technical standards and policies do not pose unnecessary barriers to U.S. trade. Specifically, we determined that the national Cybersecurity Coordinator’s authority and capacity to effectively coordinate and forge a coherent national approach to cybersecurity were still under development. In addition, the U.S. government had not documented a clear vision of how the international efforts of federal entities, taken together, support overarching national goals. Further, we learned that some countries had attempted to mandate compliance with their indigenously developed cybersecurity standards in a manner that risked discriminating against U.S. companies. We recommended that, among other things, the Cybersecurity Coordinator develop with other relevant entities a comprehensive U.S. global cyberspace strategy that, among other things, addresses technical standards and policies while taking into consideration U.S. trade. In May 2011, the White House released the International Strategy for Cyberspace: Prosperity, Security, and Openness in a Networked World. We will be determining the extent to which this strategy addresses our recommendation as part of our audit follow-up efforts. 
- Securing the modernized electricity grid. In January 2011, we reported on progress and challenges in developing, adopting, and monitoring cybersecurity guidelines for the modernized, IT-reliant electricity grid (referred to as the “smart grid”). Among other things, we identified six key challenges to securing smart grid systems. These included, among others, (1) a lack of security features being built into certain smart grid systems, (2) a lack of an effective mechanism for sharing information on cybersecurity within the electric industry, and (3) a lack of electricity industry metrics for evaluating cybersecurity. We also reported that the Department of Commerce’s National Institute of Standards and Technology (NIST) had developed and issued a first version of its smart grid cybersecurity guidelines. While NIST largely addressed key cybersecurity elements that it had planned to include in the guidelines, it did not address an important element essential to securing smart grid systems that it had planned to include—addressing the risk of attacks that use both cyber and physical means. NIST officials said that they intend to update the guidelines to address the missing elements, and have drafted a plan to do so. While a positive step, the plan and schedule were still in draft form. We recommended that NIST finalize its plan and schedule for updating its cybersecurity guidelines to incorporate missing elements; NIST agreed with this recommendation.

In addition to the challenges we have previously identified, we have ongoing work in two key areas related to the protection of cyber critical infrastructures. The first is to identify the extent to which cybersecurity guidance has been specified within selected critical infrastructure sectors and to identify areas of commonality and difference between sector-specific guidance and guidance applicable to federal agencies. 
The second is a study of risks associated with the supply chains used by federal agencies to procure IT equipment, software, or services, along with the extent to which national security-related agencies are taking risk-based approaches to supply-chain management. We plan to issue the results of this work in November 2011 and early 2012, respectively.

In summary, the threats to information systems are evolving and growing, and systems supporting our nation’s critical infrastructure are not sufficiently protected to consistently thwart the threats. While actions have been taken, the administration and executive branch agencies need to address the challenges in this area to improve our nation’s cybersecurity posture, including enhancing cyber analysis and warning capabilities and strengthening the public-private partnerships for securing cyber-critical infrastructure. Until these actions are taken, our nation’s cyber critical infrastructure will remain vulnerable.

Mr. Chairman, this completes my statement. I would be happy to answer any questions you or other members of the Subcommittee have at this time.

If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected]. Other key contributors to this statement include Michael Gilmore (Assistant Director), Bradley Becker, Kami Corbett, and Lee McCracken.

Cybersecurity: Continued Attention Needed to Protect Our Nation’s Critical Infrastructure and Federal Information Systems. GAO-11-463T. Washington, D.C.: March 16, 2011.

High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.

Electricity Grid Modernization: Progress Being Made on Cybersecurity Guidelines, but Key Challenges Remain to be Addressed. GAO-11-117. Washington, D.C.: January 12, 2011.

Information Security: Federal Agencies Have Taken Steps to Secure Wireless Networks, but Further Actions Can Mitigate Risk. GAO-11-43. Washington, D.C.: November 30, 2010.
Cyberspace Policy: Executive Branch Is Making Progress Implementing 2009 Policy Review Recommendations, but Sustained Leadership Is Needed. GAO-11-24. Washington, D.C.: October 6, 2010.

Information Security: Progress Made on Harmonizing Policies and Guidance for National Security and Non-National Security Systems. GAO-10-916. Washington, D.C.: September 15, 2010.

Information Management: Challenges in Federal Agencies’ Use of Web 2.0 Technologies. GAO-10-872T. Washington, D.C.: July 22, 2010.

Critical Infrastructure Protection: Key Private and Public Cyber Expectations Need to Be Consistently Addressed. GAO-10-628. Washington, D.C.: July 15, 2010.

Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance. GAO-10-606. Washington, D.C.: July 2, 2010.

Cybersecurity: Continued Attention Is Needed to Protect Federal Information Systems from Evolving Threats. GAO-10-834T. Washington, D.C.: June 16, 2010.

Cybersecurity: Key Challenges Need to Be Addressed to Improve Research and Development. GAO-10-466. Washington, D.C.: June 3, 2010.

Information Security: Federal Guidance Needed to Address Control Issues with Implementing Cloud Computing. GAO-10-513. Washington, D.C.: May 27, 2010.

Cybersecurity: Progress Made but Challenges Remain in Defining and Coordinating the Comprehensive National Initiative. GAO-10-338. Washington, D.C.: March 5, 2010.

Critical Infrastructure Protection: DHS Needs to Fully Address Lessons Learned from Its First Cyber Storm Exercise. GAO-08-825. Washington, D.C.: September 9, 2008.

Information Security: TVA Needs to Address Weaknesses in Control Systems and Networks. GAO-08-526. Washington, D.C.: May 21, 2008.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Increasing computer interconnectivity, such as the growth of the Internet, has revolutionized the way our government, our nation, and much of the world communicate and conduct business. However, this widespread interconnectivity poses significant risks to the government's and the nation's computer systems, and to the critical infrastructures they support. These critical infrastructures include systems and assets--both physical and virtual--that are essential to the nation's security, economic prosperity, and public health, such as financial institutions, telecommunications networks, and energy production and transmission facilities. Because most of these infrastructures are owned by the private sector, establishing effective public-private partnerships is essential to securing them from pervasive cyber-based threats. Federal law and policy call for federal entities, such as the Department of Homeland Security (DHS), to work with private-sector partners to enhance the physical and cyber security of these critical infrastructures. GAO is providing a statement describing (1) cyber threats facing cyber-reliant critical infrastructures; (2) recent actions the federal government has taken, in partnership with the private sector, to identify and protect cyber-reliant critical infrastructures; and (3) ongoing challenges to protecting these infrastructures. In preparing this statement, GAO relied on its previously published work in the area. The threats to systems supporting critical infrastructures are evolving and growing. In a February 2011 testimony, the Director of National Intelligence noted that there has been a dramatic increase in cyber activity targeting U.S. computers and systems in the last year, including a more than tripling of the volume of malicious software since 2009. Varying types of threats from numerous sources can adversely affect computers, software, networks, organizations, entire industries, or the Internet itself. 
These include both unintentional and intentional threats, and may come in the form of targeted or untargeted attacks from criminal groups, hackers, disgruntled employees, hostile nations, or terrorists. The interconnectivity between information systems, the Internet, and other infrastructures can amplify the impact of these threats, potentially affecting the operations of critical infrastructure, the security of sensitive information, and the flow of commerce. Recent reported incidents include hackers accessing the personal information of hundreds of thousands of customers of a major U.S. bank and a sophisticated computer attack targeting control systems used to operate industrial processes in the energy, nuclear, and other critical sectors. Over the past 2 years, the federal government, in partnership with the private sector, has taken a number of steps to address threats to cyber critical infrastructure. In early 2009, the White House conducted a review of the nation's cyberspace policy that addressed the missions and activities associated with the nation's information and communications infrastructure. The results of the review led, among other things, to the appointment of a national Cybersecurity Coordinator with responsibility for coordinating the nation's cybersecurity policies and activities. Also in 2009, DHS updated its National Infrastructure Protection Plan, which provides a framework for addressing threats to critical infrastructures and relies on a public-private partnership model for carrying out these efforts. DHS has also established a communications center to coordinate national response efforts to cyber attacks and work directly with other levels of government and the private sector and has conducted several cyber attack simulation exercises. 
Despite recent actions taken, a number of significant challenges remain to enhancing the security of cyber-reliant critical infrastructures, such as (1) implementing actions recommended by the president's cybersecurity policy review; (2) updating the national strategy for securing the information and communications infrastructure; (3) reassessing DHS's planning approach to critical infrastructure protection; (4) strengthening public-private partnerships, particularly for information sharing; (5) enhancing the national capability for cyber warning and analysis; (6) addressing global aspects of cybersecurity and governance; and (7) securing the modernized electricity grid, referred to as the "smart grid." In prior reports, GAO has made many recommendations to address these challenges. GAO also continues to identify protecting the nation's cyber critical infrastructure as a governmentwide high-risk area.
The AEA, as amended, sets forth the procedures and requirements for the U.S. government’s negotiating, proposing, and entering into nuclear cooperation agreements with foreign partners. The AEA, as amended, requires that U.S. peaceful nuclear cooperation agreements contain the following nine provisions:

1. Safeguards: Safeguards, as agreed to by the parties, are to be maintained over all nuclear material and equipment transferred, and all special nuclear material used in or produced through the use of such nuclear material and equipment, as long as the material or equipment remains under the jurisdiction or control of the cooperating party, irrespective of the duration of other provisions in the agreement or whether the agreement is terminated or suspended for any reason. Such safeguards are known as “safeguards in perpetuity.”

2. Full-scope IAEA safeguards as a condition of supply: In the case of non-nuclear weapons states, continued U.S. nuclear supply is to be conditioned on the maintenance of IAEA “full-scope” safeguards over all nuclear materials in all peaceful nuclear activities within the territory, under the jurisdiction, or subject to the control of the cooperating party.

3. Peaceful use guaranty: The cooperating party must guarantee that it will not use the transferred nuclear materials, equipment, or sensitive nuclear technology, or any special nuclear material produced through the use of such, for any nuclear explosive device, for research on or development of any nuclear explosive device, or for any other military purpose.

4.
Right to require return: An agreement with a non-nuclear weapon state must stipulate that the United States has the right to require the return of any transferred nuclear materials and equipment, and any special nuclear material produced through the use thereof, if the cooperating party detonates a nuclear device, or terminates or abrogates an agreement providing for IAEA safeguards.

5. Physical security: The cooperating party must guarantee that it will maintain adequate physical security for transferred nuclear material and any special nuclear material used in or produced through the use of any material, or production or utilization facilities transferred pursuant to the agreement.

6. Retransfer rights: The cooperating party must guarantee that it will not transfer any material, Restricted Data, or any production or utilization facility transferred pursuant to the agreement, or any special nuclear material subsequently produced through the use of any such transferred material, or facilities, to unauthorized persons or beyond its jurisdiction or control, without the consent of the United States.

7. Restrictions on enrichment or reprocessing of U.S.-obligated material: The cooperating party must guarantee that no material transferred, or used in, or produced through the use of transferred material or production or utilization facilities, will be reprocessed or enriched, or with respect to plutonium, uranium-233, HEU, or irradiated nuclear materials, otherwise altered in form or content without the prior approval of the United States.

8.
Storage facility approval: The cooperating party must guarantee not to store any plutonium, uranium-233, or HEU that was transferred pursuant to a cooperation agreement, or recovered from any source or special nuclear material transferred, or from any source or special nuclear material used in a production facility or utilization facility transferred pursuant to the cooperation agreement, in a facility that has not been approved in advance by the United States.

9. Additional restrictions: The cooperating party must guarantee that any special nuclear material, production facility, or utilization facility produced or constructed under the jurisdiction of the cooperating party by or through the use of transferred sensitive nuclear technology, will be subject to all the requirements listed above.

In addition, the United States is a party to the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). The NPT binds each of the treaty’s signatory states that had not manufactured and exploded a nuclear weapon or other nuclear explosive device prior to January 1, 1967 (referred to as non-nuclear weapon states) to accept safeguards as set forth in an agreement to be concluded with IAEA. Under the safeguards system, IAEA, among other things, inspects facilities and locations containing nuclear material, as declared by each country, to verify its peaceful use. IAEA standards for safeguards agreements provide that the agreements should commit parties to establish and maintain a system of accounting for nuclear material, with a view to preventing diversion of nuclear energy from peaceful uses, and reporting certain data to IAEA.

IAEA’s security guidelines provide the basis by which the United States and other countries generally classify the categories of protection that should be afforded nuclear material, based on the type, quantity, and enrichment of the nuclear material.
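As an illustration of how this kind of category-of-protection scheme works, the threshold logic can be sketched in code. This is a simplified, hypothetical sketch: it models only the two Category I thresholds cited in this report (2 kilograms or more of unirradiated plutonium; 5 kilograms or more of U-235 in fresh HEU) and lumps everything else together, whereas IAEA's actual guidelines define graduated Category II and III bands by material type, quantity, enrichment, and irradiation status.

```python
# Hypothetical sketch of IAEA-style material categorization.
# Only the Category I thresholds quoted in this report are modeled;
# the real guidelines define additional Category II/III quantity bands.

def protection_category(material: str, kilograms: float, irradiated: bool) -> str:
    """Return a rough protection category for a declared quantity of material."""
    if not irradiated:
        # 2 kg or more of unirradiated ("separated") plutonium
        if material == "plutonium" and kilograms >= 2:
            return "Category I"
        # 5 kg or more of U-235 contained in unirradiated ("fresh") HEU
        if material == "U-235 in HEU" and kilograms >= 5:
            return "Category I"
    # Placeholder for the less stringent Category II/III bands
    return "below Category I"

print(protection_category("plutonium", 2.5, irradiated=False))     # Category I
print(protection_category("U-235 in HEU", 4.0, irradiated=False))  # below Category I
```

Category I material carries the most stringent recommended physical protection measures, which is why the distinction matters for the accounting questions discussed below.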
For example, Category I material is defined as 2 kilograms or more of unirradiated or “separated” plutonium or 5 kilograms of uranium-235 contained in unirradiated or “fresh” HEU and has the most stringent set of recommended physical protection measures. The recommended physical protection measures for Category II and Category III nuclear materials are less stringent. Appendix III contains further details on the categorization of nuclear material.

DOE, NRC, and State are not able to fully account for U.S. nuclear material overseas that is subject to nuclear cooperation agreement terms because the agreements do not stipulate systematic reporting of such information, and there is no U.S. policy to pursue or obtain such information. Section 123 of the AEA, as amended, does not require nuclear cooperation agreements to contain provisions stipulating that partners report information on the amount, status, or location (facility) of special nuclear material subject to the agreement terms. However, U.S. nuclear cooperation agreements generally require that partners report inventory information upon request, although DOE and NRC have not systematically sought such data.

We requested from multiple offices at DOE and NRC a current and comprehensive inventory of U.S. nuclear material overseas, to include country, site, or facility, and whether the quantity of material was rated as Category I or Category II material. However, neither agency has provided such an inventory. NMMSS does not contain the data necessary to maintain an inventory of U.S. special nuclear material overseas. DOE, NRC, and State have not pursued annual inventory reconciliations of nuclear material subject to U.S. cooperation agreement terms with all foreign partners that would provide the U.S. government with better information about where such material is held. Furthermore, according to DOE, NRC, and State officials, no U.S. law or policy directs U.S.
agencies to obtain information regarding the location and disposition of U.S. nuclear material at foreign facilities. Section 123 of the AEA, as amended, does not require nuclear cooperation agreements to contain provisions stipulating that partners report information on the amount, status, or location (facility) of special nuclear material subject to the agreement terms. However, the texts of most U.S. nuclear cooperation agreements contain a provision calling for each partner to maintain a system of material accounting and control and to do so consistent with IAEA safeguards standards or agreements. In addition, we found that all agreements, except three negotiated prior to 1978 and the U.S.-China agreement, contain a provision that the other party shall report, or shall authorize the IAEA to report, inventory information upon request. However, according to DOE and NRC officials, with the exception of the administrative arrangements with five partners, the United States has not requested such information from all partners on an annual or systematic basis. Nonetheless, the AEA requires U.S. nuclear cooperation agreements to include terms that, among other things, obligate partners to obtain U.S. approval for the transfer, retransfer, enrichment and reprocessing, and the storage of U.S.-obligated uranium-233, HEU, or other nuclear materials that have been irradiated. In addition, according to DOE and NRC officials, the United States obtains written assurances from partners in advance of each transfer of U.S. nuclear material that commits them to maintain the transferred nuclear material according to the terms of its nuclear cooperation agreement with the United States. DOE and NRC officials told us these assurances help the United States ensure that partner countries comply with the terms of the nuclear cooperation agreement. 
In addition, IAEA, DOE, NRC, and State officials told us that IAEA’s safeguards activities provide a level of assurance that nuclear material is accounted for at partner facilities. The safeguards system, which has been a cornerstone of U.S. efforts to prevent nuclear proliferation, allows IAEA to independently verify that non-nuclear weapons states that signed the NPT are complying with its requirements. Under the safeguards system, IAEA, among other things, inspects facilities and locations containing nuclear material declared by countries to verify its peaceful use. Inspectors from IAEA’s Department of Safeguards verify that the quantities of nuclear material that these non-nuclear weapons states declared to IAEA are not diverted for other uses. IAEA considers such information confidential and does not share it with its member states, including the United States, unless the parties have agreed that IAEA can share the information. IAEA’s inspectors do not verify nuclear material by country of origin or associated obligation. DOE, State, and IAEA officials told us that, because IAEA does not track the obligation of the material under safeguards, IAEA may notice discrepancies in nuclear material balances through periodic reviews of countries’ shipping records. However, these officials said that IAEA does not have the ability to identify whether and what volume of nuclear material at partner country facilities is U.S.-obligated and therefore subject to the terms of U.S. nuclear cooperation agreements.

DOE and NRC do not have a comprehensive, detailed, current inventory of U.S. nuclear material overseas that would enable the United States to identify material subject to U.S. nuclear cooperation agreement terms. We requested from multiple offices at DOE and NRC a current and comprehensive inventory of U.S. nuclear material overseas, to include country, site, or facility, and whether the quantity of material was Category I or Category II.
However, the agencies have not provided such a list. DOE officials from the Office of Nonproliferation and International Security told us that they have multiple mechanisms to account for the amount of U.S.-obligated nuclear material at foreign facilities. They stated that they use NMMSS records to obtain information regarding U.S. nuclear material inventories held in other countries. However, NMMSS officials told us that NMMSS was an accurate record of material exports from the United States, but that it should not be used to estimate current inventories. In addition, NMMSS officials stated that DOE’s GTRI program has good data regarding the location of U.S. nuclear material overseas and that this information should be reconciled with NMMSS data. However, when we requested information regarding the amount of U.S. material at partner facilities, GTRI stated that they could not report on the amount of U.S. nuclear material remaining at facilities unless it was scheduled for GTRI to return. In addition, in February 2011 written comments to us, GTRI stated it was not responsible for acquiring or maintaining inventory information regarding U.S. nuclear material overseas. A long-time contract employee for DOE’s Office of Nonproliferation and International Security stated he has tried to collect information regarding U.S. nuclear material overseas from various sources including a list of countries eligible for GTRI’s fuel return program, NMMSS, and other sources, but it is not possible to reconcile information from the various lists and sources and consequently there is no list of U.S. inventories overseas. According to public information, the United States has additional measures known as administrative arrangements with five of its trading partners to conduct annual reconciliations of nuclear material amounts. In addition, for all partners, DOE and NRC officials told us that an exchange of diplomatic notes is sent prior to any transfer to ensure that U.S. 
nuclear material is not diverted for non-peaceful purposes, and which binds the partner to comply with the terms of the nuclear cooperation agreement. However, the measures cited by DOE are not comprehensive or sufficiently detailed to provide the specific location of U.S. nuclear material overseas.

NRC and DOE could not fully account for U.S. exports of HEU in response to a congressional mandate that the agencies report on the current location and disposition of U.S. HEU overseas. In 1992, Congress mandated that NRC, in consultation with other relevant agencies, submit to Congress a report detailing the current status of previous U.S. exports of HEU, including its location, disposition (status), and how it had been used. The January 1993 report that NRC produced in response to the mandate stated it was not possible to reconcile this information from available U.S. sources of data with all foreign holders of U.S. HEU within the 90-day period specified in the act. The report further states that a thorough reconciliation of U.S. and foreign records with respect to end use could require several months of additional effort, assuming that EURATOM would agree to participate. According to DOE and NRC officials, no further update to the report was issued, and the U.S. government has not subsequently attempted to develop such a comprehensive estimate of the location and status of U.S. HEU overseas. The 1993 report provided estimated material balances based on the transfer, receipt, or other adjustments reported to the NMMSS and other U.S. agencies. The report stated that the estimated material balances should match partners’ reported inventories. However, the report did not compare the balances or explain the differences. Our analysis of other documentation associated with the report shows that NRC, in consultation with U.S. agencies, was able to verify the location of 1,160 kilograms out of an estimated 17,500 kilograms of U.S. HEU remaining overseas as of January 1993.
NRC’s estimates matched partner estimates in 22 cases; did not match partner estimates in 6 cases; and, in 8 cases, partners did not respond in time to NRC’s request. The 1993 report noted that, in cases where U.S. estimates did not match partners’ inventory reports, “reconciliation efforts are underway.” However, DOE, NRC, and NMMSS officials told us that no further report was issued. In addition, NMMSS officials told us that they were unaware of any subsequent efforts to reconcile U.S. estimates with partners’ reports, or update the January 1993 report. In addition, we found no indication that DOE, NMMSS, or NRC officials have updated the January 1993 report, or undertaken a comprehensive accounting of U.S. nuclear material overseas. We found that NMMSS does not contain the data necessary to maintain an inventory of U.S. nuclear material overseas subject to U.S. nuclear cooperation agreements. According to NRC documents, NMMSS is part of an overall program to help satisfy the United States’ accounting, controlling, and reporting obligations to IAEA and its nuclear trading partners. NMMSS, the official central repository of information on domestic inventories and exports of U.S. nuclear material, contains current and historic data on the possession, use, and shipment of nuclear material. It includes data on U.S.-supplied nuclear material transactions with other countries and international organizations, foreign contracts, import/export licenses, government-to-government approvals, and other DOE authorizations such as authorizations to retransfer U.S. nuclear material between foreign countries. DOE and NRC officials told us that NMMSS contains the best available information regarding U.S. exports and retransfers of special nuclear material. DOE and NRC do not collect data necessary for NMMSS to keep an accurate inventory of U.S. nuclear material overseas. According to NRC officials, NMMSS cannot track U.S. 
nuclear material overseas because data regarding the current location and status of U.S. nuclear material, such as irradiation, decay, burn up, or production, are not collected. NMMSS only contains data on domestic inventories and transaction receipts from imports and exports reported by domestic nuclear facilities and some retransfers reported by partners to the United States and added to the system by DOE. Therefore, while the 1995 Nuclear Proliferation Assessment Statement accompanying the U.S.-EURATOM agreement estimated 250 tons of U.S.-obligated plutonium are planned to be separated from spent power reactor fuel in Europe and Japan for use in civilian energy programs in the next 10 to 20 years, our review indicates that the United States would not be able to identify the European countries or facilities where such U.S.-obligated material is located. DOE, NRC, and State have not pursued annual inventory reconciliations of nuclear material subject to U.S. nuclear cooperation agreement terms with all partners that would provide the U.S. government with better information about where such material is held overseas. Specifically, once a nuclear cooperation agreement is concluded, U.S. government officials—generally led by DOE—and partner country officials may negotiate an administrative arrangement for an annual inventory reconciliation to exchange information regarding each country’s nuclear material accounting balances. Inventory reconciliations typically compare the countries’ data and material transfer and retransfer records, and can help account for material consumed or irradiated by reactors. Government officials from several leading nuclear material exporting and importing countries told us that they have negotiated with all their other partners to exchange annual inventory reconciliations to provide a common understanding of the amount of their special material held by another country or within their country. 
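The record-comparison step of such an annual inventory reconciliation — lining up one country's ledger of material balances against a partner's declared holdings and flagging mismatches for follow-up — can be sketched as follows. This is a hypothetical illustration, not any agency's actual process; the facility names, quantities, and tolerance value are invented for the example.

```python
# Hypothetical sketch of an inventory reconciliation: compare an exporter's
# record of material balances (by partner facility) against the partner's
# declared holdings and flag discrepancies that exceed a tolerance.

def reconcile(supplier_records: dict, partner_declared: dict, tolerance_kg: float = 0.1):
    """Return a list of (facility, supplier_kg, partner_kg) discrepancies."""
    discrepancies = []
    # Consider every facility that appears on either side's records.
    for facility in sorted(set(supplier_records) | set(partner_declared)):
        ours = supplier_records.get(facility, 0.0)
        theirs = partner_declared.get(facility, 0.0)
        if abs(ours - theirs) > tolerance_kg:
            discrepancies.append((facility, ours, theirs))
    return discrepancies

# Example: exporter's ledger vs. partner's annual declaration (made-up data).
ours = {"Facility A": 120.0, "Facility B": 45.5}
theirs = {"Facility A": 120.0, "Facility B": 40.0, "Facility C": 2.0}
for facility, s, p in reconcile(ours, theirs):
    # Flags Facility B (balances disagree) and Facility C (absent from our ledger).
    print(f"{facility}: we record {s} kg, partner declares {p} kg")
```

A facility-by-facility comparison of this kind is only possible when both sides report at the facility level, which, as discussed below, the United States does with just one partner.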
For example, Australia, which exports about 13 percent of the world’s uranium each year, conducts annual reconciliations with each of its partners, and reports annually to the Australian Parliament regarding the location and disposition of all Australian nuclear material. NRC officials told us that Australia has some of the strictest reporting requirements for its nuclear material. The United States conducts annual inventory reconciliations with five partners but does not conduct inventory reconciliations with the other partners it has transferred material to or trades with. According to DOE officials, for the five reconciliations currently conducted, NMMSS data are compared with the partner’s records and, if warranted, each country’s records are adjusted, where necessary, to reflect the current status of U.S special nuclear material. As of February 2011, the United States conducted bilateral annual exchanges of total material balances for special nuclear materials with five partners. Of these partners, the United States exchanges detailed information regarding inventories at each specific facility only with one partner. DOE officials noted that they exchange information with particular trading partners on a transactional basis during the reporting year and work with the partners at that time to resolve any potential discrepancies that may arise. In the case of EURATOM, material information is reported as the cumulative total of all 27 EURATOM members. For the purposes of nuclear cooperation with the United States, EURATOM is treated as one entity rather than its 27 constituent parts. None of the 27 EURATOM member states have bilateral nuclear cooperation agreements in force with the United States. According to a 2010 DOE presentation for NMMSS users, the difference in reporting requirements results in a 69-page report for Japan and a 1-page report for EURATOM. In addition, information exchanged with other trading partners also is not reported by facility. 
DOE and NRC officials told us that the United States may not have accurate information regarding the inventories of U.S. nuclear material held by its 21 other partners. DOE officials told us that, in addition to benefits, there were costs to pursuing facility-by-facility reconciliations and reporting. In particular, DOE officials told us they have not pursued facility-by-facility accounting in annual reconciliations with other partners because it would be difficult for the United States to supply such detailed information regarding partner material held in U.S. facilities. DOE and NRC officials told us this would also create an administrative burden for the United States. According to DOE officials, the relative burden with which the United States can perform facility-by-facility accounting by foreign trading partner varies greatly based on the amount of material in the United States that is obligated to such partners. For example, the United States can perform facility-by-facility accounting with one country, because U.S. officials told us there is not much of that country’s nuclear material in the United States. However, if the United States were to conduct facility-by-facility accounting with Australia, it would create burdensome reporting requirements. Specifically, according to DOE officials, Australia would have to report to the United States on the status of a few facilities holding U.S. nuclear material, but the United States would be required to report on hundreds of U.S. facilities holding Australian nuclear material. Without information on foreign facilities, however, it may be difficult to track U.S. nuclear materials for accounting and control purposes. DOE, NRC, and State officials told us neither U.S. law nor U.S. policy explicitly requires the United States to track U.S. special nuclear material overseas. Moreover, U.S. 
law does not require peaceful nuclear cooperation agreements to require cooperating parties to provide reports to the United States of nuclear material on a facility-by-facility basis.

A March 2002 DOE Inspector General’s audit raised concerns about the U.S. government’s ability to track sealed sources, which could contain nuclear or radioactive material. In response to the audit’s findings, NNSA’s Associate Administrator for Management and Administration wrote that “While it is a good idea to be aware of the locations and conditions of any material, it is not the current policy of the U.S. government.” Furthermore, the Associate Administrator asserted that various U.S. government agencies, including State, DOE, and NRC, would need to be involved should DOE change its policy and undertake an initiative to track the location and condition of U.S. sealed sources in foreign countries. Similarly, DOE, NRC, and State officials told us that if it became the policy of the U.S. government to track nuclear material overseas—and in particular, by facility—then requirements would have to be negotiated into the nuclear cooperation agreements or the associated administrative arrangements.

NMMSS officials told us that NMMSS is currently capable of maintaining information regarding inventories of U.S. nuclear material overseas. However, as we reported in 1982, NMMSS information is not designed to track the location (facility) or the status—such as whether the material is irradiated or unirradiated, fabricated into fuel, burned up, or reprocessed. As a result, NMMSS neither identifies where U.S. material is located overseas nor maintains a comprehensive inventory of U.S.-obligated material. In addition, NMMSS officials emphasized that this information would need to be systematically reported. According to these officials, such reporting is not done on a regular basis by other DOE offices and State.
In some instances, State receives a written notice of a material transfer at its embassies and then transmits this notice to DOE. Officials from DOE’s Office of Nonproliferation and International Security told us that, while they could attempt to account for U.S. material overseas on a case-by-case basis, obtaining the information to systematically track this material would require renegotiating the terms of nuclear cooperation agreements. DOE has recently issued proposed guidance clarifying the role of DOE offices for maintaining and controlling U.S. nuclear material. An October 2010 draft DOE order states that DOE “Manages the development and maintenance of NMMSS by: (a) collecting data relative to nuclear materials including those for which the United States has a safeguards interest both domestically and abroad; (b) processing the data; and (c) issuing reports to support the safeguards and management needs of DOE and NRC, and other government organizations, including those associated with international treaties and organizations.” However, we did not find any evidence that DOE will be able to meet those responsibilities in the current configuration of NMMSS without obtaining additional information from partners and additional and systematic data sharing among DOE offices. Nuclear cooperation agreements do not contain specific access rights that enable DOE, NRC, or State to monitor and evaluate the physical security of U.S. nuclear material overseas, and the United States relies on partners to maintain adequate security. In the absence of specific access rights, DOE, NRC, and State have jointly conducted interagency physical protection visits to monitor and evaluate the physical security of nuclear material when given permission by the partner country. However, the interagency physical protection teams have neither systematically visited countries believed to be holding Category I quantities of U.S. 
nuclear material, nor have they revisited, in a timely manner, facilities found not to meet IAEA security guidelines. DOE’s, NRC’s, and State’s ability to monitor and evaluate whether material subject to U.S. nuclear cooperation agreement terms is physically secure is contingent on partners granting access to facilities where such material is stored. Countries, including the United States, believe that the physical protection of nuclear materials is a national responsibility. This principle is reflected both in IAEA’s guidelines on the “Physical Protection of Nuclear Material and Nuclear Facilities” and in pending amendments to the Convention on the Physical Protection of Nuclear Material. Our review of section 123 of the AEA and all U.S. nuclear cooperation agreements currently in force found that they do not explicitly include a provision granting the United States access to verify the physical protection of facilities or sites holding material subject to U.S. nuclear cooperation agreement terms. However, in accordance with the AEA, as amended, all nuclear cooperation agreements, except for three negotiated prior to 1978, contain provisions requiring both partners to maintain adequate physical security over transferred material. The AEA, as amended, requires that the cooperating party guarantee that it will maintain adequate physical security for transferred nuclear material and any special nuclear material used in or produced through the use of any material, or production, or utilization facility transferred pursuant to the agreement. However, it does not require that State, in cooperation with other U.S. agencies, negotiate agreement terms that include rights of access or other measures allowing the United States to verify whether a partner is maintaining adequate physical security over U.S. material. Our review of the texts of all 27 U.S. 
nuclear cooperation agreements in force found that most of them contain a provision stating that the adequacy of physical protection measures shall be subject to review and consultations by the parties. However, none of the agreements include specific provisions stipulating that the United States has the right to verify whether a partner is adequately securing U.S. nuclear material. As a result, several DOE and State officials told us the United States’ ability to monitor and evaluate the physical security of U.S. nuclear material overseas is contingent on partners’ cooperation and access to facilities where U.S. material is stored. State, DOE, and NRC officials told us that they rely on partners to comply with IAEA’s security guidelines for physical protection. However, the guidelines, which are voluntary, do not provide for access rights for other states to verify whether physical protection measures for nuclear material are adequate. IAEA’s security guideline document states that the “responsibility for establishing and operating a comprehensive physical protection system for nuclear materials and facilities within a State rests entirely with the Government of that State.” In addition, according to the guidelines, member states should ensure that their national laws provide for the proper implementation of physical protection and verify continued compliance with physical protection regulations. For example, according to IAEA’s security guidelines, a comprehensive physical protection system to secure nuclear material should include, among other things:
• technical measures, such as vaults, perimeter barriers, intrusion sensors, and alarms;
• material control procedures; and
• adequately equipped and appropriately trained guard and emergency response forces. 
In addition, according to DOE and State officials, key international treaties, including the Convention on the Physical Protection of Nuclear Material—which calls for signatory states to provide adequate physical protection of nuclear material while in international transit—do not provide states the right to verify the adequacy of physical protection measures. A senior official from IAEA’s Office of Nuclear Security told us that physical security is a national responsibility and that governments may choose to organize their various physical security components differently, as long as the components add up to an effective regime. Despite these constraints on access, the U.S. government can take certain actions to protect U.S. nuclear material located at foreign facilities. For example, NRC licensing for the export of nuclear equipment and material is conditioned on partner maintenance of adequate physical security. NRC officials stated that, when an export license application for nuclear materials or equipment is submitted, the U.S. government seeks confirmation, in the form of peaceful use assurances, from the foreign government that the material and equipment, if exported, will be subject to the terms and conditions of that government’s nuclear cooperation agreement with the United States. In addition, NRC officials stated that this government-to-government reconfirmation of the terms and conditions of the agreement meets the “letter and spirit” of the AEA and Nuclear Non-Proliferation Act of 1978 (NNPA) and underscores that the partner is aware of and accepts the terms and conditions of the agreement. NRC officials also noted that the NNPA amendments to the AEA were designed and intended to encourage foreign governments to agree to U.S. nonproliferation criteria in exchange for nuclear commodities. However, the AEA does not empower the U.S. 
government through inspections or other means to enforce foreign government compliance with nuclear cooperation agreements once U.S. nuclear commodities are in a foreign country. Importantly, according to NRC, the onus is on the receiving country, as a matter of sovereign right and responsibility and consistent with its national laws and international commitments, to adequately secure the nuclear material. According to DOE and State, as well as foreign government officials, the United States and the partner share a strong common interest in deterring and preventing the misuse of nuclear material, as well as an interest in maintaining the rights afforded to sovereign countries. The partner’s interest in applying adequate security measures, for instance, is particularly strong because the nuclear material is located within its territory. Moreover, specific physical security needs may often depend on unique circumstances and sensitive intelligence information known only to the partner. In addition, the AEA requires that U.S. nuclear cooperation agreements with non-nuclear weapon states contain a stipulation that the United States shall have the right to require the return of certain nuclear material, as well as equipment, should the partner detonate a nuclear device or terminate or abrogate its safeguards agreements with IAEA. However, DOE, NRC, and State officials told us that the U.S. government has never exercised the “right to require return” provisions in its nuclear cooperation agreements. In addition, the United States typically includes “fall-back safeguards”—contingency plans for the application of alternative safeguards should IAEA safeguards become inapplicable for any other reason. DOE and State officials told us, however, that the United States has not exercised its fall-back safeguards provisions, because the United States has not identified a situation where IAEA was unable to perform its safeguards duties. U.S. 
agencies have, over time, made arrangements with partners to visit certain facilities where U.S. nuclear material is stored. As we reported in August 1982 and in December 1994, U.S. interagency physical protection teams visit partner country facilities to monitor and evaluate whether the physical protection provided to U.S. nuclear material meets IAEA physical security guidelines. In 1974, DOE’s predecessor, the Energy Research and Development Administration, began leading teams composed of State, NRC, and DOE national laboratory officials to review the partner’s legal and regulatory basis for physical protection and to ensure that U.S. nuclear material was adequately protected. In 1988, the Department of Defense’s Defense Threat Reduction Agency began to participate in these visits, and officials from other agencies and offices, such as GTRI, have participated. The visits have generally focused on research reactors containing HEU but have also included assessments, when partners voluntarily grant access, of other facilities’ physical security, including nuclear power plants, reprocessing facilities, and research and development facilities containing U.S. nuclear material. According to DOE documents and DOE, NRC, and State officials, the primary factors for selecting countries for visits are the type, quantity, and form of nuclear material, with priority given to countries with U.S. HEU or plutonium in Category I amounts. In addition, in 1987, NRC recommended that countries possessing U.S. Category I nuclear material be revisited at least every 5 years. DOE and NRC officials told us this has become an official goal for prioritizing visits. According to DOE, interagency physical protection visits are also made whenever the country has had or expects to have a significant change in its U.S. nuclear material inventory, along with other factors, such as previous findings that physical protection was not adequate. These criteria and other factors are used to help U.S. 
agencies prioritize visits on a countrywide basis and also supplement other information that is known about a partner’s physical protection system and the current threat environment. Moreover, while the U.S. physical protection program assesses physical security conditions on a site-specific basis, NRC’s regulations permit the determination of adequacy of foreign physical protection systems on a countrywide basis. Therefore, DOE, NRC, and State officials told us that the results of the interagency physical protection visits, combined with other sources of information such as country threat assessments, are used as a measure of the physical security system countrywide. The U.S. teams visit certain facilities where U.S. nuclear material is used or stored to observe physical protection measures after discussing the relevant nuclear security regulatory framework with the partner government. DOE and State officials told us these physical protection visits help U.S. officials develop relationships with partner officials, share best practices and, in some cases, recommend physical security improvements. We visited four facilities that hold U.S.-obligated nuclear material. The partner officials and facility operators we met shared their observations regarding the U.S. physical protection visits. Representatives from one site characterized a recent interagency physical protection visit as a “tour.” These officials told us the U.S. government officials had shared some high-level observations regarding their visit with government officials and nuclear reactor site operators but did not provide the government or site operators with written observations or recommendations. On the other hand, government officials from another country we visited told us that a recent interagency physical protection visit had resulted in a useful and detailed exchange of information about physical security procedures. 
These government officials told us they had learned “quite a lot” from the interagency physical protection visit and that they hoped the dialogue would continue, since security could always be improved. In February 2011, DOE officials told us they had begun to distribute the briefing slides they use at the conclusion of a physical protection visit to foreign officials. State officials told us that the briefings are considered government-to-government activities, and it is the partner government’s choice on whether to include facility operators in the briefings. In addition, we reviewed U.S. agencies’ records of these and other physical protection visits and found that, over the 17-year period from 1994 through 2010, U.S. interagency physical protection teams made 55 visits. Of the 55 visits, interagency physical protection teams found the sites met IAEA security guidelines on 27 visits, did not meet IAEA security guidelines on 21 visits, and the results of 7 visits are unknown because the physical protection team was unable to assess the sites, or agency documentation was missing. According to DOE, State, and NRC officials, the visits are used to encourage security improvements by the partner. For example, based on the circumstances of one particular facility visited in the last 5 years, the physical protection team made several recommendations to improve security, including installing (1) fences around the site’s perimeter, (2) sensors between fences, (3) video assessment systems for those sensors, and (4) vehicle barriers. According to DOE officials, these observations were taken seriously by the country, which subsequently made the improvements. When we visited the site as part of our review, government officials from that country told us the U.S. interagency team had provided useful advice and, as a result, the government had approved a new physical protection plan. These government officials characterized their interactions with DOE and other U.S. 
agency officials as positive and told us that the government’s new physical protection plan had been partly implemented. Moreover, although we were not granted access to the building, we observed several physical protection upgrades already implemented or in progress, including: (1) the stationing of an armed guard outside the facility holding U.S. Category I material; (2) ongoing construction of a 12-foot perimeter fence around the facility; and (3) construction of a fence equipped with barbed wire and motion detectors around the entire research complex. We were also told that, among other things, remote monitoring equipment had been installed in key areas in response to the interagency visit. The Central Alarm Station was hardened, and the entrance to the complex was controlled by turnstiles and a specially issued badge, which entrants received after supplying a passport or other government-issued identification. Private automobiles were not allowed in the facility. Not all U.S. physical protection visits proceed smoothly. In some cases, U.S. agencies have attempted repeatedly to convince partner officials of the seriousness of meeting IAEA security guidelines and to fund improvements. For example, a U.S. interagency physical protection team in the early 2000s found numerous security problems at a certain country’s research reactor. The site supervisor objected to the interagency team’s assessment, asserting that physical security was a matter of national sovereignty and that IAEA security guidelines were subject to interpretation. The site supervisor also objected to some of the U.S. team’s recommendations. In some instances, under U.S. pressure, countries have agreed to make necessary improvements with DOE technical and material assistance. Our review of agency records indicates that, in recent years, as the number of countries relying on U.S. HEU to fuel research reactors has continued to decline, U.S. 
agencies have succeeded in using a partner’s pending export license for U.S. HEU or expected change in inventory of U.S. special nuclear material as leverage for a U.S. interagency physical protection visit. For example, we identified two cases since 2000 where a partner country applied for a license to transfer U.S. HEU, and a U.S. interagency team subsequently visited those two sites. In addition, we identified a recent situation where a partner country’s inventory of U.S. plutonium at a certain site was expected to significantly increase, and a U.S. interagency team visited the site to determine whether the site could adequately protect these additional inventories. According to DOE officials, requests for U.S. low enriched uranium (LEU) export licenses have increased in recent years. In response, DOE officials told us that U.S. agencies have begun to prioritize visits to countries making such requests, and our review of agency documentation corroborates this. For example, physical protection visit records we reviewed state that recent interagency physical protection visits were made to two sites to evaluate the facilities’ physical security in advance of pending U.S. LEU license applications. In addition, a DOE contractor and State official told us that a U.S. team planned to visit another partner country site in late 2011 in order to verify the adequacy of physical protection for U.S.-obligated LEU. DOE, NRC, and State do not have a formal process for coordinating and prioritizing U.S. interagency physical protection visits. In particular, DOE, which has the technical lead and is the agency lead on most visits, has neither (1) worked with NRC and State to establish a plan and prioritize interagency physical protection visits, nor (2) measured performance in a systematic way. Specifically:
• Establishing a plan and prioritizing and coordinating efforts. U.S. agencies have not established a formal plan for which countries or facilities to visit, nor have they formalized goals for the monitoring and evaluation activities. In October 2009, DOE reported to us that it had formulated a list of countries that contained U.S. nuclear material and were priorities for U.S. teams to visit. However, in a subsequent written communication to us, a senior DOE official stated that DOE had not yet discussed this list with State, NRC, or other agency officials. As a result, the list of countries had not been properly vetted at that time and did not represent an interagency agreed-upon list. In February 2011, DOE officials told us that U.S. agencies will be considering a revised methodology for prioritizing physical protection visits. NRC officials told us they thought the interagency coordination and prioritization of the visit process could be improved. A State official, who regularly participates in the U.S. physical protection visits, told us that interagency coordination had improved in the past 6 months, in response to a recognized need by U.S. agencies to be prepared for an expected increase in requests for exports of U.S. LEU.
• Measuring performance. The agencies have not developed performance metrics to gauge progress in achieving stated goals related to physical protection visits. Specifically, DOE, NRC, and State have not performed an analysis to determine whether the stated interagency goal of visiting countries containing U.S. Category I nuclear material within 5 years has been met. In addition, although DOE has stated U.S. physical protection teams revisit sites whenever there is an indication that security does not meet IAEA security guidelines, DOE has not quantified its efforts in a meaningful way. In response to our questions about metrics, DOE officials stated that there is no U.S. law regarding the frequency of visits or revisits and that the agency’s internal goals are not requirements. 
These officials told us that DOE, NRC, and State recognize that the “number one goal” is to ensure the physical security of U.S. nuclear material abroad. DOE officials stated that the best measure of the U.S. physical protection visits’ effectiveness is that there has not been a theft of U.S. nuclear material from a foreign facility since the 1970s, when two LEU fuel rods were stolen from a certain country. However, officials reported to us that, in 1990, the facility was determined to be well below IAEA security guidelines. Our review of DOE documentation shows that other U.S. LEU transferred to the facility remains at the site. In July 2011, in conjunction with the classification review for this report, DOE officials stated that while DOE, NRC, and State work together on coordinating U.S. government positions regarding priorities and procedures for the interagency physical protection program, no updated document exists that formalizes the process for planning, coordinating, and prioritizing U.S. interagency physical protection visits. We note that the documents that DOE refers to are internal DOE documents presented to us in 2008 and 2009 in response to questions regarding nuclear cooperation agreements. These documents are not an interagency agreed-upon document, but reflect DOE’s views on determining which countries and facilities interagency physical protection teams should visit. Further, DOE officials in July 2011 stated that DOE, NRC, and State do not have an agreed-upon way to measure performance in a systematic way, and that while the goals for the monitoring and evaluation activities have not yet been formalized through necessary updated documents, a prioritized list of countries to visit does exist. These officials noted that the U.S. government is working to update its planning documents and is examining its methodology for prioritizing physical protection visits. Any changes will be included in these updated documents. DOE and U.S. 
agencies’ activities for prioritizing and coordinating U.S. interagency physical protection visits and measuring performance do not meet our best practices for agency performance or DOE’s standards for internal control. We have reported that defining the mission and desired outcomes, measuring performance, and using performance information to identify performance gaps are critical if agencies are to be accountable for achieving intended results. In addition, DOE’s own standards for internal control call for “processes for planning, organizing, directing, and controlling operations designed to reasonably assure that programs achieve intended results… and decisions are based on reliable data.” However, DOE, NRC, and State have neither established a plan nor measured performance to determine whether they are meeting internal goals and whether U.S. agencies’ activities are systematic. U.S. agencies have not systematically evaluated the security of foreign facilities holding U.S. nuclear material in two key ways. First, U.S. interagency physical protection teams have not systematically visited countries holding Category I quantities of U.S. nuclear material. Second, interagency teams have not revisited sites that did not meet IAEA security guidelines in a timely manner. U.S. interagency physical protection teams have not systematically visited countries believed to be holding Category I quantities of U.S. special nuclear material at least once every 5 years—a key programmatic goal. In a December 2008 document, DOE officials noted that, in 1987, NRC recommended that countries possessing Category I nuclear material be revisited at least once every 5 years. This recommendation was adopted as a goal for determining the frequency of follow-on visits. In addition, DOE, NRC, and State officials told us that they aim to conduct physical protection visits at each country holding Category I quantities of U.S. nuclear material at least once every 5 years. We evaluated U.S. 
agencies’ performance at meeting this goal by reviewing records of U.S. physical protection visits and other information. We found that the United States had met this goal with respect to two countries by conducting physical protection visits at least once every 5 years since 1987 while they held Category I quantities of U.S. nuclear material. However, we estimated that 21 countries held Category I amounts of U.S. nuclear material during the period from 1987 through 2010 but were not visited once every 5 years while they held such quantities of U.S. nuclear material. In addition, U.S. interagency physical protection teams have not visited all partner facilities believed to contain Category I quantities of U.S. special nuclear material to determine whether the security measures in place meet IAEA security guidelines. Specifically, we reviewed physical protection visit records and NMMSS data and identified 12 facilities that NMMSS records indicate received Category I quantities of U.S. HEU that interagency physical protection teams have never visited. We identified four additional facilities that GTRI officials told us currently hold, and will continue to hold, Category I quantities of U.S. special nuclear material for which there is no acceptable disposition path in the United States. In addition, these facilities have not been visited by a U.S. interagency physical protection team, according to our review of available documentation. Moreover, U.S. interagency physical protection teams have not systematically visited partner storage facilities for U.S. nuclear material. The AEA, as amended, requires that U.S. nuclear cooperation agreements contain a stipulation giving the United States approval rights over any storage facility containing U.S. unirradiated or “separated” plutonium or HEU. DOE and NRC officials told us there is no list of such storage facilities besides those listed in a U.S. nuclear cooperation agreement with a certain partner. 
They stated—and our review of available documents corroborated—that a number of the U.S. physical protection visits have included assessments of overseas storage sites for U.S. nuclear material, since such sites are often collocated with research reactors. However, our review also found two instances where partner storage areas containing U.S. HEU or separated plutonium did not meet IAEA guidelines or were identified as potentially vulnerable. DOE and U.S. agencies do not have a systematic process to revisit or monitor security improvements at facilities that do not meet IAEA security guidelines. Based on our analysis of available documentation, we found that, since 1994, U.S. interagency physical protection teams determined that partner country sites did not meet IAEA security guidelines on 21 visits. We then examined how long it took for a U.S. team to revisit the sites that did not meet IAEA security guidelines and found that, in 13 of 21 cases, U.S. interagency teams took 5 years or longer to revisit the facilities. According to DOE, NRC, and State officials, the interagency physical protection visits are not the only way to determine whether partner facilities are meeting IAEA security guidelines. For example, the United States is able to rely on information provided by other visits and U.S. embassy staff to monitor physical security practices. These visits include DOE-only trips and trips by DOE national laboratory staff and NRC physical protection experts who worked with the host country to improve physical security at the sites. NRC officials also stated that, in some cases, the partner’s corrective actions at the site are verified by U.S. officials stationed in the country, and a repeat physical protection visit is not always required. IAEA officials told us that U.S. technical experts often participate in voluntary IAEA physical security assessments at IAEA member states’ facilities. 
Specifically, IAEA created the International Physical Protection Advisory Service (IPPAS) to assist IAEA member states in strengthening their national security regime. At the request of a member state, IAEA assembles a team of international experts who assess the member state’s system of physical protection in accordance with IAEA security guidelines. As of December 2010, 49 IPPAS missions spanning about 30 countries had been completed. DOE has taken steps to improve security at a number of facilities overseas that hold U.S. nuclear material. DOE’s GTRI program removes nuclear material from vulnerable facilities overseas and has achieved a number of successes. However, DOE faces a number of constraints. Specifically, GTRI can only bring certain types of nuclear material back to the United States that have an approved disposition pathway and meet the program’s eligibility criteria. In addition, obtaining access to the partner facilities to make physical security improvements may be difficult. There are a few countries that are special cases where the likelihood of returning the U.S. nuclear material to the United States is considered doubtful. DOE’s Office of Nonproliferation and International Security and GTRI officials told us that when a foreign facility with U.S.-obligated nuclear material does not meet IAEA security guidelines, the U.S. government’s first response is to work with the partner country to encourage physical security improvements. In addition, the GTRI program was established in 2004 to identify, secure, and remove vulnerable nuclear material at civilian sites around the world and to provide physical protection upgrades at nuclear facilities that are (1) outside the former Soviet Union, (2) in non-weapon states, and (3) not in high-income countries. According to GTRI officials, the U.S. 
government’s strategy for working with partner countries to improve physical security includes: (1) encouraging high-income countries to fund their own physical protection upgrades with recommendations by the U.S. government and (2) working with other-than-high-income countries to provide technical expertise and funding to implement physical protection upgrades. If the material is excess to the country’s needs and can be returned to the United States under an approved disposition pathway, GTRI will work with the country to repatriate the material. According to GTRI officials, GTRI was originally authorized to remove to the United States, under its U.S. fuel return program, only U.S.-obligated fresh and spent HEU in Material Test Reactor fuel and Training Research Isotope General Atomics (TRIGA) fuel rod forms. According to GTRI officials, GTRI has also obtained the authorization to return additional forms of U.S. fresh and spent HEU, as well as U.S. plutonium from foreign countries, so long as there is no alternative disposition path. The material must (1) pose a threat to national security, (2) be usable for an improvised nuclear device, (3) present a high risk of terrorist theft, and (4) meet U.S. acceptance criteria. To date, GTRI has removed more than 1,240 kilograms of U.S. HEU from Australia, Argentina, Austria, Belgium, Brazil, Canada, Chile, Colombia, Denmark, Germany, Greece, Japan, the Netherlands, Philippines, Portugal, Romania, Slovenia, South Korea, Spain, Sweden, Switzerland, Taiwan, Thailand, and Turkey. It has also performed security upgrades at reactors containing U.S. nuclear material that were not meeting IAEA security guidelines in 10 partner countries. As we reported in September 2009, GTRI has improved the security of research reactors, and GTRI officials told us in April 2011 that they plan to continue to engage other countries to upgrade security. 
In a separate report published in December 2010, we noted that GTRI has assisted in the conversion from the use of HEU to LEU or verified the shutdown of 72 HEU research reactors around the world, 52 of which previously used U.S. HEU. GTRI prioritizes its schedule for upgrading the security of research reactors and removing nuclear material based on the amount and type of nuclear material at the reactor and other threat factors, such as the vulnerability of facilities, country-level threat, and proximity to strategic assets. Our review identified several situations where GTRI or its predecessor program removed vulnerable U.S. nuclear material. Notwithstanding these successes, the GTRI program has some limitations. GTRI cannot remove all potentially vulnerable nuclear material worldwide because the program’s scope is limited to only certain types of material that meet the eligibility criteria. GTRI officials told us that, of the approximately 17,500 kilograms of HEU it estimates was exported from the United States, the majority—12,400 kilograms—is currently not eligible for return to the United States. According to GTRI officials, over 10,000 kilograms is contained in fuels from “special purpose” reactors that are not included in GTRI’s nuclear material return program because they were not traditional aluminum-based fuels, TRIGA fuels, or target material. As a result, this material does not have an acceptable disposition pathway in the United States, according to GTRI officials. GTRI officials stated that these reactors are in Germany, France, and Japan, and that the material has been deemed to be adequately protected. GTRI reported that the other approximately 2,000 kilograms of transferred U.S. nuclear material is located primarily in EURATOM member countries and is either currently in use or adequately protected. 
In addition, the potential vulnerability of nuclear material at certain high-income facilities was raised to us by officials at the National Security Council (NSC)—the President’s principal forum for considering national security and foreign policy matters—and included in a prior report. Specifically, we reported that there may be security vulnerabilities in certain high-income countries, including three specific high-income countries named by the NSC officials. For sites in these countries, GTRI officials told us the U.S. government’s strategy is to work bilaterally with the countries, provide recommendations to improve physical protection, and follow up as needed. Our analysis of available agency physical protection visit documents also raises concerns regarding the physical security conditions in these countries, including facilities that did not meet IAEA security guidelines and instances in which interagency physical protection teams were unable to gain access. DOE also works with countries to remove material if it is in excess of the country’s needs and meets DOE acceptance criteria. The ability of DOE to return U.S. nuclear material depends, however, on the willingness of the foreign country to cooperate. As we reported in September 2009, because GTRI’s program for physical security upgrades and nuclear material returns is voluntary, DOE faces some challenges in obtaining consistent and timely cooperation from other countries to address security weaknesses. Our report further noted that DOE has experienced situations where a foreign government has refused its assistance to make security upgrades. For example, we reported that one country had refused offers of DOE physical security upgrades at a research reactor for 9 years. However, this situation was subsequently resolved when all HEU was removed from this country, according to GTRI officials. 
In addition, we reported that DOE had experienced two other situations where the partner country would not accept security assistance until agreements with the United States were reached on other issues related to nuclear energy and security. Several countries that have U.S. nuclear material are particularly problematic and represent special cases. Specifically, U.S. nuclear material has remained at sites in three countries where physical protection measures are unknown or that have not been visited by an interagency physical protection team in decades. GTRI recently removed a large quantity of U.S. spent HEU from one of these countries. According to NRC and State officials, U.S. transfers to these three countries were made prior to 1978, when the physical protection requirements were added to the AEA. Therefore, these countries have not made the same commitments regarding physical security of U.S.-transferred material. Finally, we identified another country that poses special challenges. All U.S.-obligated HEU has been removed from this country, which was one of the GTRI program’s highest priorities. Previous U.S. interagency physical protection visits found that a site in this country did not meet IAEA security guidelines. The world today is dramatically different than when most U.S. nuclear cooperation agreements were negotiated. Many new threats have emerged, and nuclear proliferation risks have increased significantly. We recognize that the United States and its partners share a strong common interest in deterring and preventing the misuse of U.S. nuclear material—or any nuclear material—and that flexibility in the agreements is necessary to forge strong and cooperative working relationships with our partners. The fundamental question, in our view, is whether nuclear cooperation agreements and their legislative underpinnings need to be reassessed given the weaknesses in inventory management and physical security that we identified. 
Specifically, we found these agreements may not be sufficiently robust in two areas—inventories and physical security. Without an accurate inventory of U.S. nuclear materials—in particular, weapon-usable HEU and separated plutonium—the United States does not have sufficient assurances regarding the location of materials. As a result, the United States may not be able to monitor whether the partner country is appropriately notifying the United States and whether the United States is appropriately and fully exercising its rights of approval regarding the transfer, retransfer, enrichment, and reprocessing and, in some cases, storage of nuclear materials subject to the agreement terms. NRC and multiple offices within DOE could not provide us with an authoritative list of the amount, location, and disposition of U.S. HEU or separated plutonium overseas. We are particularly concerned that NRC and DOE could not account, in response to a 1992 mandate by Congress, for the location and disposition of U.S. nuclear material overseas—and that they have not developed such an inventory in the almost two decades since that mandate. We recognize that physical security is a national responsibility. We also recognize that neither the AEA, as amended, nor the U.S. nuclear cooperation agreements in force require that State negotiate new or renewed nuclear cooperation agreement terms that include specific access rights for the United States to verify whether a partner is maintaining adequate physical security of U.S. nuclear material. Without such rights, it may be difficult for the United States to gain access to critical facilities overseas—especially those believed to be holding weapon-usable materials—to better ensure that U.S. material is in fact adequately protected while the material remains in the partner’s custody. We note that the agreements are reciprocal, with both parties generally agreeing to all conditions specified in them. 
We acknowledge that any change to the nuclear cooperation framework or authorizing legislation will be very sensitive. Careful consideration should be given to the impact of any reciprocity clauses on U.S. national security when negotiating or reviewing these agreements. However, it may be possible to do so in a way that includes greater access to critical facilities where weapon-usable U.S. nuclear material is stored, without infringing on the sovereign rights of our partners or hampering the ability of the U.S. nuclear industry to remain competitive. In the course of our work, we identified several weaknesses in DOE, NRC, and State’s efforts to develop and manage activities that ensure that U.S. nuclear cooperation agreements are properly implemented. Specifically, the lack of a baseline inventory of U.S. nuclear materials—in particular, weapon-usable materials—and annual inventory reconciliations with all partners limits the ability of the U.S. government to identify where the material is located. Currently, annual reconciliations are undertaken with five partners. However, with the exception of one country, the information is aggregated and not provided on a facility-by-facility basis. Without such information on facilities, it may be difficult to track U.S. material for accounting and control purposes. No annual reconciliations currently exist for the other partners to which the United States has transferred material or with which it trades. The NMMSS database could be the official central repository of data regarding U.S. inventories of nuclear material overseas if DOE and NRC are able to collect better data. We are concerned that DOE has not worked with NRC and State to develop a systematic process for monitoring and evaluating the physical security of U.S. nuclear material overseas, including determining which foreign facilities to visit for future physical protection visits. In particular, U.S. 
interagency physical protection teams have not met a key programmatic goal of visiting countries containing Category I quantities of U.S. special nuclear material every 5 years, have not visited all partner facilities believed to be holding Category I quantities of U.S. nuclear material, and have not revisited in a timely manner facilities that were found not to meet IAEA security guidelines. Moreover, relying on reported thefts of U.S. nuclear material as a gauge of security is not the best measure of program effectiveness when accounting processes for inventory of U.S. material at foreign facilities are limited. Improving the U.S. government’s management of nuclear cooperation agreements could contribute to the administration achieving its goal of securing all vulnerable nuclear material worldwide in 4 years.
 Congress may wish to consider directing DOE and NRC to complete a full accounting of U.S. weapon-usable nuclear materials—in particular, HEU and separated plutonium—with its nuclear cooperation agreement partners and other countries that may possess such U.S. nuclear material. In addition, Congress may wish to consider amending the AEA if State, working with other U.S. agencies, does not include enhanced measures regarding physical protection access rights in future agreements and renewed agreements, so that U.S. interagency physical protection teams may obtain access when necessary to verify that U.S. nuclear materials have adequate physical protection. The amendment could provide that the U.S. government may not enter into nuclear cooperation agreements unless such agreements contain provisions allowing the United States to verify that adequate physical security is exercised over nuclear material subject to the terms of these agreements.
We are making seven recommendations to enable agencies to better account for, and ensure the physical protection of, U.S. nuclear material overseas. To help federal agencies better understand where U.S. 
nuclear material is currently located overseas, we recommend that the Secretary of State, working with the Secretary of Energy and the Chairman of the Nuclear Regulatory Commission, take the following four actions to strengthen controls over U.S. nuclear material subject to these agreements:
 determine, for those partners to which the United States has transferred material but with which it does not have annual inventory reconciliation, a baseline inventory of weapon-usable U.S. nuclear material, and establish a process for conducting annual reconciliations of inventories of nuclear material on a facility-by-facility basis;
 establish, for those partners with which the United States has an annual inventory reconciliation, reporting on a facility-by-facility basis for weapon-usable material where possible;
 facilitate visits to sites believed to be holding U.S. Category I nuclear material that U.S. physical protection teams have not visited; and
 seek to include measures that provide for physical protection access rights in new or renewed nuclear cooperation agreements so that U.S. interagency physical protection teams may in the future obtain access when necessary to verify that U.S. nuclear materials are adequately protected. Careful consideration should be given to the impact of any reciprocity clauses on U.S. national security when negotiating or reviewing these agreements.
In addition, we recommend that the Secretary of Energy, working with the Secretary of State and the Chairman of the Nuclear Regulatory Commission, take the following three actions:
 develop an official central repository to maintain data regarding U.S. inventories of nuclear material overseas. This repository could be the NMMSS database or, if the U.S. agencies so determine, some other official database;
 develop formal goals for and a systematic process to determine which foreign facilities to visit for future interagency physical protection visits. 
The goals and process should be formalized and agreed to by all relevant agencies; and
 periodically review performance in meeting key programmatic goals for the physical protection program, including determining which countries containing Category I U.S. nuclear material have been visited within the last 5 years, as well as determining whether partner facilities previously found not to meet IAEA security guidelines were revisited in a timely manner.
We provided a draft of this report to the Secretaries of Energy and State, and the Chairman of the NRC for their review and comment. Each agency provided written comments on the draft report, which are presented in appendixes IV, VI, and V, respectively. All three agencies generally disagreed with our conclusions and recommendations. DOE, NRC, and State disagreed with GAO in three general areas of the report. Specifically, all the agencies (1) disagreed with our recommendations to establish annual inventory reconciliations with all trading partners and establish a system to comprehensively track and account for U.S. nuclear material overseas, because the agencies believe this is impractical and unwarranted; (2) maintained that IAEA safeguards are sufficient or an important tool to account for U.S. nuclear material overseas; and (3) asserted that any requirement in future nuclear cooperation agreements calling for enhanced physical protection access rights is unnecessary and could hamper sensitive relationships. With regard to the three general areas of disagreement, our response is as follows:
 DOE, NRC, and State assert that it is not necessary to implement GAO’s recommendation that agencies undertake an annual inventory reconciliation and report on a facility-by-facility basis for weapon-usable material where possible for all countries that hold U.S.-obligated nuclear material. We stand by this recommendation for numerous reasons. First, as stated in the report, we found—and none of the agencies refuted—that the U.S. 
government does not have an inventory of U.S. nuclear material overseas and, in particular, is not able to identify where weapon-usable materials such as HEU and separated plutonium that can be used for a nuclear weapon may reside. In fact, NRC commented that “inventory knowledge is very important for high-consequence materials, e.g., high enriched uranium and separated plutonium.” Because DOE, NRC, and State do not have comprehensive knowledge of where U.S.-obligated material is located at foreign facilities, it is unknown whether the United States is appropriately and fully exercising its rights of approval regarding the transfer, retransfer, enrichment, and reprocessing and, in some cases, storage of nuclear materials subject to the agreements’ terms. In addition, the lack of inventory information hampers U.S. agencies in identifying priorities for interagency physical protection visits. We are particularly concerned that NRC and DOE, in response to a 1992 mandate by Congress, could only account for the location and disposition of about 1,160 kilograms out of an estimated 17,500 kilograms of U.S.-exported HEU. Furthermore, the agencies have not developed such an inventory or performed an additional comprehensive review in the almost two decades since that mandate. We believe it is important that DOE, NRC, and State pursue all means possible to better identify where U.S.-obligated material is located overseas—and for weapon-usable HEU and separated plutonium, seek to do so on a facility-by-facility basis. Annual inventory reconciliations with all partners provide one way to do that. The United States has demonstrated it has the ability to conduct such exchanges, which none of the agencies disputed. Our report notes that the United States conducts annual inventory reconciliations with five partners, including one where facility-level information is annually exchanged. 
We believe the recent signing of nuclear cooperation agreements with India and Russia, as well as the situation where current partners whose agreements are set to expire in coming years must be renegotiated—including Peru and South Korea—provide a convenient and timely opportunity for DOE, NRC, and State to pursue such enhanced material accountancy measures.  DOE, NRC, and State commented that IAEA’s comprehensive safeguards program is another tool to maintain the knowledge of locations of nuclear material in a country, including U.S.-obligated material, and that IAEA inspection, surveillance, and reporting processes are effective tools for material tracking and accounting. We agree that IAEA safeguards are an important nuclear nonproliferation mechanism. However, our report found IAEA’s safeguards have a limited ability to identify, track, and account for U.S.-obligated material. Specifically, as our report notes, and as confirmed to us by senior IAEA officials, IAEA does not track the obligation of the nuclear material under safeguards and, therefore, IAEA may not have the ability to identify whether and what volume of nuclear material at partner country facilities is U.S.-obligated and subject to the terms of U.S. nuclear cooperation agreements. In addition, our report notes that IAEA considers member country nuclear material inventory information confidential and does not share it with its member countries, including the United States. Therefore, IAEA has a limited ability to account for nuclear material subject to the terms of U.S. nuclear cooperation agreements. Importantly, safeguards are not a substitute for physical security and serve a different function. As our report notes, safeguards are primarily a way to detect diversion of nuclear material from peaceful to military purposes but do not ensure that facilities are physically secure to prevent theft or sabotage of such material. 
 DOE, NRC, and State disagreed with our recommendation that State, working with DOE and NRC, should seek to negotiate terms that include enhanced measures regarding physical protection access rights in future and renewed agreements. They also raised concerns with our Matter for Congressional Consideration to amend the AEA should State not implement our recommendation. We do not agree with the agencies’ comments that our recommendation that agencies “seek to include” such measures is impractical. As we note in our report, an enhanced measure for access rights is in place in the recently negotiated U.S.-India arrangements and procedures document. Further, while partner countries pledge at the outset of an agreement that they will physically protect U.S.-obligated material, the results of our work show that they have not always adequately done so. Specifically, our report noted that, of the 55 interagency physical protection visits made from 1994 through 2010, interagency teams found that countries met IAEA security guidelines on only 27 visits; did not meet IAEA security guidelines on 21 visits; and the results of 7 visits are unknown because the U.S. team was unable to assess the sites or agency documentation of the physical protection visits was missing. In addition, we identified 12 facilities believed to have, or to have previously had, Category I U.S. nuclear material that have not been visited by an interagency physical protection team. We agree with the agencies’ comments that the licensing process for U.S. nuclear material offers some assurances that physical security will be maintained and that an exchange of diplomatic notes at the time of a transfer is designed to ensure the partners maintain the material according to the terms of the agreements. 
However, these measures are implemented at the time of licensing or material transfer, and insight into the physical security arrangements for the nuclear material over the longer-term, often 30-year, duration of these agreements is by no means guaranteed. Ensuring that the United States has the tools it needs to visit facilities in the future—even after an initial transfer of material is made per a conditional export license—is important to supporting U.S. nuclear nonproliferation objectives. We continue to believe that our recommendation and Matter for Congressional Consideration are consistent with the report’s findings and would enhance the security of U.S.-obligated nuclear material in other countries. In addition, DOE and NRC commented that (1) our report contained errors in fact and judgment, (2) our report’s recommendations could result in foreign partners requiring reciprocal access rights to U.S. facilities that contain nuclear material that they transferred to the United States, which could have national security implications, and (3) our recommendation that agencies establish a process for conducting annual reconciliations of inventories of nuclear material and develop a repository to maintain data regarding U.S. inventories of nuclear material overseas would be costly to implement. Our response to these comments is as follows:
 None of the agencies’ comments caused us to change any factual statement we made in the report. DOE provided a limited number of technical comments, which we incorporated as appropriate. Importantly, some of the facts that the agencies did not dispute included (1) our analysis that found U.S. agencies made only a single attempt to comprehensively account for transferred U.S. HEU almost 20 years ago and, at that time, were only able to verify the amount and location of less than one-tenth of transferred U.S. HEU; and (2) partner countries did not meet IAEA physical security guidelines for protecting U.S. 
nuclear material in about half of the cases we reviewed from 1994 through 2010. In our view, these security weaknesses place U.S.-obligated nuclear material at risk and raise potential proliferation concerns. These agreements for nuclear cooperation are long-term in scope and are often in force for 30 years or more. As we noted in our report, the world today is dramatically different than the time when most of the agreements were negotiated. New threats have emerged, and nuclear proliferation risks have increased significantly. NRC commented that countries may not want to change the “status quo” as it pertains to nuclear cooperation agreement terms, including those regarding the physical protection of U.S.-obligated nuclear material. In our view, the status quo, or business-as-usual, approach should not apply to matters related to the security of U.S.-obligated nuclear material located at partner facilities throughout the world. Moreover, implementing a more robust security regime is consistent with and complements the administration’s goal of securing all vulnerable nuclear material worldwide within a 4-year period.
 DOE and NRC’s comment that the United States may be asked to demonstrate reciprocity by nuclear cooperation agreement partners to verify that adequate physical protection is being provided to their nuclear material while in U.S. custody has merit and needs to be taken into consideration when developing or reviewing nuclear cooperation agreements. As a result, we added language to the conclusions and recommendation sections to additionally state that “careful consideration should be given to the impact of any reciprocity clauses on U.S. national security when negotiating or reviewing these agreements.” In addition, DOE and NRC commented that we are suggesting a costly new effort in recommending that agencies account for and track U.S.-obligated nuclear material overseas. 
However, we noted in our report that NMMSS officials told us that NMMSS is currently capable of maintaining information regarding inventories of U.S. nuclear material overseas. Moreover, DOE and NRC did not conduct an analysis to support their assertion that such a system would be costly. Although we did not perform a cost-benefit analysis, based on our conversations with NMMSS staff and the lack of any DOE cost-benefit analysis to the contrary, there is no evidence to suggest that adding additional information to the NMMSS database would necessarily entail significant incremental costs or administrative overhead. We are sensitive to suggesting or recommending new requirements on federal agencies that may impose additional costs. However, it is important to note that the U.S. government has already spent billions of dollars to secure nuclear materials overseas, as well as on radiation detection equipment to detect potentially smuggled nuclear material at our borders and the border crossings of other countries. The administration intends to spend hundreds of millions more to support the president’s 4-year goal to secure all vulnerable nuclear material worldwide. If necessary, an expenditure of some resources to account for U.S. nuclear material overseas is worthy of consideration. We stand by our recommendations that State work with the nuclear cooperation agreement partners to which the United States has transferred material to develop a baseline inventory of U.S. nuclear material overseas, and that DOE work with other federal agencies to develop a central repository to maintain data regarding U.S. inventories of nuclear material overseas. In addition to the three areas of general disagreement, DOE disagreed with our findings that (1) the U.S. interagency physical protection visit program lacked formal goals and (2) U.S. agencies have not established a formal process for coordinating and prioritizing interagency physical protection visits. 
During the course of our work, we found no evidence of an interagency agreed-upon list of program goals. In its comments, DOE stated that the formal goal of the program is to determine whether U.S.-obligated nuclear material at the partner country facility is being protected according to the intent of IAEA security guidelines. This is the first time the goal has been articulated to us as such. Moreover, we disagree with DOE’s second assertion that it has established a formal process for coordinating and prioritizing visits. Our report notes that we found DOE has not (1) worked with NRC and State to establish a plan and prioritize U.S. physical protection visits or (2) measured performance in a systematic way. In particular, our report notes that, in October 2009, a DOE Office of Nonproliferation and International Security official reported to us that the office had formulated a list of 10 countries that contained U.S. nuclear material and were priorities for physical protection teams to visit. However, a senior-level DOE nonproliferation official told us that DOE had not discussed this list with State, NRC, or other agency officials, and that it could not be considered an interagency agreed-upon list. In addition, NRC Office of International Programs officials told us they thought interagency coordination could be improved, and a State Bureau of International Security and Nonproliferation official told us that agency coordination has improved in the past 6 months. Moreover, as we further state in the report, in February 2011, DOE officials told us that the department is conducting a study of its methodology for prioritizing physical protection visits. In addition, in July 2011, in conjunction with the classification review for this report, DOE officials stated that while DOE, NRC, and State work together on coordinating U.S. 
government positions regarding priorities and procedures for the interagency physical protection program, no updated document exists that formalizes the process for planning, coordinating, and prioritizing U.S. interagency physical protection visits. We note that the documents that DOE refers to are internal DOE documents presented to GAO in 2008 and 2009 in response to questions regarding nuclear cooperation agreements. These documents are not interagency agreed-upon documents but reflect DOE’s views on determining which countries and facilities interagency physical protection teams should visit. Further, DOE officials in July 2011 stated that DOE, NRC, and State do not have an agreed-upon way to measure performance systematically, and that while the goals for the monitoring and evaluation activities have not yet been formalized through the necessary updated documents, a prioritized list of countries to visit does exist. These officials noted that the U.S. government is working to update its planning documents and examining its methodology for prioritizing physical protection visits. Any changes will be included in these updated documents. Therefore, we continue to believe that DOE should work with the other agencies to develop formal goals for and a systematic process for determining which foreign facilities to visit for future physical protection visits, and that the process should be formalized and agreed to by all agencies. NRC commented that, in order to demonstrate that U.S. nuclear material located abroad is potentially insecure, GAO made an assessment based on U.S. agencies not conducting activities that are, according to NRC, neither authorized nor required by U.S. law or by agreements negotiated under Section 123 of the AEA. In fact, we acknowledge that U.S. agencies are not required to conduct certain activities or collect certain information. Moreover, we do not suggest that agencies undertake activities that are not authorized by law. 
We recommend that the agencies either expand upon and refine outreach they are already conducting, contingent on the willingness of our cooperation agreement partners, or negotiate new terms in nuclear cooperation agreements as necessary. If the agencies find that they are unable to negotiate new terms, we recommend that Congress consider amending the AEA to require such terms. State commented that determining annual inventories and reconciliations of nuclear material, as well as establishing enhanced facility-by-facility reporting for those partners with which the United States already has an annual inventory reconciliation, is a DOE function, not a State function. We agree that DOE plays a vital role in carrying out these activities—once such bilaterally agreed-upon measures are in place. However, we believe it is appropriate to recommend that the Department of State—as the agency with the lead role in any negotiation regarding the terms and conditions of U.S. nuclear cooperation agreements—work with DOE and NRC to secure these measures with all U.S. partners. State also commented that there is a cost to the U.S. nuclear industry in terms of lost competitiveness should the requirements in U.S. nuclear cooperation agreements be strengthened to include better access to critical facilities for U.S. interagency physical protection teams. State provided no further information to support this point. Our report acknowledges that any change to the nuclear cooperation framework or authorizing legislation will be very sensitive and that flexibility in the agreements is necessary. We also stated that it may be possible to change the framework of agreements in a way that does not hamper the ability of the U.S. nuclear industry to remain competitive. While we would not want to alter these agreements in such a way that our nuclear industry is put at a competitive disadvantage, in our view, the security of U.S. 
nuclear material overseas should never be compromised to achieve a commercial goal. Finally, State asserted that interagency physical protection teams have been granted access to every site they have requested under the consultation terms of U.S. nuclear cooperation agreements. As a result, State believes the provisions of the current agreements are adequate. As we note in our report, access to partner facilities is not explicitly spelled out in the agreements and, in our view, this is a limitation for the U.S. agencies in obtaining timely and systematic access to partner nuclear facilities. While State may be technically correct that access has been granted, our report clearly shows that many sites believed to contain Category I quantities of U.S. nuclear material have been visited only after lengthy periods of time, or have not been visited at all. We continue to believe that enhanced physical protection access measures could help interagency teams ensure that they are able to visit sites containing U.S. nuclear material in a timely, systematic, and comprehensive fashion. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Energy and State, the Chairman of the Nuclear Regulatory Commission, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. We addressed the following objectives during our review: (1) assess U.S. agency efforts to account for U.S. nuclear material overseas, (2) assess the Department of Energy's (DOE) and other U.S. agencies' efforts to monitor and evaluate the physical security conditions of U.S. 
nuclear material subject to the terms of nuclear cooperation agreements, and (3) describe DOE's activities to secure or remove potentially vulnerable U.S. nuclear material at partner facilities. To assess U.S. agency efforts to account for U.S. nuclear material overseas, we reviewed relevant statutes, including the Atomic Energy Act of 1954 (AEA), as amended, as well as the texts of all current nuclear cooperation agreements. We obtained data from the Nuclear Materials Management and Safeguards System (NMMSS), a database jointly run by DOE and the Nuclear Regulatory Commission (NRC), which, among other things, maintains data on U.S. peaceful use exports and retransfers of enriched uranium and plutonium that have occurred since 1950, and reviewed DOE and GAO reviews of the NMMSS database. To assess the reliability of data in the NMMSS database, we interviewed officials from DOE and NRC and a former DOE contractor to identify any limitations in NMMSS's data on the location and status of U.S. material overseas and found these data to be sufficiently reliable for the purposes of accounting for U.S. exports of nuclear material. We compared NMMSS data with other official and unofficial DOE sources of information regarding U.S. nuclear material transfers, including DOE data on nuclear material returns, to determine the reliability of DOE's inventory data for U.S. nuclear material transferred overseas. We reviewed DOE, NRC, and other U.S. agency records and interviewed officials at those agencies to determine the extent to which DOE, NRC, and State are able to identify where U.S. nuclear material was exported, retransferred, and is currently held. We selected a non-probability sample of partners based on, among other considerations, quantities of U.S. special nuclear material transferred to them. Results of interviews of non-probability samples are not generalizable to all partners but provide an understanding of those partners' views of the U.S. 
government’s efforts to account for its nuclear material inventories overseas subject to nuclear cooperation agreement terms. We conducted site visits in four countries holding U.S.-obligated material and interviewed governmental officials and nuclear facility operators in these countries to discuss material accounting procedures. Further, we interviewed officials from five partners regarding their observations about working with the U.S. government to account for material subject to the terms of nuclear cooperation agreements. We analyzed the texts of administrative arrangements with key countries to determine the extent to which DOE conducts inventory reconciliations of inventory transferred between the United States and a partner country. To assess DOE’s and other U.S. agencies’ efforts to monitor and evaluate the physical security conditions of U.S. nuclear material overseas subject to nuclear cooperation agreement terms and describe DOE’s activities to secure or remove potentially vulnerable U.S. nuclear material at partner facilities, we reviewed all U.S. nuclear cooperation agreements in force, as well as other U.S. statutes, and IAEA’s security guidelines, “The Physical Protection of Nuclear Material and Nuclear Facilities,” INFCIRC/225/Rev.4, and other relevant international conventions to determine the extent to which such laws and international conventions provide for DOE and U.S. agencies to monitor and evaluate the physical security of transferred U.S. nuclear material subject to U.S. nuclear cooperation agreement terms. We interviewed officials from DOE, NRC, and the Department of State (State) to gain insights into how effective their efforts are, and how their efforts might be improved. We selected a nonprobability sample of partners based on, among other considerations, quantities of U.S. special nuclear material transferred to them and interviewed officials to determine how DOE and other U.S. 
agencies work with partner countries to exchange views on physical security and the process by which U.S. nuclear material is returned to the United States. Results of interviews of non-probability samples are not generalizable to all partners but provide an understanding of those partners' views of the U.S. government's efforts to monitor and evaluate the physical security conditions of U.S. nuclear material overseas subject to nuclear cooperation agreement terms. We also obtained and analyzed the records of all available U.S. physical protection visits to partner facilities from 1974 through 2010. We reviewed agency documents and interviewed officials from DOE, NRC, and State regarding the policies and procedures for determining which partners to visit, how they conducted physical protection visits at partner facilities, and mechanisms for following up on the results of these visits. In particular, we compared the sites visited with NMMSS records of U.S. material exported and retransferred, and other information to evaluate the extent to which U.S. physical protection visits were made to all sites overseas containing U.S. special nuclear material. We obtained written responses from the Global Threat Reduction Initiative (GTRI) and reviewed other information regarding its program activities. To better understand IAEA's role in maintaining safeguards and evaluating physical security measures, we interviewed IAEA officials and reviewed relevant documents. We conducted this performance audit from September 2010 to June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
The United States currently has 27 agreements in force for peaceful nuclear cooperation with foreign countries, the European Atomic Energy Community (EURATOM), the International Atomic Energy Agency (IAEA), and Taiwan. Figure 1 shows the partner countries with which the United States currently has or previously had a nuclear cooperation agreement. As indicated in figure 1, the United States has nuclear cooperation agreements in force with Argentina, Australia, Bangladesh, Brazil, Canada, China, Colombia, EURATOM, Egypt, India, Indonesia, IAEA, Japan, Kazakhstan, Morocco, Norway, Peru, Russia, South Africa, South Korea, Switzerland, Taiwan, Thailand, Turkey, Ukraine, and the United Arab Emirates. In addition, the United States previously had nuclear cooperation agreements with Chile, the Dominican Republic, Iran, Israel, Lebanon, New Zealand, Pakistan, the Philippines, Uruguay, Venezuela, and Vietnam. In addition to the individual named above, Glen Levis, Assistant Director; Antoinette Capaccio; Julia Coulter; Michelle Munn; and Alison O'Neill made key contributions to this report.
The United States has exported special nuclear material, including enriched uranium, and source material such as natural uranium under nuclear cooperation agreements. The United States has 27 nuclear cooperation agreements for peaceful civilian cooperation. Under the U.S. Atomic Energy Act of 1954 (AEA), as amended, partners are required to guarantee the physical protection of U.S. nuclear material. GAO was asked to (1) assess U.S. agency efforts to account for U.S. nuclear material overseas, (2) assess the Department of Energy's (DOE) and U.S. agencies' efforts to evaluate the security of U.S. material overseas, and (3) describe DOE's activities to secure or remove potentially vulnerable U.S. nuclear material at partner facilities. GAO analyzed agency records and interviewed DOE, Nuclear Regulatory Commission (NRC), Department of State (State), and partner country officials. This report summarizes GAO's classified report issued in June 2011. DOE, NRC, and State are not able to fully account for U.S. nuclear material overseas that is subject to nuclear cooperation agreement terms because the agreements do not stipulate systematic reporting of such information, and there is no U.S. policy to pursue or obtain such information. U.S. nuclear cooperation agreements generally require that partners report inventory information upon request, however, DOE and NRC have not systematically sought such data. DOE and NRC do not have a comprehensive, detailed, current inventory of U.S. nuclear material--including weapon-usable material such as highly enriched uranium (HEU) and separated plutonium--overseas that includes the country, facility, and quantity of material. In addition, NRC and DOE could not fully account for the current location and disposition of U.S. HEU overseas in response to a 1992 congressional mandate. U.S. agencies, in a 1993 report produced in response to the mandate, were able to verify the location of 1,160 kilograms out of 17,500 kilograms of U.S. 
HEU estimated to have been exported. DOE, NRC, and State have established annual inventory reconciliations with five U.S. partners, but not with the other partners to which the United States has transferred material or with which it trades. Nuclear cooperation agreements do not contain specific access rights that enable DOE, NRC, or State to monitor and evaluate the physical security of U.S. nuclear material overseas, and the United States relies on its partners to maintain adequate security. In the absence of access rights, DOE's Office of Nonproliferation and International Security, NRC, and State have conducted physical protection visits to monitor and evaluate the physical security of U.S. nuclear material at facilities overseas when permitted. However, the agencies have not systematically visited countries believed to be holding the highest proliferation risk quantities of U.S. nuclear material, or systematically revisited facilities not meeting international physical security guidelines in a timely manner. Of the 55 visits made from 1994 through 2010, U.S. teams found that countries met international security guidelines approximately 50 percent of the time. DOE has taken steps to improve security at a number of facilities overseas that hold U.S. nuclear material but faces constraints. DOE's Global Threat Reduction Initiative (GTRI) removes U.S. nuclear material from vulnerable facilities overseas but can only bring back materials that have an approved disposition pathway and meet the program's eligibility criteria. GTRI officials told GAO that, of the approximately 17,500 kilograms of HEU exported from the United States, 12,400 kilograms are currently not eligible for return to the United States. Specifically, GTRI reported that over 10,000 kilograms of U.S. HEU are believed to be in fuels from reactors in Germany, France, and Japan that have no disposition pathways in the United States and are adequately protected. In addition, according to GTRI, 2,000 kilograms of transferred U.S. 
HEU are located primarily in European Atomic Energy Community countries and are currently in use or adequately protected. GAO suggests, among other things, that Congress consider directing DOE and NRC to compile an inventory of U.S. nuclear material overseas. DOE, NRC, and State generally disagreed with GAO's recommendations, including that they conduct annual inventory reconciliations with all partners, stating they were unnecessary. GAO continues to believe that its recommendations could help improve the accountability of U.S. nuclear material in foreign countries.
The GPD program is one of six housing programs for homeless veterans administered by the Veterans Health Administration, which also undertakes outreach efforts and provides medical treatment for homeless veterans. VA officials told us in fiscal year 2007 they spent about $95 million on the GPD program to support two basic types of grants—capital grants to pay for the buildings that house homeless veterans and per diem grants for the day-to-day operational expenses. Capital grants cover up to 65 percent of housing acquisition, construction, or renovation costs. The per diem grants pay a fixed dollar amount for each day an authorized bed is occupied by an eligible veteran up to the maximum number of beds allowed by the grant—in 2007 the amount cannot exceed $31.30 per person per day. VA pays providers after they have housed the veteran, on a cost reimbursement basis. Reimbursement may be lower for providers whose costs are lower or are offset by funds for the same purpose from other sources. Through a network of over 300 local providers, consisting of nonprofit or public agencies, the GPD program offers beds to homeless veterans in settings free of drugs and alcohol that are supervised 24 hours a day, 7 days a week. Most GPD providers have 50 or fewer beds available, with the majority of providers having 25 or fewer. Program rules generally allow veterans to stay with a single GPD provider for 2 years, but extensions may be granted when permanent housing has not been located or the veteran requires additional time to prepare for independent living. Providers, however, have the flexibility to set shorter time frames. In addition, veterans are generally limited to a total of three stays in the program over their lifetime, but local VA liaisons may waive this limitation under certain circumstances. 
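The per diem reimbursement rules described above amount to a simple daily calculation. The following is a minimal illustrative sketch, not VA's actual payment system; the function name, parameters, and example figures (other than the $31.30 cap stated in the text) are assumptions made for illustration.

```python
# A minimal sketch of the GPD per diem reimbursement rule described above.
# From the text: in 2007 the rate could not exceed $31.30 per person per day,
# VA pays only for occupied beds up to the number authorized by the grant,
# and reimbursement is reduced when the provider's costs are lower or are
# offset by funds from other sources. Names here are illustrative only.

MAX_PER_DIEM_2007 = 31.30  # dollars per person per day (2007 cap)

def daily_reimbursement(occupied_beds: int,
                        authorized_beds: int,
                        daily_cost_per_bed: float,
                        offsetting_funds_per_bed: float = 0.0) -> float:
    """Return the total per diem payment for one day (simplified)."""
    # VA pays only for occupied beds, up to the number the grant authorizes.
    billable_beds = min(occupied_beds, authorized_beds)
    # Per-bed reimbursement is the provider's net cost, capped at the rate.
    net_cost = max(daily_cost_per_bed - offsetting_funds_per_bed, 0.0)
    rate = min(net_cost, MAX_PER_DIEM_2007)
    return billable_beds * rate

# 30 veterans seeking beds under a 25-bed grant, $28 net daily cost per bed:
print(daily_reimbursement(30, 25, 28.00))  # 700.0
```

In this hypothetical example, the provider is paid for only the 25 authorized beds, and at its $28 cost rather than the $31.30 cap, consistent with the cost reimbursement basis described above.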
The program’s goals are to help homeless veterans achieve residential stability, increase their income or skill levels, and attain greater self-determination. To meet VA’s minimum eligibility requirements for the program, individuals must be veterans and must be homeless. A veteran is an individual discharged or released from active military service. The GPD program excludes individuals with a dishonorable discharge, but it may accept veterans with shorter military service than required of veterans who seek VA health care. A homeless individual is a person who lacks a fixed, regular, adequate nighttime residence and instead stays at night in a shelter, institution, or public or private place not designed for regular sleeping accommodations. GPD providers determine if potential participants are homeless, but local VA liaisons determine if potential participants meet the program’s definition of veteran. VA liaisons are also responsible for determining whether veterans have exceeded their lifetime limit of three stays in the GPD program and for issuing a waiver to that rule when appropriate. Prospective GPD providers may identify additional eligibility requirements in their grant documents. While program policies are developed at the national level by VA program staff, the local VA liaisons designated by VA medical centers have primary responsibility for communicating with GPD providers in their area. VA reported that in fiscal year 2007, there were funds to support 122 full-time liaisons. Since fiscal year 2000, VA has quadrupled the number of available beds and significantly increased the number of admissions of homeless veterans to the GPD program in order to address some of the needs identified through its annual survey of homeless veterans. In fiscal year 2006, VA estimated that on a given night, about 196,000 veterans were homeless and an additional 11,100 transitional beds were needed to meet homeless veterans’ needs. 
However, this need was to be met through the combined efforts of the GPD program and other federal, state, or community programs that serve the homeless. VA had the capacity to house about 8,200 veterans on any given night in the GPD program. Over the course of the year, because some veterans completed the program in a matter of months and others left before completion, VA was able to admit about 15,400 veterans into the program, as shown in figure 1. Despite VA rules allowing stays of up to 2 years, veterans remained in the GPD program an average of 3 to 5 months in fiscal year 2006. The need for transitional housing beds continues to exceed capacity, according to VA’s annual survey of local areas served by VA medical centers. The number of transitional beds available nationwide from all sources increased to 40,600 in fiscal year 2006, but the need for beds increased as well. As a result, VA estimates that about 11,100 more beds are needed to serve the homeless, as shown in table 1. VA officials told us that they expect to increase the bed capacity of the GPD program to provide some of the needed beds. Most homeless veterans in the program had struggled with alcohol, drug, medical or mental health problems before they entered the program. Over 40 percent of homeless veterans seen by VA had served during the Vietnam era, and most of the remaining homeless veterans served after that war, including at least 4,000 who served in military or peacekeeping operations in the Persian Gulf, Afghanistan, Iraq, and other areas since 1990. About 50 percent of homeless veterans were between 45 and 54 years old, with 30 percent older and 20 percent younger. African-Americans were disproportionately represented at 46 percent, the same percentage as non-Hispanic whites. Almost all homeless veterans were men, and about 76 percent of veterans were either divorced or never married. 
An increasing number of homeless women veterans and veterans with dependents are in need of transitional housing according to VA officials and GPD providers we visited. The GPD providers told us in 2006 that women veterans had sought transitional housing; some recent admissions had dependents; and a few of their beds were occupied by the children of veterans, for whom VA could not provide reimbursement. VA officials said that they may have to reconsider the type of housing and services that they are providing with GPD funds in the future, but currently they provide additional funding in the form of special needs grants to a few GPD programs to serve homeless women veterans. VA’s grant process encourages collaboration between GPD providers and other service organizations. Addressing homelessness—particularly when it is compounded by substance abuse and mental illness—is a challenge involving a broad array of services that must be coordinated. To encourage collaboration, VA’s grants process awards points to prospective GPD providers who demonstrate in their grant documents that they have relationships with groups such as local homeless networks, community mental health or substance abuse agencies, VA medical centers, and ancillary programs. The grant documents must also specify how providers will deliver services to meet the program’s three goals—residential stability, increased skill level or income, and greater self-determination. The GPD providers we visited often collaborated with VA, local service organizations, and other state and federal programs to offer the broad array of services needed to help veterans achieve the three goals of the GPD program. Several providers worked with the local homeless networks to identify permanent housing resources, and others sought federal housing funds to build single-room occupancy units for temporary use until more permanent long-term housing could be developed. 
All providers we visited tried to help veterans obtain financial benefits or employment. Some had staff who assessed a veteran’s potential eligibility for public benefits such as food stamps, Supplemental Security Income, or Social Security Disability Insurance. Other providers relied on relationships with local or state officials to provide this assessment, such as county veterans’ service officers who reviewed veterans’ eligibility for state and federal benefits or employment representatives who assisted with job searches, training, and other employment issues. GPD providers also worked collaboratively to provide health care-related services—such as mental health and substance abuse treatment, and family and nutritional counseling. While several programs used their own staff or their partners’ staff to provide mental health or substance abuse services and counseling directly, some GPD providers referred veterans off site—typically, to a VA local medical center. Despite GPD providers’ efforts to collaborate and leverage resources, GPD providers and VA staff noted gaps in key services and resources, particularly affordable permanent housing for veterans ready to leave the GPD program. Providers also identified lack of transportation, legal assistance, affordable dental care, and immediate access to substance abuse treatment facilities as obstacles for transitioning veterans out of homelessness. VA staff in some of the GPD locations we visited told us that transportation issues made it difficult for veterans to get to medical appointments or employment-related activities. While one GPD provider we visited was able to overcome transportation challenges by partnering with the local transit company to obtain subsidies for homeless veterans, transportation remained an issue for GPD providers that could not easily access VA medical centers by public transit. 
Providers said difficulty in obtaining legal assistance to resolve issues related to criminal records or credit problems presented challenges in helping veterans obtain jobs or permanent housing. In addition, some providers expressed concerns about obtaining affordable dental care and about wait lists for veterans referred to VA for substance abuse treatment. We found that some providers and staff did not fully understand certain GPD program policies—which in some cases may have affected veterans’ ability to get care. For instance, providers did not always have an accurate understanding of the eligibility requirements and program stay rules, despite VA’s efforts to communicate its program rules to GPD providers and VA liaisons who implement the program. Some providers were told incorrectly that veterans could not participate in the GPD program unless they were eligible for VA health care. Several providers understood the lifetime limit of three GPD stays but may not have known or believed that VA had the authority to waive this rule. As a consequence, we recommended that VA take steps to ensure that its policies are understood by the staff and providers with responsibility for implementing them. In response to our recommendation that VA take steps to ensure that its policies are understood by the staff and providers with responsibility for implementing them, VA took several steps in 2007 to improve communications with VA liaisons and GPD providers, such as calling new providers to explain policies and summarizing their regular quarterly conference calls on a new Web site, along with new or updated manuals. Language on the number and length of allowable stays in the providers’ guide has not changed, however. VA assesses performance in two ways—the outcomes for veterans at the time they leave the program and the performance of individual GPD providers. 
VA’s data show that since 2000, a generally steady or increasing percentage of veterans met each of the program’s three goals at the time they left the GPD program. Since 2000, proportionately more veterans are leaving the program with housing or with a better handle on their substance abuse or health issues. During 2006, over half of veterans obtained independent housing when they left the GPD program, and another quarter were in transitional housing programs, halfway houses, hospitals, nursing homes, or similar forms of secured housing. Nearly one-third of veterans had jobs, mostly on a full-time basis, when they left the GPD program. One-quarter were receiving VA benefits when they left the GPD program, and one-fifth were receiving other public benefits such as Supplemental Security Income. Significant percentages also demonstrated progress in handling alcohol, drug, mental health, or medical problems and overcoming deficits in social or vocational skills. For example, 67 percent of veterans admitted with substance problems showed progress in handling these problems by the time they left. Table 2 indicates the numbers or percentages involved. VA’s Office of Inspector General (OIG) found when it visited GPD providers in 2005-2006 that VA officials had not been consistently monitoring the GPD providers’ annual performance as required. The GPD program office has since moved to enforce the requirement that VA liaisons review GPD providers’ performance when the VA team comes on-site each year to inspect the GPD facility. To assess the veterans’ success, VA has relied chiefly on measures of veterans’ status at the time they leave the GPD program rather than obtaining routine information on their status months or years later. In part, this has been due to concerns about the costs, benefits, and feasibility of more extensive follow-up. However, VA completed a onetime study in January 2007 that a VA official told us cost about $1.5 million. 
The study looked at the experience of a sample of 520 veterans who participated in the GPD program in five geographic locations, including 360 who responded to interviews a year after they had left the program. Generally, the findings confirm that veterans’ status at the time they leave the program can be maintained. We recommended that VA explore feasible and cost-effective ways to obtain information on how veterans are faring after they leave the program. We suggested that where possible they could use data from GPD providers and other VA sources, such as VA’s own follow-up health assessments and GPD providers’ follow-up information on the circumstances of veterans 3 to 12 months later. VA concurred and told us in 2007 that VA’s Northeast Program Evaluation Center is piloting a new form to be completed electronically by VA liaisons for every veteran leaving the GPD program. The form asks for the veterans’ employment and housing status, as well as involvement, if any, in substance abuse treatment, 1 month after they have left the program. While following up at 1 month is a step in the right direction, additional information at a later point would yield a better indication of longer-term success. Mr. Chairman, this concludes my remarks. I would be happy to answer any questions that you or other members of the subcommittee may have. For further information, please contact Daniel Bertoni at (202) 512-7215. Also contributing to this statement were Shelia Drake, Pat Elston, Lise Levie, Nyree M. Ryder, and Charles Willson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Subcommittee on Health of the Committee on Veterans' Affairs asked GAO to discuss its recent work on the Department of Veterans Affairs' (VA) Homeless Providers Grant and Per Diem (GPD) program. GAO reported on this subject in September 2006, focusing on (1) VA's estimates of the number of homeless veterans and transitional housing beds, (2) the extent of collaboration involved in the provision of GPD and related services, and (3) VA's assessment of program performance. VA estimates that about 196,000 veterans nationwide were homeless on a given night in 2006, based on its annual survey, and that the number of transitional beds available through VA and other organizations was not sufficient to meet the needs of eligible veterans. The GPD program has quadrupled its capacity to provide transitional housing for homeless veterans since 2000, and additional growth is planned. As the GPD program continues to grow, VA and its providers are also grappling with how to accommodate the needs of the changing homeless veteran population that will include increasing numbers of women and veterans with dependents. The GPD providers we visited collaborated with VA, local service organizations, and other state and federal programs to offer a broad array of services designed to help veterans achieve the three goals of the GPD program--residential stability, increased skills or income, and greater self-determination. However, most GPD providers noted key service and communication gaps that included difficulties obtaining affordable permanent housing and knowing with certainty which veterans were eligible for the program, how long they could stay, and when exceptions were possible. 
VA data showed that many veterans leaving the GPD program were better off in several ways--over half had successfully arranged independent housing, nearly one-third had jobs, one-quarter were receiving benefits, and significant percentages showed progress with substance abuse, mental health or medical problems or demonstrated greater self-determination in other ways. Some information on how veterans fare after they leave the program was available from a onetime follow-up study of 520 program participants, but such data are not routinely collected. We recommended that VA take steps to ensure that GPD policies and procedures are consistently understood and to explore feasible means of obtaining information about the circumstances of veterans after they leave the GPD program. VA concurred and, following our review, has taken several steps to improve communications and to develop a process to track veterans' progress shortly after they leave the program. However, following up at a later point might yield a better indication of success.
The federal Food Stamp Program is intended to help low-income individuals and families obtain a more nutritious diet by supplementing their income with benefits to purchase nutritious food such as meat, dairy products, fruits, and vegetables, but not items such as soap, tobacco, or alcohol. The Food and Nutrition Service (FNS) pays the full cost of food stamp benefits and shares the states’ administrative costs—with FNS usually paying approximately 50 percent—and is responsible for promulgating program regulations and ensuring that state officials administer the program in compliance with program rules. The states administer the program by determining whether households meet the program’s income and asset requirements, calculating monthly benefits for qualified households, and issuing benefits to participants on an electronic benefits transfer card. In fiscal year 2005, the Food Stamp Program issued almost $28.6 billion in benefits to about 25.7 million individuals participating in the program, and the maximum monthly food stamp benefit for a household of four living in the continental United States was $506. As shown in figure 1, program participation increased sharply from 2000 to 2005 following a substantial decline, and the number of food stamp recipients follows the trend in the number of people living at or below the federal poverty level. In addition to the economic growth in the late 1990s, another factor contributing to the decrease in number of participants from 1996 to 2001 was the passage of the Personal Responsibility and Work Opportunity Act of 1996 (PRWORA), which toughened eligibility criteria and had the effect of untethering food stamps from cash assistance. Since 2000, that downward trend has reversed, and stakeholders believe that the downturn in the U.S. 
economy, coupled with changes in the program’s rules and administration, has led to an increase in the number of food stamp participants. Eligibility for participation in the Food Stamp Program is based on the Department of Health and Human Services’ poverty measures for households. The caseworker must first determine the household’s gross income, which cannot exceed 130 percent of the poverty level for that year (or about $1,799 per month for a family of three living in the contiguous United States in fiscal year 2007). Then the caseworker must determine the household’s net income, which cannot exceed 100 percent of the poverty level (or about $1,384 per month for a family of three living in the contiguous United States in fiscal year 2007). Net income is determined by deducting from gross income expenses such as dependent care costs, medical expenses, utilities costs, and shelter expenses. In addition, there is a limit of $2,000 in household assets, and basic program rules limit the value of vehicles an applicant can own and still be eligible for the program. If the household owns a vehicle worth more than $4,650, the excess value is included in calculating the household’s assets. FNS and the states share responsibility for implementing an extensive quality control (QC) system used to measure the accuracy of Food Stamp payments and from which state and national error rates are determined. Under FNS’s quality control system, the states calculate their payment errors by drawing a statistical sample to determine whether participating households received the correct benefit amount. The state’s error rate is determined by weighting the dollars paid in error divided by the state’s total issuance of food stamp benefits. Once the error rates are final, FNS is required to compare each state’s performance with the national error rate and imposes penalties or provides incentives according to specifications in law. 
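The income and asset tests described above can be sketched as a simple rule check. This is an illustrative sketch only: the thresholds are the fiscal year 2007 monthly figures for a family of three in the contiguous United States quoted in the text, and the function names are invented for illustration, not part of any actual FNS system.

```python
# Illustrative sketch of the eligibility rules described above.
# Thresholds are the FY 2007 monthly figures for a family of three in the
# contiguous United States quoted in the text; names here are invented.
GROSS_INCOME_LIMIT = 1799   # 130 percent of the poverty level, monthly
NET_INCOME_LIMIT = 1384     # 100 percent of the poverty level, monthly
ASSET_LIMIT = 2000
VEHICLE_EXEMPTION = 4650    # only vehicle value above this counts as an asset

def countable_assets(other_assets, vehicle_value):
    # The excess vehicle value over the exemption is included in assets.
    return other_assets + max(0, vehicle_value - VEHICLE_EXEMPTION)

def meets_income_and_asset_tests(gross_income, deductions,
                                 other_assets, vehicle_value):
    # Gross income test first, then net income (gross minus dependent care,
    # medical, utilities, and shelter deductions), then the asset limit.
    if gross_income > GROSS_INCOME_LIMIT:
        return False
    if gross_income - deductions > NET_INCOME_LIMIT:
        return False
    return countable_assets(other_assets, vehicle_value) <= ASSET_LIMIT
```

Note how a household can pass the gross income test yet still fail on net income or assets, which is part of why caseworker calculations are error-prone.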
The Farm Security and Rural Investment Act of 2002 (the 2002 Farm Bill) changed the Food Stamp Program’s quality control system by making only those states with persistently high error rates face liabilities. The 2002 Farm Bill also provided for $48 million in bonuses each year to be awarded to states with high or most improved performance, including actions taken to correct errors, reduce error rates, improve eligibility determinations, and other indicators of effective administration as approved by the Secretary of Agriculture. Every year, food stamp recipients exchange hundreds of millions of dollars in benefits for cash instead of food with authorized retailers across the country, a practice known as trafficking. In a typical trafficking situation, a retailer gives a food stamp recipient a discounted amount of cash—commonly 50 cents on the dollar—in exchange for food stamp benefits and pockets the difference. By trafficking, retailers commit fraud and undermine the primary purpose of the program, which is to help provide food to low-income individuals and families. Recipients who traffic deprive themselves and their families of the intended nutritional benefits. FNS has the primary responsibility for authorizing retailers to participate in the Food Stamp Program, monitoring their compliance with requirements, and administratively disqualifying those who are found to have trafficked food stamp benefits. At the end of fiscal year 2005, more than 160,000 retailers were authorized to accept food stamp benefits. Supermarkets account for only about 22 percent of the authorized stores but redeem the lion’s share (about 86 percent) of food stamp benefits. To become an authorized retailer, a store must offer on a continuing basis a variety of foods in each of the four staple food categories—meats, poultry or fish; breads or cereals; vegetables or fruits; and dairy products—or 50 percent of its sales must be in a staple group such as meat or bakery items. 
However, the regulations do not specify how many food items retailers should stock. The store owner submits an application and includes forms of identification such as copies of the owner’s Social Security card, driver’s license, business license, liquor license, and alien resident card. The FNS field office program specialist then checks the applicant’s Social Security number against FNS’s database of retailers, the Store Tracking and Redemption System, to see if the applicant has previously been sanctioned in the Food Stamp Program. The application also collects information on the type of business, store hours, number of employees, number of cash registers, the types of staple foods offered, and the estimated annual amount of gross sales and eligible food stamp sales. PRWORA required each state agency to implement an EBT system to electronically distribute food stamp benefits, and the last state completed its implementation in fiscal year 2004. Prior to EBT, recipients used highly negotiable food stamp coupons to pay for allowable foods. Under the EBT system, food stamp recipients receive an EBT card imprinted with their name and a personal account number, and food stamp benefits are automatically credited to the recipients’ accounts once a month. In a legitimate food stamp transaction, recipients run their EBT card, which works much like a debit card, through an electronic point-of-sale machine at the grocery checkout counter, and enter their secret personal identification number to access their food stamp accounts. This authorizes the transfer of food stamp benefits from a federal account to the retailer’s account to pay for the eligible food items. The legitimate transaction contrasts with a trafficking transaction in which recipients swipe their EBT card, but instead of buying groceries, they receive a discounted amount of cash and the retailer pockets the difference. 
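The dollar mechanics of the trafficking transaction just described reduce to a line of arithmetic. The function below is a hypothetical illustration; the 50-cents-on-the-dollar rate is the common rate cited earlier in the text.

```python
# Illustrative arithmetic for the trafficking scheme described above,
# at the commonly cited rate of 50 cents on the dollar.
def trafficking_split(benefits_swiped, cash_rate=0.50):
    """Return (cash paid to the recipient, amount pocketed by the retailer)
    for a trafficking transaction of the given benefit amount."""
    cash_to_recipient = benefits_swiped * cash_rate
    retailer_profit = benefits_swiped - cash_to_recipient
    return cash_to_recipient, retailer_profit
```

At that rate, every $100 of benefits trafficked yields $50 of cash to the recipient and $50 of profit to the retailer, with the full $100 of intended nutritional benefit lost.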
In addition to approving retailers to participate in the program, FNS has the primary responsibility for monitoring their compliance with requirements and administratively disqualifying those who are found to have trafficked food stamp benefits. FNS headquarters officials collect and monitor EBT transaction data to detect suspicious patterns of transactions by retailers. They then send any leads to FNS program specialists in the field office who either work the cases themselves or refer them to undercover investigators in the Retailer Investigations Branch to pursue by attempting to traffic food stamps for cash. The national payment error rate for the Food Stamp Program combines states’ overpayments and underpayments to program participants and has declined by about 40 percent over the last 7 years, from 9.86 percent in 1999 to a record low of 5.84 percent in 2005, in a time of increasing participation (see figure 2 below). FNS and the states we reviewed have taken many approaches to improving food stamp payment accuracy, most of which are parallel with internal control practices known to reduce improper payments. Despite this progress, improper food stamp payments continue to account for a large amount of money—about $1.7 billion in 2005—and similar error rate reductions may prove challenging given that the program remains complex. If the 1999 error rate had been in effect in 2005, the program would have made payment errors totaling over $2.8 billion rather than the $1.7 billion it experienced. Improper payments can be in the form of overpayments or underpayments to food stamp recipients. In fiscal year 2005, food stamp payment errors totaled about $1.7 billion in benefits. 
This sum represents about 6 percent of the total $28.6 billion in benefits provided that year to a monthly average of 25.7 million low-income program participants. Of the total $1.7 billion in payment error in fiscal year 2005, $1.3 billion, or about 78 percent, were overpayments. Overpayments occur when eligible persons are provided more than they are entitled to receive or when ineligible persons are provided benefits. Underpayments, which occur when eligible persons are paid less than they are entitled to receive, totaled $374 million, or about 22 percent of dollars paid in error, in fiscal year 2005. Error rates fell in 41 states and the District of Columbia, and 18 states reduced their error rates by one-third or more between fiscal years 1999 and 2003. Further, the 5 states that issue the most food stamp benefits reduced their error rates by an average of 36 percent during this period. For example, Illinois’ error rate dropped from 14.79 in 1999 to 4.87 in 2003, and New York’s error rate dropped from 10.47 to 5.88 in those same years. In addition, 21 states had error rates below 6 percent in 2003; this is an improvement from 1999, when 7 states had error rates below 6 percent. However, payment error rates vary among states. Despite the decrease in many states’ error rates, some states continue to have high payment error rates. We found that almost two-thirds of the payment errors in the Food Stamp Program are caused by caseworkers, usually when they fail to act on new information or when they make mistakes when applying program rules, and one-third are caused by participants, when they unintentionally or intentionally do not report needed information or provide incomplete or incorrect information (see fig. 3). As shown below, 5 percent of participant-caused errors were referred for potential fraud investigations in fiscal year 2003. Program complexity and other factors, such as the lack of resources and staff turnover, can contribute to caseworker mistakes. 
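The dollar figures above follow directly from applying each year's error rate to total issuance. The back-of-the-envelope check below is illustrative only; actual error rates come from weighted statistical QC samples, not this simple multiplication.

```python
# Back-of-the-envelope check of the figures in the text. Actual error
# rates come from weighted statistical QC samples, not this arithmetic.
TOTAL_ISSUANCE_2005 = 28.6e9   # dollars of benefits issued in FY 2005

def payment_errors(error_rate, issuance=TOTAL_ISSUANCE_2005):
    # Dollars paid in error at a given error rate and issuance level.
    return error_rate * issuance

errors_at_2005_rate = payment_errors(0.0584)   # roughly $1.7 billion
errors_at_1999_rate = payment_errors(0.0986)   # over $2.8 billion
```

The gap between the two results, roughly $1.1 billion, is the savings attributable to the error-rate decline at the 2005 issuance level.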
Despite the decrease in error rate in recent years, these factors remained the key causes of payment error between 1999 and 2003. We also found that income-related errors account for more than half of all payment errors. [Figure 3: Participant-caused error (35 percent); caseworker-caused error (65 percent)] We found that FNS and the states we reviewed have taken many approaches to increasing food stamp payment accuracy, most of which are parallel with internal control practices known to reduce improper payments. These include practices to improve accountability, perform risk assessments, implement changes based on such assessments, and monitor program performance. Often, several practices are tried simultaneously, making it difficult to determine which have been the most effective. States we reviewed adopted a combination of practices to prevent, minimize, and address payment accuracy problems, such as increasing the awareness of, and the accountability for, payment error; analyzing quality control data to identify causes of common payment errors and develop corrective actions; making automated system changes to prompt workers to obtain complete documentation from clients; developing specialized change units that focus on acting upon reported changes; and verifying the accuracy of benefit payments calculated by state food stamp workers through supervisory and other types of case file reviews. For example, in California, state and local officials employed a combination of practices under each internal control component over the last several years to bring about their improved error rate. State officials reported expanding state oversight, hiring a contractor to perform assessments and provide training to larger counties with higher error rates, preparing detailed error analyses, and implementing a quality assurance case review system in Los Angeles County, which accounted for 40 percent of the state’s caseload. 
California state officials credit the adoption of a combination of approaches as the reason for the state’s dramatic error rate reduction from 17.37 percent in fiscal year 2001 to 6.38 percent in fiscal year 2005, even as the number of cases increased. In addition, 47 states have adopted some form of simplified reporting, one of the options FNS and Congress made available to states, which has since been shown to have contributed to the reduction in the payment error rate. FNS and Congress made several options available to the states to simplify the application and reporting process. Under the simplified reporting rule issued in November 2000 and expanded under the 2002 Farm Bill, most households need only report changes between certification periods if their new household income exceeds 130 percent of the federal poverty level. This simplified reporting option can reduce a state’s error rate by minimizing the number of income changes that must be reported between certifications and thereby reducing errors associated with caseworker failure to act as well as participant failure to report changes. FNS has taken several steps to increase payment accuracy, such as using its quality control system to provide sanctions and incentives to encourage states to reduce their payment error rates, tracking the success of state initiatives, and providing information needed to facilitate program improvement. FNS has long focused its attention on states’ accountability for error rates through its QC system by assessing penalties and providing financial incentives. The administration of the QC process and its system of performance bonuses and sanctions is credited as being the single largest motivator of program behavior. In fiscal year 2005, 8 states were found to be in jeopardy of being penalized if their fiscal year 2006 error rates do not improve. 
Some states have expressed concern that they may improve their error rates and yet still be penalized because the national rate continues to drop around them. In addition, under its new performance bonus system, each fiscal year FNS has awarded a total of $48 million to states, including $24 million to states with the lowest and most improved error rates and $6 million to states with the lowest and most improved negative error rate. FNS has also taken many actions to track the success of improvement initiatives and to provide the information needed to facilitate program improvement. FNS managers and regional office staff use QC data to monitor states’ performance over time, conduct annual reviews of state operations, and where applicable, monitor the states’ implementation of corrective action plans. FNS, in turn, requires states to perform management evaluations to monitor whether adequate corrective action plans are in place at local offices to address the causes of persistent errors and deficiencies. In addition, in November of 2003, FNS created a Payment Accuracy Branch at the national level to work with FNS regions to suggest policy and program changes and to monitor state performance. The branch facilitates a National Payment Accuracy Workgroup with representatives from each FNS regional office and headquarters who use QC data to review and categorize state performance into one of three tiers. FNS has recommended a specific level of increasing intervention and monitoring approaches for each tier when error rates increase, and the FNS regional offices report to headquarters on both state actions and regional interventions quarterly. 
FNS also provides and facilitates the exchange of information gleaned from monitoring by publishing a periodic guide to highlight the practices states are using to reduce errors; sponsoring national and regional conferences and best practices seminars; training state QC staff; providing state policy training and policy interpretation and guidance; and supporting adoption of program simplification options. Once promising state practices have been identified, FNS also provides funding to state and local food stamp officials to promote knowledge sharing of good practices. Despite the progress in reducing payment errors, future similar error rate reductions may prove challenging. The three major causes of errors have remained the same over time and are closely linked to the complexity of program rules and reporting requirements. As long as eligibility requirements remain so detailed and complex, certain caseworker decisions will be at risk of error. Moreover, participant-caused errors, which constitute one-third of the overall national errors, are difficult to prevent and identify. Since the early 1990s, trafficking has declined by about 74 percent. FNS estimates that between 2002 and 2005, about $241 million in food stamp benefits was trafficked annually, or about 1.0 cent per dollar of benefits issued. Trafficking occurs more frequently in small convenience stores, and often, we found, between store owners and food stamp recipients with whom they were familiar. FNS has taken advantage of EBT and other new technology to improve its ability to detect trafficking and disqualify retailers who traffic, while law enforcement agencies have investigated and referred for prosecution a decreasing number of traffickers, instead focusing their efforts on fewer high-impact investigations. 
Despite the progress FNS has made in combating retailer trafficking, the Food Stamp Program remains vulnerable because retailers can enter the program intending to traffic and do so, often without fear of severe criminal penalties, as the declining number of investigations referred for prosecution suggests. The national rate of food stamp trafficking declined from about 3.8 cents per dollar of benefits redeemed in 1993 to about 1.0 cent per dollar during the years 2002 to 2005, as shown in table 1. Overall, the estimated rate of trafficking at small stores is much higher than the estimated rate for supermarkets and large groceries, which redeem most food stamp benefits. The rate of trafficking in small stores is an estimated 7.6 cents per dollar and an estimated 0.2 cents per dollar in large stores. With the implementation of EBT, FNS has supplemented its traditional undercover investigations by the Retailer Investigations Branch with cases developed by analyzing EBT transaction data. The nationwide implementation of EBT has given FNS powerful new tools to supplement its traditional undercover investigations of retailers suspected of trafficking food stamp benefits. FNS traditionally sent its investigators into stores numerous times over a period of months to attempt to traffic benefits. However, PRWORA gave FNS the authority to charge retailers with trafficking in cases based solely on EBT transaction evidence, called “paper cases.” A major advantage of paper cases is that they can be prepared relatively quickly and without multiple store visits. These EBT cases now account for more than half of the permanent disqualifications by FNS (see fig. 4). Although the number of trafficking disqualifications based on undercover investigations has declined, these investigations continue to play a key role in combating trafficking. 
However, as FNS’s ability to detect trafficking has improved, the number of suspected traffickers investigated by other federal entities, such as the USDA Inspector General and the U.S. Secret Service, has declined. These entities have focused more on a smaller number of high-impact investigations. As a result, retailers who traffic are less likely to face severe criminal penalties or prosecution. Despite the progress FNS has made in combating retailer trafficking, the Food Stamp Program remains vulnerable because retailers can enter the program intending to traffic and do so, often without fear of severe criminal penalties, as the declining number of investigations referred for prosecution suggests. FNS field office officials told us their first priority is getting stores into the program to ensure needy people have access to food, and therefore they sometimes authorize stores that stock limited food supplies but meet the minimum requirements in areas with few larger grocery stores. However, once authorized, some dishonest retailers do not maintain adequate food stock and focus more on trafficking food stamp benefits than on selling groceries, according to FNS officials, and 5 years may pass before FNS checks the stock again unless there is an indication of a problem with the store. Oversight of retailers’ entry into the program and early operations is important because newly authorized retailers can quickly ramp up the amount of food stamps they traffic, and there is no limit on the value of food stamps a retailer can redeem in 1 month. At one field office location where retailers are often innovative in their trafficking schemes, FNS officials noticed that some retailers quickly escalated their trafficking within 2 to 3 months after their initial authorization. As shown in figure 5, one disqualified retailer’s case file we reviewed at that field office showed the store went from $500 in monthly food stamp redemptions to almost $200,000 within 6 months. 
Redemption activity dropped precipitously after the trafficking charge letter was sent to the retailer in late October of 2004. In its application for food stamp authorization, this retailer estimated he would have $180,000 of total annual food sales, yet the retailer was redeeming more than that each month in food stamp benefits before being caught in a Retailer Investigations Branch investigation. FNS has made good use of EBT transaction data. However, FNS has not conducted the analyses to identify high risk areas and to target their compliance-monitoring resources to the areas of highest risk. For example, our analysis of FNS’s database of retailers showed that of the 9,808 stores permanently disqualified from the Food Stamp Program, about 35 percent were in just 4 states: New York, Illinois, Texas, and Florida, yet about 26 percent of food stamp recipients lived in those states. However, FNS headquarters officials did not know the number of program specialists in the field offices in these states who devote a portion of their time to monitoring food stamp transactions and initiating paper cases. In addition, some retailers and store locations have a history of program violations that lead up to permanent disqualifications, but FNS did not have a system in place to ensure these stores were quickly targeted for heightened attention. Our analysis showed that, of the 9,808 stores that had been permanently disqualified from the program, about 90 percent were disqualified for their first detected offense. However, 9.4 percent of the disqualified retailers had shown early indications of problems before being disqualified. About 4.3 percent of these retailers had received a civil money penalty, 4.3 percent had received a warning letter for program violations, and 0.8 percent had received a temporary disqualification. 
Most of these stores were small and may present a higher risk of future trafficking than others, yet FNS does not necessarily target them for speedy attention. Further, some store locations may be at risk of trafficking because a series of different owners had trafficked there. After an owner was disqualified, field office officials told us the store would reopen under new owners who continued to traffic with the store’s clientele. As table 2 shows, our analysis of FNS’s database of retailers found that about 174, or 1.8 percent, of the store addresses had a series of different owners over time who had been permanently disqualified for trafficking at that same location, totaling 369 separate disqualifications. In one case, a store in the District of Columbia had 10 different owners who were each disqualified for trafficking, consuming FNS’s limited compliance-monitoring resources. Our analysis of the data on these stores with multiple disqualified owners indicates that FNS officials found this type of trafficking in a handful of cities and states. Almost 60 percent of repeat store locations were in 6 states, and 44 percent were in 8 cities, often concentrated in small areas. For example, 14 repeat store locations were clustered in downtown areas of both Brooklyn and Baltimore. However, it is not clear whether these data indicate heightened efforts of compliance staff or whether trafficking is more common in these areas. Regardless, early monitoring of high-risk locations when stores change hands could be an efficient use of resources. In addition, states’ lack of focus can facilitate vendor trafficking. Paper cases often identify recipients suspected to have trafficked their food stamp benefits with a dishonest retailer, and some FNS field offices send a list of those recipients to the appropriate state. In response, some states actively pursue and disqualify these recipients. 
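One way to operationalize the early monitoring of high-risk locations suggested above is to flag addresses where several different owners have each been permanently disqualified. The sketch below is hypothetical; the data fields and function are invented for illustration and are not FNS's actual Store Tracking and Redemption System.

```python
# Hypothetical sketch: flag store addresses where several different owners
# have each been permanently disqualified for trafficking, so that a new
# owner at such an address receives early compliance monitoring.
def repeat_trafficking_locations(disqualifications, min_owners=2):
    """disqualifications: iterable of (address, owner_id) pairs drawn from
    permanent trafficking disqualification records."""
    owners_by_address = {}
    for address, owner in disqualifications:
        owners_by_address.setdefault(address, set()).add(owner)
    return {address for address, owners in owners_by_address.items()
            if len(owners) >= min_owners}
```

Applied to the data described in the report, such a check would surface the roughly 174 addresses with multiple disqualified owners, including the District of Columbia store with 10 successive disqualified owners.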
However, FNS field offices do not always send lists of suspected individual traffickers to states or counties administering the program, and not all states investigate the individuals on these lists. Instead of focusing on food stamp recipients who traffic their benefits, states are using their resources to focus on recipients who improperly collect benefits, according to FNS officials. This inaction by some states allows recipients suspected of trafficking to continue the practice, and such inaction also leaves a pool of recipients ready and willing to traffic their benefits as soon as a disqualified store reopens under new management. Finally, FNS penalties alone may not be sufficient to deter traffickers. The most severe FNS penalty that most traffickers face is disqualification from the program, and FNS must rely on other entities to conduct investigations that could lead to prosecution. For example, in the food-stamp-trafficking ramp-up case previously cited, this retailer redeemed almost $650,000 of food stamps over the course of 9 months before being disqualified from the program in November 2004. As of August 2006, there was no active investigation of this retailer. Improper food stamp payments and trafficking of benefits have declined in a time of rising participation, and although progress has been made, ensuring program integrity will continue to be a fundamental challenge facing the program. We found that payment error rates have declined substantially as FNS and states have taken steps to improve payment accuracy and that future reductions may prove challenging. Attention from top USDA management as well as continued support and assistance from FNS will likely continue to be important factors in further reductions. In addition, if error rates continue to decrease, this trend will continue to put pressure on states to improve because penalties are assessed using the state’s error rate as compared with the national average. 
We also found that FNS, using EBT data, has made significant progress in taking advantage of new opportunities to monitor and disqualify traffickers. However, a more focused effort to target and disqualify these stores could help FNS meet its continuing challenge of ensuring that stores are available and operating in areas of high need while still maintaining program integrity. Given the size of the Food Stamp Program, the costs to administer it, and the current federal budget deficit, achieving program goals more cost-effectively may become more important. FNS and the states will continue to face a challenge in balancing the goals of payment accuracy, increasing program participation rates, and the need to contain program costs. To reduce program vulnerabilities and better target its limited compliance-monitoring resources, we recommended in our October 2006 report on trafficking that FNS develop additional criteria to identify stores most likely to traffic; conduct risk assessments, using compliance and other data, to systematically identify stores and areas that meet these criteria, and allocate resources accordingly; and provide more targeted and early oversight of stores determined most likely to engage in trafficking. To provide further deterrence for trafficking, we recommended that FNS work to develop a strategy to increase the penalties for trafficking, working with the Inspector General as needed, and consider developing legislative proposals if the penalties entail additional authority. To promote state efforts to pursue recipients suspected of trafficking and thereby reduce the pool of recipient traffickers, we recommended that FNS ensure that FNS field offices report to states those recipients who are suspected of trafficking, and revisit the incentive structure to encourage states to investigate and take action against recipients who traffic. 
Department of Agriculture officials generally agreed with our findings, conclusions, and recommendations but raised a concern regarding our recommendations on more efficient use of their compliance-monitoring resources. They stated that they believe they do have a strategy for targeting resources through their use of EBT transaction data to identify suspicious transaction patterns. We believe that FNS has made good progress in its use of EBT transaction data. However, it is now at a point where it can begin to formulate more sophisticated analyses. For example, these analyses could combine EBT transaction data with other available data, such as information on stores with minimal inventory, to develop criteria to better and more quickly identify stores at risk of trafficking. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or other members of the Committee may have. For future contacts regarding this testimony, I can be contacted at (202) 512-7215. Key contributors to this testimony were Diana Pietrowiak and Cathy Roark. Food Stamp Trafficking: FNS Could Enhance Program Integrity by Better Targeting Stores Likely to Traffic and Increasing Penalties. GAO-07-53. Washington, D.C.: October 13, 2006. Improper Payments: Federal and State Coordination Needed to Report National Improper Payment Estimates on Federal Programs. GAO-06-347. Washington, D.C.: April 14, 2006. Food Stamp Program: States Have Made Progress Reducing Payment Errors, and Further Challenges Remain. GAO-05-245. Washington, D.C.: May 5, 2005. Food Stamp Program: Farm Bill Options Ease Administrative Burden, but Opportunities Exist to Streamline Participant Reporting Rules among Programs. GAO-04-916. Washington, D.C.: September 16, 2004. Food Stamp Program: Steps Have Been Taken to Increase Participation of Working Families, but Better Tracking of Efforts Is Needed. GAO-04-346. Washington, D.C.: March 5, 2004. 
Financial Management: Coordinated Approach Needed to Address the Government’s Improper Payments Problems. GAO-02-749. Washington, D.C.: August 9, 2002. Food Stamp Program: States’ Use of Options and Waivers to Improve Program Administration and Promote Access. GAO-02-409. Washington, D.C.: February 22, 2002. Executive Guide: Strategies to Manage Improper Payments: Learning from Public and Private Sector Organizations. GAO-02-69G. Washington, D.C.: October 2001. Food Stamp Program: States Seek to Reduce Payment Errors and Program Complexity. GAO-01-272. Washington D.C.: January 19, 2001. Food Stamp Program: Better Use of Electronic Data Could Result in Disqualifying More Recipients Who Traffick Benefits. GAO/RCED-00-61. Washington D.C.: March 7, 2000. Food Assistance: Reducing the Trafficking of Food Stamp Benefits. GAO/T-RCED-00-250. Washington D.C.: July 19, 2000. Food Stamp Program: Information on Trafficking Food Stamp Benefits. GAO/RCED-98-77. Washington D.C.: March 26, 1998. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. Department of Agriculture's (USDA) Food Stamp Program is intended to help low-income individuals and families obtain a better diet by supplementing their income with benefits to purchase food. USDA's Food and Nutrition Service (FNS) and the states jointly implement the Food Stamp Program, which is to be reauthorized when it expires in fiscal year 2007. This testimony discusses our past work on two issues related to ensuring integrity of the program: (1) improper payments to food stamp participants, and (2) trafficking in food stamp benefits. This testimony is based on a May 2005 report on payment errors (GAO-05-245) and an October 2006 report on trafficking (GAO-07-53). For the payment error report, GAO analyzed program quality control data and interviewed program stakeholders, including state and local officials. For the trafficking report, GAO interviewed agency officials, visited field offices, conducted case file reviews, and analyzed data from the FNS retailer database. The national payment error rate for the Food Stamp Program combines states' overpayments and underpayments to program participants and has declined by about 40 percent between 1999 and 2005, from 9.86 percent to a record low of 5.84 percent, due in part to options made available to states that simplified program reporting rules. In 2005, the program made payment errors totaling about $1.7 billion. However, if the 1999 error rate had been in effect in 2005, program payment errors would have been $1.1 billion higher. FNS and the states we reviewed have taken several steps to improve food stamp payment accuracy, most of which are consistent with internal control practices known to reduce improper payments. These include practices to improve accountability, perform risk assessments, implement changes based on such assessments, and monitor program performance. 
FNS estimates indicate that the national rate of food stamp trafficking declined from about 3.8 cents per dollar of benefits redeemed in 1993 to about 1.0 cent per dollar during the years 2002 to 2005 and that trafficking occurs more frequently in smaller stores. FNS has taken advantage of electronic benefit transfer and other new technology to improve its ability to detect trafficking and disqualify retailers who traffic. Law enforcement agencies have investigated and referred for prosecution a decreasing number of traffickers; they are instead focusing their efforts on fewer high-impact investigations. Despite the progress FNS has made in combating retailer trafficking, the Food Stamp Program remains vulnerable because retailers can enter the program intending to traffic and do so, often without fear of severe criminal penalties, as the declining number of investigations referred for prosecution suggests. While both payment errors and trafficking of benefits have declined in a time of rising participation, ensuring program integrity remains a fundamental challenge facing the Food Stamp Program. To reduce program vulnerabilities and ensure limited compliance-monitoring resources are used efficiently, GAO recommended in its October 2006 trafficking report that FNS take additional steps to target and provide early oversight of stores most likely to traffic; develop a strategy to increase penalties for trafficking, working with the Inspector General as needed; and promote state efforts to pursue recipients suspected of trafficking. FNS generally agreed with GAO's findings, conclusions, and recommendations. However, FNS believes it does have a strategy for targeting resources through its use of food stamp transaction data to identify suspicious transaction patterns. GAO believes that FNS has made good progress in its use of these transaction data; however, it is now at a point where it can begin to formulate more sophisticated analyses.
DOD’s current space network is comprised of constellations of satellites, ground-based systems, and associated terminals and receivers. Among other things, these assets are used to perform surveillance and intelligence functions; detect and warn of attacks; provide communication services to DOD and other government users; provide positioning and precise timing data to U.S. forces as well as other national security, civil, and commercial users; and counter elements of an adversary’s space system. DOD categorizes these assets into four space mission areas—each with specific operational functions. (See table 1 for a description of space mission areas, operational functions, and related examples of systems and activities.) The Air Force is the primary procurer and operator of space systems. For fiscal years 2002 through 2007, the Air Force is expected to spend about 86 percent of total programmed space funding of about $165 billion, whereas the Navy, the Army, and other Defense agencies are expected to spend about 8 percent, 3 percent, and 3 percent, respectively. The space surveillance network and other space control systems, some of which are classified, are currently helping to protect and defend space assets or are under development. For example, the Space-Based Surveillance System is being developed to provide a constellation of satellites and other initiatives that will improve the timeliness and fidelity of space situational awareness information. The Rapid Attack Identification and Reporting System, also under development, is expected to ultimately provide notification to Air Force Space Command of threats (radio frequency and laser) impinging upon the right of friendly forces to use space. DOD’s space control mission, which endeavors to protect and defend U.S. space assets, is becoming increasingly important.
This importance was recognized by the Space Commission that was established by Congress in the National Defense Authorization Act for Fiscal Year 2000 to assess a variety of management and organizational issues relating to space activities in support of U.S. national security. Its principal findings were as follows: While the commission recognized that organization and management are important, the critical need is national leadership to elevate U.S. national security space interests on the national security agenda. A number of disparate space activities should be merged, organizations realigned, lines of communication opened, and policies modified to achieve greater responsibility and accountability. The relationship between the officials primarily responsible for national security space programs is critical to the development and deployment of space capabilities. Therefore, they should work closely and effectively together to set and maintain the course for national security space programs. Finally, the United States will require superior space capabilities and a cadre of military and civilian talent in science, engineering, and systems operations to remain the world’s leading space-faring nation. Among other things, the Space Commission emphasized the importance of increasing the visibility and accountability of space funding. It also recommended that DOD pursue modernization of aging space systems, enhance its command and control structure, and evolve the surveillance system from cataloging and tracking to a system that could provide space situational awareness. We recently reported on the status of implementation of the Space Commission recommendations. We found that DOD has decided to take actions related to 10 of the commission’s 13 recommendations, including organizational changes aimed at consolidating some activities, changing chains of command, and modifying policies to achieve greater responsibility and accountability.
In addition, we have reported that, over the years, DOD’s space acquisition management approach has resulted in each of the services pursuing its own needs and priorities for space. This, in turn, has increased the risk that acquisitions will be redundant and not interoperable. Also, under this approach, there has been no assurance that the services as a whole are satisfying the requirements of the U.S. Space Command to the maximum extent practicable. DOD continues to face cost and schedule growth for some of its larger, more complex space system acquisitions primarily as a result of not having knowledge on the maturity of necessary technology before entering product development. DOD is now undertaking a wide range of efforts to strengthen its ability to protect and defend space-based assets. Some of these are focused solely on the space control mission while others are broader efforts aimed at strengthening space-related capabilities. The changes are intended to elevate the importance of space within the department; promote greater coordination on space-related activities both within and outside the department, particularly within the intelligence community; reduce redundant systems and capabilities while promoting interoperability; and enable the department to better prioritize space-related activities. At the same time, DOD is making changes to its acquisition and oversight policies that will affect how space programs are developed and managed. Specifically, the U.S. Space Command is developing a space control strategy that is to outline objectives for space control over the next 20 years. Concurrently, DOD is developing a national security space plan that will lay down broader objectives and priorities for space-based programs.
As the future executive agent for space, the Air Force created an office to develop and implement the national security space plan but has yet to finalize plans for the organizational realignment of the office of the National Security Space Architect. The National Security Space Architect is responsible for developing architectures—frameworks that identify sets of capabilities—across the full range of DOD and intelligence community space mission areas. In addition, DOD is making changes to its budgeting process to gain greater visibility over space-related spending and has created a “virtual” space major force program for the purpose of identifying what funding is specifically directed toward space efforts. The virtual major force program identifies spending on space activities within other major force programs. This does not change the current process that the military services use to fund their own space programs, but it does aggregate space funding so that the department will be able to compare space funding to DOD’s total budget and conduct future trend analyses. Moreover, DOD will be able to identify space control funding from other space-related activities. Lastly, DOD has made changes to its acquisition policy that will affect how space systems are acquired and managed. These changes focus on making sure technologies are demonstrated at a high level of maturity before beginning product development as well as taking an evolutionary, or phased, approach for producing a system. The Air Force is also implementing a new acquisition oversight mechanism for space intended to streamline the time it takes to review and approve a program before moving onto a subsequent stage of development. Table 2 describes some of DOD’s efforts related to strengthening space control in more detail.
DOD’s efforts to strengthen its management and organization of space activities, including space control, are a good step forward, particularly because they seek to promote better coordination among the services involved in space, prioritization of space-related projects, visibility over funding, and interoperability. But there are substantial planning and acquisition challenges involved in making DOD’s current space control efforts successful. The Space Commission recognized that stronger DOD-wide leadership and increased accountability were essential to developing a coherent space program. As noted above, one effort to provide stronger leadership and accountability is the development of a space control strategy. Completion of this strategy is a considerable challenge for DOD because it has not yet been aligned with other strategies still being revised and because agreement among the military services on specific roles, responsibilities, priorities, milestones, and end states may prove difficult to achieve. In February 2001, a draft of the space control strategy, prepared by U.S. Space Command, was submitted to the Chairman of the Joint Chiefs of Staff for review, refinement, and submission to the Secretary of Defense. In June 2001, the Chairman stated that it was important that the space control strategy be put on hold until it could be aligned with the national security and national military strategies that were being updated before official submission to the Secretary of Defense. Also, the space control strategy was drafted initially without the benefit of the broader national security space plan to use as a foundation for setting priorities, objectives, and goals. The National Security Space Integration Office expects to complete the space plan in the summer of 2002; however, there are indications that the plan may not be completed until 2003. 
Whenever the plan is completed, DOD would then have to reexamine the draft space control strategy to ensure alignment with the broader plan. Currently, the services are not satisfied with the draft strategy. Army, Navy, and Air Force officials told us that the draft was not specific enough in terms of what their own responsibilities are going to be and what DOD’s priorities are going to be. They also pointed out that there were no specific milestones, only a rough 20-year time frame for achieving a “robust and wholly integrated suite of capabilities in space.” Without more specifics in this area, DOD would not be able to measure its progress in achieving goals. According to a U.S. Space Command official, although a final date for issuing the strategy is unknown, comments from the services have been incorporated where appropriate and additional detail has been added to reflect changes in DOD terminology. Without knowing more details, service officials said that they would continue pursuing their own space control programs as they have been. In fact, two services—the Air Force and the Army—have already set their own priorities for space control. For example, Air Force Space Command, in its Strategic Master Plan, lists its first priority under space control as improving space surveillance capability to achieve real-time space situational awareness and provide this information to the warfighter. The Army’s Space Master Plan recognized shortfalls in the space control area and identified future operational capabilities for space control that include space-based laser, airborne laser and the congressionally-directed Kinetic Energy Anti-Satellite capability. Another issue that could affect accountability for space control is the lack of a DOD-wide investment plan for space control to guide the development of the services’ budget submissions. 
The Space Commission recognized that increasing funding visibility and accountability is essential to developing a coherent space program. According to the commission, for example, the current decentralized approach of funding satellites from one service’s budget and terminals from another’s can result in program disconnects and duplication. The newly implemented virtual major force program for space addresses the need for visibility into space funding across the services by aggregating most space funding by service and function. DOD officials stated that the first iteration of the virtual major force program captured a high percentage of space funding and will be fine-tuned in future years. The virtual major force program for space was designed to include program elements that represent space activities only. Funding for non-space-weapon systems that may have some space-related components (such as a Global Positioning System receiver in the bomb hardware of the Joint Direct Attack Munition bombs) is not included in the virtual major force program. Although the virtual major force program provides greater visibility into space funding, it is not intended to provide an investment plan for space. However, the space control systems and funding identified in the virtual major force program, along with priorities outlined in the space control strategy, could be used as a basis for developing an investment plan that would prioritize space control capabilities that DOD needs to develop.
Such a plan would benefit DOD by setting DOD-wide priorities and helping the services make decisions on meeting those priorities; including short-, mid-, and long-range time frames to make sure space control activities were carried out as envisioned in DOD’s overall goals and the national security space plan; establishing accountability mechanisms to make sure funding is targeted at priority areas; and providing the level of detail needed to avoid program disconnects and duplications. Developing such an investment plan for space control will be a considerable challenge for several reasons. First, it will require the services to forgo some of their authority to set priorities. Second, DOD will need to identify space capabilities that are scattered across programs and services, and in many instances, are even embedded in non-space-weapon systems. Finally, development of an investment plan for space control will require leadership on the part of the Air Force, as the executive agent for space, because such a plan will have to balance the needs and priorities of all of the services. The changes DOD has made to its acquisition policy embracing practices that characterize successful programs are a positive step that could be applied to the acquisition of space control systems. By separating technology development from product development (system integration and system demonstration) and encouraging an evolutionary approach, for example, the new policy would help to curb incentives to overpromise the capabilities of a new system and to rely on immature technologies. Moreover, decisionmakers would also have the means for deciding not to initiate a program if a match between requirements and available resources (time, technology, and funding) was not made. But, so far, DOD has been challenged in terms of successfully implementing acquisition practices that would reduce risks and result in better outcomes—particularly in some of its larger and more complex programs.
For example, in 1996, DOD designated the Space-Based Infrared System (SBIRS), consisting of a Low and High program, a Flagship program for incorporating a key acquisition reform initiative aimed at adopting successful practices that would develop systems that are generally simpler, easier to build, and more reliable, and that meet DOD needs. In 2001, we reported that the SBIRS Low program, in an attempt to deploy the system starting in fiscal year 2006 to support a missile defense capability for protecting the United States, was at high-risk of not delivering the system on time or at cost or with expected performance. In particular, we reported that five of six critical satellite technologies had been judged to be immature and would not be available when needed. As stressed in previous GAO reports, failure to make sure technologies are sufficiently mature before product development often results in increases in both product and long-term ownership costs, schedules delays, and compromised performance. The SBIRS Low program has recently undergone restructuring in an attempt to control escalating costs and get back on schedule. In 2001, we reported that the SBIRS High program was in jeopardy because (1) ground processing software might not be developed in time to support the first SBIRS High satellite, and (2) sensors and satellites might not be ready for launch as scheduled due to technical development problems. These difficulties increased the risk that the first launches of SBIRS High sensors and satellites would not occur on time and that mission requirements would not be met. The Under Secretary of the Air Force recently acknowledged that the SBIRS High program was allowed to move through programmatic milestones before the technology was ready. In addition, the Under Secretary of Defense for Acquisition, Technology and Logistics recommended modifications to the SBIRS High requirements to meet realistic cost and performance goals. 
As we recently testified, there are actions DOD can take to make sure that new acquisition policies produce better outcomes for acquisitions of space control systems (or any other space systems). These include structuring programs so that requirements will not outstrip available resources; establishing measures for success for each stage of the development process so that decisionmakers can be assured that sufficient knowledge exists about critical facets of a product before investing more time and money; and placing responsibility for making decisions squarely with those who have the authority to adhere to best practices and to make informed trade-off decisions. Our prior reports have recommended actions that DOD could take in these and other areas. DOD recognizes that space systems are playing an increasingly important role in DOD’s overall warfighting capability as well as the economy and the nation’s critical infrastructure. Its recent actions are intended to help elevate the importance of space within the Department, and also improve coordination, priority setting, and interoperability. But there are substantial challenges facing DOD’s efforts to achieve its objectives for space control. Principally, the services and the U.S. Space Command have not agreed to the specifics of a strategy, especially in terms of roles and responsibilities. DOD still lacks an investment plan that reflects DOD-wide space control priorities and can guide the development of the services’ budget submissions for space control systems and operations. Moreover, it is still questionable whether DOD can successfully apply best practices to its space control acquisitions. Clearly, success for space control will depend largely on the support of top leaders to set goals and priorities, ensure an overall investment plan meets those goals and priorities, as well as encourage implementation of best practices.
To better meet the challenges facing efforts to strengthen DOD’s space control mission, we recommend that the Secretary of Defense align the development of an integrated strategy with the overall goals and objectives of the National Security Space Strategy, when issued. The Secretary should also ensure that the following factors are considered in finalizing the integrated space control strategy: roles and responsibilities of the military services and other DOD organizations for conducting space control activities, priorities for meeting those space control requirements that are most essential for the warfighter, milestones for meeting established priorities, and end states necessary for meeting future military goals in space control. We further recommend that the Secretary of Defense develop an overall investment plan that supports future key goals, objectives, and capabilities that are needed to meet space control priorities; supports the end states identified in the integrated space control strategy; and is aligned with the overall goals and objectives of the national security space strategy. We received written comments on a draft of this report from the Secretary of Defense. DOD concurred with our findings and recommendations. It also offered additional technical comments and suggestions to clarify our draft report, which we incorporated as appropriate. DOD’s comments appear in appendix I. To identify DOD’s efforts to strengthen its ability to protect and defend its space assets and the challenges facing DOD in making those space control efforts successful, we reviewed the DOD Instruction for Space Control, U.S. Space Command’s draft Space Control Strategy, U.S. Space Command’s Long Range Plan, military service space master plans, DOD’s 1999 Space Policy, the Report of the Commission to Assess United States National Security Space Management and Organization, and the 2001 Quadrennial Defense Review.
We also reviewed national and DOD space policies and DOD’s Future Years Defense Program from fiscal year 2002 through 2007. To understand DOD’s efforts and challenges, we reviewed the draft space control strategy and held discussions with officials at the U.S. Space Command, Colorado Springs, Colorado. To gain a better understanding of how the services regarded the draft space control strategy and development of a corresponding investment plan, we held discussions with and obtained documentation from officials at the Air Force Space Command, Peterson Air Force Base, Colorado Springs, Colorado; Air Force Headquarters, Washington, D.C.; the Army Space and Missile Defense Command, Arlington, Virginia; the Naval Space Command Detachment, Peterson Air Force Base, Colorado Springs, Colorado; the Office of the Assistant Secretary of Defense for Command, Control, Communications and Intelligence; the Joint Staff; the Under Secretary of Defense Comptroller/Chief Financial Officer and Director, Program Analysis and Evaluation; the Office of the National Security Space Architect, Fairfax, Virginia; and RAND’s National Security Research Division, Washington, D.C. To identify the acquisition challenges, we reviewed prior GAO reports on practices characterizing successful acquisition programs and held discussions with DOD officials. Specifically, we held discussions with and obtained documentation from representatives of the Under Secretary of Defense for Acquisition, Technology, and Logistics and officials with the Air Force/National Reconnaissance Office Integration Planning Group. We performed our work from July 2001 through July 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of the Army, the Navy, and the Air Force; the Director of the Office of Management and Budget; and interested congressional committees. We will also make copies available to others on request.
The head of a federal agency is required under 31 U.S.C. 720 to submit a written statement of actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform no later than 60 days after the date of the report and to the Senate and House Committee on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report. In addition, the report will be available at no charge at the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-4841 or Jim Solomon at (303) 572-7315. The key contributors to this report are acknowledged in appendix II. Key contributors to this report were Cristina Chaplain, Maricela Cherveny, Jean Harker, Art Gallegos, and Sonja Ware.
The United States is increasingly dependent on space for its security and well-being. The Department of Defense's (DOD) space systems collect information on capabilities and intentions of potential adversaries. They enable military forces to be warned of a missile attack and to communicate and navigate while avoiding hostile action. DOD's efforts to strengthen space control are targeted at seeking to promote better coordination among DOD components, prioritization of projects, visibility and accountability over funding, and interoperability among systems. Among other things, DOD is drafting a space control strategy that is to outline objectives, tasks, and capabilities for the next 20 years. It has also aggregated funding for space programs so that it can compare space funding, including space control funding, to its total budget, make decisions about priorities, and conduct future-trend analyses. In addition, DOD has changed its acquisition policy to include separating technology development from product development and encouraging an evolutionary, or phased, approach to development. There are, however, substantial challenges to making DOD's space control efforts successful. One challenge is putting needed plans in place to provide direction and hold the services accountable for implementing departmentwide priorities for space control. Further, DOD's draft space control strategy has not been completed and does not yet define roles and responsibilities among the services, departmentwide priorities and end states, and concrete milestones. Finally, DOD's aggregation of space funding is not a plan that targets investments at priority areas for DOD overall.
As reflected by federal statutes and a number of executive orders, it is the policy of the federal government to encourage the participation of small businesses, including businesses owned and controlled by socially and economically disadvantaged individuals, in the performance of federal procurement contracts. The Small Business Act established SBA as an independent agency of the federal government to aid, counsel, assist, and protect the interests of small business concerns; preserve free competitive enterprise; and maintain and strengthen the overall economy of the nation. Among other things, the act sets a minimum governmentwide goal for small business participation of not less than 23 percent of the total value of all prime contract awards for each fiscal year and makes SBA responsible for reporting annually on agencies’ achievements on their procurement goals. The act authorizes the President to establish the annual governmentwide goals. To meet its responsibilities under the act, SBA negotiates annual procurement goals with each federal executive agency with the intent to ultimately achieve the 23 percent governmentwide goal. Some agencies have goals higher than 23 percent, while others may have goals that are lower than or equal to 23 percent—SBA negotiates all of them with the intent that the governmentwide small business participation rate will not be less than the goal of 23 percent. Among the agencies we reviewed for this report, annual small business procurement goals for fiscal year 2005 ranged from 16 percent (NASA) to 56 percent (Interior). DOD’s goal was set at 23 percent, Treasury’s at 24 percent, and HHS’s at 30 percent. The Small Business Act also sets annual prime contract dollar goals for participation by certain types of small businesses that agencies strive to meet as part of their efforts to meet their overall small business participation goal.
Specifically, these include goals for participation by SDBs (5 percent), businesses owned and controlled by women or service-disabled veterans (5 and 3 percent, respectively), and businesses located in historically underutilized business zones (HUBZones, 3 percent). In addition, Executive Order 13170 directs: “Each department or agency that contracts with businesses to develop advertising for the department or agency or to broadcast Federal advertising shall take an aggressive role in ensuring substantial minority-owned entities’ participation, including 8(a), SDB, and Minority Business Enterprise (MBE) in Federal advertising-related procurements.” The criteria for determining a firm’s status as an 8(a) or SDB are set forth in section 8 of the Small Business Act and related regulations, while the definition of MBE is set forth in Executive Order 11625. Specifically: The 8(a) program, authorized by section 8(a) of the Small Business Act, was created to help small disadvantaged businesses compete in and access the federal procurement market. Generally, in order to be certified under SBA’s 8(a) program, a firm must satisfy SBA’s applicable size standards, be owned and controlled by one or more socially and economically disadvantaged individuals who are citizens of the United States, and demonstrate potential for success. Black Americans, Hispanic Americans, Native Americans, and Asian Pacific Americans are presumptively socially disadvantaged for purposes of eligibility. The personal net worth of an individual claiming economic disadvantage must be less than $250,000 at the time of initial eligibility and less than $750,000 thereafter. To qualify for SDB certification, a firm must be owned and controlled by one or more socially and economically disadvantaged individuals or a designated community development organization.
Individuals presumed to be socially disadvantaged for purposes of the 8(a) program are also presumed to be socially disadvantaged for purposes of determining eligibility for SDB certification. In contrast to the 8(a) program applicants, businesses applying for SDB certification need not demonstrate potential for success, and the personal net worth of the owners may be up to $750,000 at the time of certification. SDBs are eligible for incentives such as price evaluation adjustments of up to 10 percent when bidding on federal contracts in certain industries. Prime contractors that achieve SDB subcontracting targets may receive evaluation credits for doing so. Section 8(a) firms automatically qualify as SDBs, but other firms may apply for SDB-only certification. Executive Order 11625 defines minority business enterprises as those businesses that are owned or controlled by one or more socially or economically disadvantaged persons. The order states that disadvantages could arise from cultural, racial, or chronic economic circumstances or background or from similar causes. Under the order, socially or economically disadvantaged individuals include, but are not limited to, African-Americans, Puerto Ricans, Spanish-speaking Americans, American Indians, Eskimos, and Aleuts. While the definition of “MBE” is similar to the definition of “socially and economically disadvantaged small business” for purposes of the 8(a) and federal SDB programs, unlike these programs, the order does not limit the term “MBE” to small businesses. Executive Order 13170 spells out specific responsibilities for SBA, OMB, and executive agencies with procurement authority. Generally, the order gives SBA responsibility for setting goals with agencies and publicly reporting the progress toward those goals. 
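The net-worth thresholds described above can be summarized in a small sketch. The function name and its inputs are hypothetical illustrations of only the dollar limits the report cites; actual 8(a) and SDB eligibility involves many additional statutory and regulatory tests (size standards, ownership and control, citizenship, and, for 8(a), potential for success):

```python
# Hypothetical, simplified sketch of the personal-net-worth limits cited in
# the text. This checks only the dollar thresholds, not full eligibility.

def meets_net_worth_test(program: str, net_worth: float, initial: bool) -> bool:
    """Return True if the owner's personal net worth is under the cited limit."""
    if program == "8(a)":
        # $250,000 at initial eligibility; $750,000 thereafter
        limit = 250_000 if initial else 750_000
    elif program == "SDB":
        # Up to $750,000 at the time of SDB certification
        limit = 750_000
    else:
        raise ValueError(f"unknown program: {program}")
    return net_worth < limit
```

As the text notes, the SDB threshold is looser at entry: an owner with, say, $300,000 in net worth would fail the initial 8(a) limit while still falling under the SDB certification limit.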
Although the order gives OMB general oversight responsibility for implementing the order, OMB and SBA officials told us that the two agencies had agreed that SBA would take on the oversight responsibilities because SBA already had programs in place to oversee the small business programs of federal agencies. Section 2(b) of the order directed federal agencies with procurement authority to develop long-term, comprehensive plans to, among other things, aggressively seek to ensure that businesses classified as 8(a), small disadvantaged, and minority-owned are aware of contracting opportunities and report annually on efforts to increase utilization of these businesses. Section 2(b) also directed OMB to review each of these plans and report to the President on the sufficiency of each plan to carry out the terms of the executive order. The federal government awards contracts for advertising-related services for a variety of reasons, but primarily to deliver messages about its programs and services. According to Advertising Age, the largest amount of federal advertising spending goes to procure television and magazine advertising. Within the federal government, as we noted earlier, the biggest buyer of these services is DOD, which is very often doing so as part of ongoing recruiting campaigns by the military services. Additionally, for example, the Treasury’s Bureau of Engraving and Printing procures the services of an advertising firm to promote public awareness and acceptance of changes to U.S. currency (e.g., the introduction of the redesigned currency). Similarly, NASA uses advertising firms to help plan and carry out a variety of events held around the country intended to publicize its programs and ongoing space research as well as to support internal purposes, such as organizing off-site conferences.
The five agencies we reviewed implemented Executive Order 13170 primarily by continuing their existing efforts to broadly identify potential contracting opportunities with all types of small businesses, while three of the agencies addressed section 4 of the order by initiating new actions specific to advertising-related contracts. For example, HHS and NASA cited ongoing training efforts directed to procurement staff or small businesses as one way their agencies addressed the order. The five agencies’ focus on ongoing efforts was consistent with SBA’s and OMB’s views that several provisions of the order duplicated program requirements under existing legislation. Specific to advertising, Treasury officials indicated that the agency was building on existing relationships with trade associations in order to identify advertising contracting opportunities for SDBs. Earlier this year, Treasury also established new outreach efforts and reporting requirements for advertising contracts with 8(a), SDB, and minority-owned businesses. Rather than developing plans focused specifically on section 4 of the executive order, the five agencies we reviewed generally said that they already had programs in place to address similar requirements in previous legislation and that these activities were consistent with the expectations of the order. In response to the order, agencies generally reemphasized to procurement officers in subagencies around the country (who are responsible for awarding contracts) each agency’s small business program policies and goals. While not directed specifically toward advertising contracts, these existing programs were designed to encourage the participation of small and minority-owned businesses in federal procurement. Treasury, HHS, and NASA spelled out their strategies for addressing the executive order in written implementation plans that they prepared pursuant to the requirements of section 2(b) of the order.
DOD and Interior did not, as directed, prepare such plans at that time, but agency officials from those departments described to us the efforts they undertook. More specifically, to ensure that 8(a)s, SDBs, and minority-owned businesses were aware of contracting opportunities:

Treasury indicated in its implementation plan that it would continue to maintain a Web site for small business procurement and would post annual forecasts of contracting opportunities there. Further, the agency stated that it would publicize contracting opportunities in the Commerce Business Daily and FedBizOpps (an Internet-based point-of-entry for federal government procurement opportunities) and use its existing relationship with a variety of trade associations to foster the development of small minority-owned and women-owned businesses to increase awareness of contracting opportunities.

HHS’s plan stated that the agency would continue to train program and procurement officials through the HHS Acquisition Training Program on policies that affect federal procurement awards to 8(a), small disadvantaged, and minority-owned businesses.

NASA’s plan stated that it would continue to provide a 3-day course, “Training and Development of Small Businesses in Advanced Technologies,” that was designed to increase the knowledge base of small businesses—including disadvantaged, 8(a), and women-owned businesses, and minority educational institutions—by improving their ability to compete for contracts in NASA’s technical and complex environment.

Interior officials told us that they had disseminated information on future contracting opportunities to small businesses through the Internet, developed an advanced procurement plan, and conducted quarterly outreach meetings with potential small business contractors.

DOD officials noted that the department’s Small Business Program adhered to the requirements set forth in the Small Business Act and other applicable statutory provisions and federal regulations.
The officials further explained that they used FPDS-NG and the department’s internal database to monitor DOD’s progress toward meeting its small business program goals. The five agencies’ focus on enhancing their ongoing small business procurement programs to address section 4 of Executive Order 13170 was consistent with SBA’s and OMB’s views that some requirements in the order reflected previous legislation. Specifically, officials from SBA and OMB told us that several provisions of the executive order paralleled procurement program requirements under the Small Business Act and other existing legislation. As a result, these two agencies, which were assigned certain oversight and reporting responsibilities in section 2 of the order, agreed that SBA should address such responsibilities as part of its ongoing oversight activities under the Small Business Act. For example, the order requires SBA to conduct semiannual evaluations of the achievements in meeting governmentwide prime and subcontracting goals and the actual prime and subcontract awards to 8(a)s and SDBs for each agency and to make the information publicly available. However, prior to the issuance of the order, SBA was already evaluating awards to SDBs and publishing the information in its annual reports on the goals and achievements of each agency’s procurement efforts. SBA’s goaling requirements were previously established by the Small Business Act. Our comparison of the order to existing legislation also showed that almost all of the requirements in the order had already been reflected in previous legislation. For example, the order required agencies to ensure that minority-owned businesses are aware of future prime contracting opportunities, an existing requirement under the Small Business Act and the Federal Acquisition Regulation (FAR). 
Similarly, the order requires that the directors of the Offices of Small and Disadvantaged Business Utilization (OSDBU) carry out their responsibilities to maximize the participation of 8(a)s and SDBs in federal procurement, a requirement that was previously set forth in the Small Business Act. Specifically, the Small Business Act requires each covered agency to establish an OSDBU to be responsible for, among other things, the implementation and execution of the functions and duties under the sections of the act that pertain to the 8(a) and SDB programs in each agency. We found that many of the activities mentioned in the agencies’ implementation plans or described to us highlighted actions that the agencies already had in place in their small business programs. Although agency officials at all five agencies indicated that their current small business programs broadly addressed procuring services from 8(a)s, SDBs, and minority-owned businesses, including advertising-related services, three of the five agencies we reviewed—HHS, Treasury, and Interior—planned new activities to increase federal advertising contracting opportunities for these businesses. For example, HHS stated in its implementation plan that it would make every effort to develop alternative strategies to maximize small and minority business participation in its advertising contracts at both the prime contracting and subcontracting levels. HHS also directed staff from its OSDBU to work with its operational divisions to ensure that all advertising efforts were properly structured under the Federal Acquisition Regulation. In order to direct federal advertising procurement opportunities to 8(a)s, SDBs, and minority-owned businesses, Treasury stated in its implementation plan that it would identify contracting opportunities for SDBs in advertising and information technology by building on its existing relationships with trade associations. 
Treasury had previously established a memorandum of understanding (MOU) with several trade associations, including the Minority Business Summit Committee and the U.S. Pan Asian American Chamber of Commerce. Treasury intended the MOU to foster an environment that would allow small and minority-owned firms to compete successfully for Treasury contracts and subcontracts. According to Treasury officials, the focus on advertising in Executive Order 13170 allowed the department to leverage an existing program by adding a component specifically for advertising and information technology services. In addition to the efforts to partner with trade organizations that it began in 2001, Treasury issued an acquisition bulletin in January 2007 establishing additional outreach efforts and reporting requirements relating to the procurement of federal advertising services from 8(a)s, SDBs, and minority-owned businesses. The bulletin requires (1) small business specialists located at Treasury’s bureaus to use databases and other sources to identify minority-owned entities to solicit for advertising-related services, and (2) Treasury’s bureaus to report all contract actions related to federal advertising to the Office of Procurement Executive for contracts awarded from March 1, 2007, through September 30, 2007. Interior, which had previously relied on its existing small business program to address the order, is currently drafting an implementation plan that will, according to the department’s OSDBU officials, propose activities to increase advertising opportunities for 8(a)s, SDBs, and minority-owned businesses. Interior plans to convey through its efforts that the department’s OSDBU is available to make the process of doing business with Interior simpler and more consistent across Interior’s component subagencies. Interior plans to target all small businesses whose owners include representatives from socioeconomic groups identified as disadvantaged in the Small Business Act.
Overall, from fiscal years 2001 through 2005, 8(a), small disadvantaged, and minority-owned businesses received about 5 percent of the $4.3 billion in advertising-related obligations awarded by DOD, Interior, HHS, Treasury, and NASA. These businesses accounted for 12 percent of the contract actions that the five agencies awarded, but the percentages the agencies awarded varied substantially. For example, Treasury awarded less than 2 percent of its advertising-related dollars to 8(a)s, SDBs, and minority-owned businesses over the 5-year period, while HHS awarded about 25 percent to these business types. Advertising dollars also varied from one year to the next at individual agencies, sometimes significantly, primarily because of large advertising campaigns that the respective agencies undertook to publicize new programs or promote their mission (e.g., public health). The extent to which agencies’ yearly increases in overall advertising obligations affected obligations to 8(a), small disadvantaged, and minority-owned firms also varied. According to federal procurement data, the federal government obligated about $4.8 billion to contractors for advertising-related services from fiscal years 2001 through 2005. During this period, the five agencies obligated $4.3 billion for advertising-related services—about 92 percent of total advertising-related obligations for the federal government (this amount consists of $3.4 billion of their own funds, and another $919 million on behalf of other agencies). As shown in figure 1, DOD accounted for over half of all advertising-related obligations during this period. From fiscal years 2001 through 2005, the five agencies we reviewed collectively obligated about $218 million to businesses designated as 8(a), small disadvantaged, or minority-owned; individually, their utilization of these businesses varied widely (fig. 2).
Specifically, HHS awarded the highest dollar amount to 8(a), SDB, and minority-owned businesses during the 5-year period—about $122 million—and NASA awarded the highest percentage of its dollars to these businesses—89 percent, or about $41 million. During this period, the five agencies awarded a total of 6,279 contract actions, about 12 percent of which (725) were awarded to businesses with these designations (fig. 3). Individually, the extent to which agencies awarded contract actions to 8(a), SDB, and minority-owned businesses varied widely, with Treasury awarding none on behalf of other agencies to these types of businesses and NASA awarding 45 percent. The number of contract actions awarded to 8(a)s, SDBs, and minority-owned businesses ranged from 0 at Treasury (administered for other agencies) to 449 at DOD. Individually, the agencies we reviewed awarded different percentages of their advertising-related contracting dollars and actions to these types of small businesses. For example, on average, NASA awarded more than 80 percent of its total advertising-related obligations to businesses in each of these categories for the 5-year period. Except for 8(a)s and SDBs in 2001, NASA consistently awarded 66 percent or more of its advertising-related obligations to the three types of businesses. In contrast, Treasury and DOD on average awarded 1.7 percent or less of their advertising-related obligations to 8(a), SDB, or minority-owned businesses. Figure 4 shows the amount of advertising-related obligations awarded by each agency to each of the three business types for 5 fiscal years as well as the total for the 5-year period. Similarly, figure 5 shows the number of advertising-related contract actions awarded by each agency to each of the three business types for each year and the total for the 5-year period.
Contracting dollars and actions awarded directly to businesses can be counted in more than one category, so the dollars and actions awarded to various types of small businesses are not mutually exclusive. As we noted earlier in this report, federal agencies award contracts for advertising-related services for a variety of reasons, the primary one being to deliver messages about the agencies’ programs and services. The advertising services that agencies procured ranged from recruiting and public service announcements to public relations. We found that advertising dollars for agencies sometimes varied significantly from one year to the next (table 1) and that these differences were mostly the result of large advertising campaigns specific to the individual agencies. While we noted the year-to-year variations in agencies’ overall advertising obligations, we also observed that these variations did not always translate into a direct effect on the share of agencies’ advertising obligations that went to 8(a)s, SDBs, or minority-owned businesses (fig. 4). For example, during the 5-year period under review, DOD showed an upward trend of increasing obligations for advertising-related procurement, with the largest increase occurring in fiscal year 2005 (about 31 percent). DOD officials attributed the increase in advertising expenses to confronting the challenge of continuing to fill the military ranks with recruits and reenlistees in the midst of war. More specifically, during fiscal year 2005 DOD awarded multiple actions on four ongoing unrelated large contracts with obligations ranging from about $60 million to $175 million. None of these new fiscal year 2005 contract actions used 8(a)s, SDBs, or minority-owned businesses. Overall, however, DOD increased its advertising-related obligations to each of the three business types (from 2004 to 2005).
HHS officials told us that HHS’s higher advertising-related obligations in fiscal years 2001 and 2003 were mostly attributable to the Centers for Disease Control and Prevention’s (CDC) development of a Youth National Media Program. In support of this initiative, two advertising programs were conceived: the National Youth Media Campaign and the Targeted Communities-Youth Media Campaign. HHS obligated $151 million for these two campaigns. A portion of these obligations—just over $48 million—was awarded to two minority-owned businesses, one of which was also certified as an SDB. These campaigns included HHS’s VERB advertisement targeted toward children and teens to promote more physical activity (fig. 6). The increase in HHS’s overall advertising obligations from 2002 to 2003 did not have a uniformly similar effect on the obligations the agency directed to 8(a)s, SDBs, and minority-owned firms, even though, as we note, some of the spending for the youth initiative was directed to small disadvantaged and minority-owned firms. Specifically, as a percentage of the agency’s total advertising obligations, HHS’s total obligations to 8(a) and minority-owned firms decreased from 2002 to 2003, and its obligations to SDBs showed an increase of less than 1 percent. About 84 percent of NASA’s obligations were related to two contracts. The first of these was a contract NASA’s Langley Research Center awarded in fiscal year 2001 to an SDB for a variety of public relations activities. For example, this contractor provided services such as preparing written and photographic materials for media and selected internal and external communications programs and administrative support for outreach and special events. The contractor also developed, installed, and maintained exhibits for events and provided logistical and general services to plan and conduct conferences, symposia, peer reviews, and workshops for off-site conferences and events.
From fiscal years 2001 through 2005, NASA obligated over $5 million for this contract. The second NASA contract was awarded in fiscal year 2002 as a multiple-year contract by its Marshall Space Flight Center. The contract was originally awarded to a business classified as minority-owned that was later admitted to the 8(a) and SDB programs. The contractor was primarily responsible for providing support services to human resources, educational programs, government and community relations, public exhibits, internal communications, employee training, and organizational development. More recently, the contractor has helped with several public events, including an air show called Thunder in the Valley in Columbus, Georgia, in March 2007 and the X-Prize Cup in Las Cruces, New Mexico, in October 2006. Between fiscal years 2002 and 2005, NASA obligated over $33 million for this contract. Generally, agency officials told us that the procurement decisions reflected in their overall contracting data and specifically the advertising contracting data we present here were more often driven by needs identified at the subagency or local area level than by the departments’ needs. These decisions are made by contracting officers at procurement offices, which are located around the country. For example, NASA has contracting officers at each of its 10 field centers. Each center specializes in different areas of research and technology specific to NASA. The Marshall Space Flight Center in Huntsville, Alabama, for example, focuses on space exploration, with specific emphasis on completing the international space station and returning to the moon. Contract-related needs for the Marshall Space Flight Center seek to advance technology in the space exploration area, and officials at the center identify contracting opportunities to meet those needs.
Officials in NASA’s OSDBU told us that they did not tell contracting officers what services should be directed to small businesses, because the businesses that were selected to provide a service were chosen based on need (as identified by the centers’ contracting officers and other officials) and ability to meet the center’s requirements. We provided SBA, OMB, GSA, DOD, Treasury, HHS, Interior, and NASA with a draft of this report for review and comment. After reviewing the report, all agencies responded that they did not have any comments, including any of a technical nature. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the Ranking Member of the Senate Committee on Small Business and Entrepreneurship, the Chair and Ranking Member of the House Committee on Small Business, and other interested congressional committees. In addition, we will send copies to the Secretaries of Defense, Treasury, Health and Human Services, and Interior, as well as NASA’s Administrator, the Administrator of General Services, the Administrator of the Small Business Administration, and the Director of the Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. 
In this report, we describe (1) strategies that the Departments of Defense (DOD), Interior, Health and Human Services (HHS), Treasury, and National Aeronautics and Space Administration (NASA) used to address section 4 of Executive Order 13170, and (2) the total obligations, the number of contract actions, and the percentage of total obligations represented by these contract actions that each of the five agencies awarded to businesses in the Small Business Administration’s (SBA) 8(a) and federal small disadvantaged business (SDB) programs and to minority-owned businesses for advertising-related services. We queried the Federal Procurement Data System-Next Generation (FPDS-NG) using the product service codes for advertising and public relations to identify advertising-related activity. Using these data, we judgmentally selected the four agencies that had obligated the most funds (DOD, Interior, HHS, and Treasury) and one that had a high participation of 8(a), small disadvantaged, and minority-owned businesses (NASA), based on identification in the FPDS-NG of awards to 8(a)s, SDBs, and minority-owned businesses for advertising-related contracts for fiscal years 2001 to 2005. In total, these agencies represented about 92 percent of all federal advertising-related obligations for this 5-year period. To describe strategies used by the five federal agencies to address section 4 of Executive Order 13170, we obtained documentation from the agencies outlining the actions they planned to take to implement the order, interviewed agency officials regarding their plans and actions taken, and compared both their planned actions as well as actions taken to the requirements presented in the order. We also interviewed officials at SBA and the Office of Management and Budget (OMB) regarding the oversight responsibilities each was given in implementing the order.
Furthermore, we identified a number of federal statutes and regulations pertaining to executive agency procurement and small business programs, including the Small Business Act and the Federal Acquisition Regulation, that were consistent with the requirements of section 2 of Executive Order 13170. To determine the total dollar amount for each of the five agencies’ advertising-related obligations for fiscal years 2001 through 2005 and the dollar amount and percentage for obligations and contract actions awarded to businesses designated as 8(a)s, SDBs, or minority-owned during that same time period, we extracted key data fields from FPDS-NG. These data fields included contracting department, procurement instrument identifier (contract/order number), advertising-related product or service codes (R701 and R708), SDB firm designation, 8(a) firm designation, minority-owned designation, and funding agency. We then analyzed the data to identify the total amount of advertising-related obligations for each agency for each fiscal year and the amount and percentage of the total for obligations and number of contract actions awarded to 8(a)s, SDBs, and minority-owned businesses. We did not use FPDS data from before fiscal year 2001 because agencies did not consistently report certain data elements important to our analysis, such as minority ownership. Even after the General Services Administration (GSA) upgraded the system to FPDS-NG in 2003 (to capture, among other things, a data element for minority ownership from fiscal year 2004 forward), agencies varied in the extent to which they modified earlier years’ data to reflect this information. In assessing the reliability of federal contracting data, we interviewed officials from GSA, the agency responsible for maintaining FPDS-NG. Additionally, we performed specific steps using the FPDS-NG data.
First, we compared FPDS-NG advertising totals to the Federal Procurement Data System, the previous governmentwide contracting system, for the five agencies for fiscal years 2001 through 2003. For the DOD data, we also compared FPDS-NG advertising totals to DD-350 (DOD’s internal contracting database) totals for fiscal years 2001 through 2005. In these comparisons, we found some differences between the databases that we determined could be attributed to the fact that FPDS-NG was a real-time system that allowed for editing and updates, such as updates to the primary purpose of a multiple-year contract in later years. FPDS and DD-350 did not allow for such real-time changes. On the basis of this assessment, we concluded that FPDS-NG data were sufficiently reliable for the purposes of our report. Next, we tested the reliability of the 8(a), SDB, and minority-owned designations in FPDS-NG. To do this, we electronically compared the FPDS-NG designations for 8(a) and SDB to SBA’s list of certified 8(a)s and SDBs. We found a small number of certified 8(a) businesses that were not designated as such in FPDS-NG. For DOD contractors that the DD-350 also identified as being 8(a), we modified our data to reflect the certified 8(a) status. To determine the reliability of the minority-owned data in FPDS-NG, we compared FPDS-NG contractor data for fiscal years 2004 and 2005 to the self-reported minority-owned designations in SBA’s Small Disadvantaged Businesses file and the Central Contractor Registration database (the DOD database that also serves as the primary vendor database for the U.S. government). We found that the number of minority-owned businesses that received advertising-related contracts from these five agencies was undercounted in the FPDS-NG for fiscal years 2004 and 2005. We could not determine the degree of undercounting because our analysis was not based on a sample that could be generalized to the population of advertising-related contractors.
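The electronic comparison described above can be sketched in a few lines of code. This is only an illustrative sketch: the record layout, field names, and vendor identifiers below are hypothetical assumptions, not GAO's actual data or tools.

```python
# Hypothetical sketch of the designation cross-check described above:
# find contractors on SBA's certified 8(a) list whose FPDS-NG records
# lack the 8(a) flag. Field names and IDs are illustrative only.

def find_missing_8a_flags(fpds_records, sba_certified_ids):
    """Return vendor IDs certified by SBA but not flagged as 8(a) in FPDS-NG."""
    flagged = {r["vendor_id"] for r in fpds_records if r["is_8a"]}
    return sorted(sba_certified_ids - flagged)

fpds_records = [
    {"vendor_id": "A100", "is_8a": True},
    {"vendor_id": "A200", "is_8a": False},  # certified by SBA, flag missing
    {"vendor_id": "A300", "is_8a": False},  # not certified
]
sba_certified_ids = {"A100", "A200"}

print(find_missing_8a_flags(fpds_records, sba_certified_ids))  # prints ['A200']
```

The same set-difference pattern would apply to the SDB and minority-owned comparisons against the Central Contractor Registration data, substituting the relevant designation field.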
Other than the minor differences that we found, we determined that the business designations were sufficiently reliable for the purposes of our report. We conducted our work in Washington, D.C., between October 2006 and June 2007 in accordance with generally accepted government auditing standards. In addition to the individual named above, Bill MacBlane, Assistant Director; Johnnie Barnes; Michelle Bracy; Emily Chalmers; Julia Kennon; Lynn Milan; Marc Molino; Omyra Ramsingh; and Rhonda Rose made key contributions to this report.
In 2005, federal spending on advertising exceeded $1 billion. Five agencies--DOD, Treasury, HHS, Interior, and NASA--together made up over 90 percent of this spending from 2001 to 2005. Executive Order 13170, signed in October 2000, directs agencies to take an aggressive role in ensuring substantial participation in federal advertising contracts by businesses in the Small Business Administration's (SBA) 8(a) and small disadvantaged business (SDB) programs and minority-owned businesses. This report describes (1) strategies DOD, HHS, Treasury, Interior, and NASA used to address Executive Order 13170, and (2) the total obligations, number of contract actions, and percentage of total obligations represented by these actions that each agency awarded to 8(a)s, SDBs, and minority-owned businesses for advertising services. In conducting this study, GAO analyzed agency contracting data and executive order implementation plans and interviewed agency procurement officials. Because much of Executive Order 13170 was consistent with existing legislation, the five agencies we reviewed generally addressed the order's emphasis on advertising contracts by continuing existing programs designed to identify potential contracting opportunities with all types of small businesses. The five agencies' focus on ongoing efforts was consistent with SBA's and the Office of Management and Budget's (OMB) views that several provisions of the order paralleled procurement program requirements under the Small Business Act. Three agencies--HHS, Treasury, and Interior--also planned additional activities that targeted the agency's contracting efforts for advertising services. For example, one of Treasury's additional activities was to work with trade associations to identify opportunities for SDBs in advertising. 
From fiscal years 2001 through 2005, 8(a), SDB, and minority-owned businesses received about 5 percent of the $4.3 billion in advertising-related obligations of DOD, Treasury, HHS, Interior, and NASA and 12 percent of the contract actions that these agencies awarded; the percentages varied substantially among each of the five agencies. For example, Treasury awarded less than 2 percent of its advertising-related dollars to 8(a)s, SDBs, and minority-owned businesses collectively over the 5-year period, while NASA awarded about 89 percent to these types of businesses. Overall advertising obligations also varied from one year to the next at individual agencies, sometimes significantly. Year-to-year increases were driven by large campaigns that the respective agencies undertook to publicize new programs or promote their mission (e.g., public health). Agencies varied in the extent to which year-to-year increases in overall advertising obligations had a similar effect on obligations to 8(a), small disadvantaged, and minority-owned firms.
To obtain a full funding grant agreement, a project must first progress through a local or regional review of alternatives, develop preliminary engineering plans, and obtain FTA’s approval for final design. TEA-21 requires that FTA evaluate projects against “project justification” and “local financial commitment” criteria contained in the act (see fig. 1). FTA assesses the project justification and technical merits of a project proposal by reviewing the project’s mobility improvements, environmental benefits, cost-effectiveness, and operating efficiencies. In assessing a project’s local financial commitment, FTA assesses the project’s finance plan for evidence of stable and dependable financing sources to construct, maintain, and operate the proposed system or extension. Although FTA’s evaluation requirements existed prior to TEA-21, the act requires FTA to (1) develop a rating for each criterion as well as an overall rating of “highly recommended,” “recommended,” or “not recommended” and use these evaluations and ratings in approving projects’ advancement toward obtaining grant agreements; and (2) issue regulations on the evaluation and rating process. TEA-21 also directs FTA to use these evaluations and ratings to decide which projects to recommend to the Congress for funding in a report due each February. These funding recommendations are also reflected in DOT’s annual budget proposal. In the annual appropriations act for DOT, the Congress specifies the amounts of funding for individual New Starts projects. Historically, federal capital funding for transit systems, including the New Starts program, has largely supported rail systems. Under TEA-21 the FTA Capital Program has been split 40 percent/40 percent/20 percent among New Starts, Rail Modernization, and Bus Capital grants.
Although fixed-guideway bus projects are eligible under the New Starts program, relatively few bus-related projects are now being funded under this program. Although FTA has been faced with an impending transit budget crunch for several years, the agency is likely to end the TEA-21 authorization period with about $310 million in unused commitment authority if its proposed fiscal year 2003 budget is enacted. This will occur for several reasons. First, in fiscal year 2001, the Congress substantially increased FTA’s authority to commit future federal funding (referred to as contingent commitment authority). This allowed FTA to make an additional $500 million in future funding commitments. Without this action, FTA would have had insufficient commitment authority to fund all of the projects ready for a grant agreement. Second, to preserve commitment authority for future projects, FTA did not request any funding for preliminary engineering activities in the fiscal year 2002 and 2003 budget proposals. According to FTA, it had provided an average of $150 million a year for fiscal years 1998 through 2001 for projects’ preliminary engineering activities. Third, FTA took the following actions that had the effect of slowing the commitment of funds or making funds available for reallocation: FTA tightened its review of projects’ readiness and technical capacity. As a result, FTA recommended fewer projects for funding than expected for fiscal years 2002 and 2003. For example, only 2 of the 14 projects that FTA officials estimated last year would be ready for grant agreements are being proposed for funding commitments in fiscal year 2003. FTA increased its available commitment authority by $157 million by releasing amounts associated with a project in Los Angeles for which the federal funding commitment had been withdrawn.
Although the New Starts program will likely have unused commitment authority through fiscal year 2003, the carry-over commitments from existing grant agreements that will need to be funded during the next authorization period are substantial. FTA expects to enter the period likely covered by the next authorization (fiscal years 2004 through 2009) with over $3 billion in outstanding New Starts grant commitments. In addition, FTA has identified five projects estimated to cost $2.8 billion that will likely be ready for grant agreements in the next 2 years. If these projects receive grant agreements and the total authorization for the next program is $6.1 billion—the level authorized under TEA-21—most of those funds will be committed early in the authorization period, leaving numerous New Starts projects in the pipeline facing bleak federal funding possibilities. Some of the projects anticipated for the next authorization are so large they could have considerable impact on the overall New Starts program. For example, the New York Long Island Railroad East Side Access project may extend through multiple authorization periods. The current cost estimate for the East Side Access project is $4.4 billion, including a requested $2.2 billion in New Starts funds. By way of comparison, the East Side Access project would require about three times the total and three times the federal funding of the Bay Area Rapid Transit District Airport Extension project, which at about $1.5 billion was one of the largest projects under TEA-21. In order to manage the increasing demand for New Starts funding, several proposals have been made to limit the amount of New Starts funds that could be applied to a project, allowing more projects to receive funding. For instance, the President’s fiscal year 2002 budget recommended that federal New Starts funding be limited to 50 percent of project costs starting in fiscal year 2004.
(Currently, New Starts funding—and all federal funding—is capped at 80 percent.) A 50 percent New Starts cap would, in part, reflect a pattern that has emerged in the program. Currently, few projects are asking for the maximum 80 percent federal New Starts share, and many have already significantly increased the local share in order to be competitive under the New Starts program. In the last 10 years, the New Starts share for projects with grant agreements has been averaging about 50 percent. In April 2002, we estimated that a 50 percent cap on the New Starts share for projects with signed full funding grant agreements would have reduced the federal commitments to these projects by $650 million. Federal highway funds such as Congestion Mitigation and Air Quality funds can still be used to bring the total federal funding up to 80 percent. However, because federal highway funds are controlled by the states, using these funds for transit projects necessarily requires state-transit district cooperation. The potential effect of changing the federal share is not known. Whether a larger local match would discourage local planners from supporting transit is unclear, but local planners have expressed this concern. According to transit officials, some projects could accommodate a higher local match, but others would have to be modified, or even terminated. Another possibility is that transit agencies may look more aggressively for ways to contain project costs or search for lower cost transit options. With demand high for New Starts funds, a greater emphasis on lower cost options may help expand the benefits of federal funding for mass transit; Bus Rapid Transit shows promise in this area. Bus Rapid Transit involves coordinated improvements in a transit system’s infrastructure, equipment, operations, and technology that give preferential treatment to buses on urban roadways.
Bus Rapid Transit is not a single type of transit system; rather, it encompasses a variety of approaches, including (1) using buses on exclusive busways, (2) buses sharing HOV lanes with other vehicles, and (3) improving bus service on city arterial streets. Busways—special roadways designed for the exclusive use of buses—can be totally separate roadways or operate within highway rights-of-way separated from other traffic by barriers. Buses on HOV lanes operate on limited-access highways designed for long-distance commuters. Bus Rapid Transit on busways or HOV lanes is sometimes characterized by the addition of extensive park and ride facilities along with entrance and exit access for these lanes. Bus Rapid Transit systems using arterial streets may include lanes reserved for the exclusive use of buses and street enhancements that speed buses and improve service. During the review of Bus Rapid Transit systems that we completed last year, we found at least 17 cities in the United States were planning to incorporate aspects of Bus Rapid Transit into their operations. FTA has begun to support the Bus Rapid Transit concept and expand awareness of new ways to design and operate high capacity Bus Rapid Transit systems as an alternative to building Light Rail systems. Because Light Rail systems operate in both exclusive and shared right-of-way environments, the limits on their length and the frequency of service are stricter than those for heavy rail systems. Light Rail systems have gained popularity as a lower-cost option to heavy rail systems, and since 1980, Light Rail systems have opened in 13 cities. Our September 2001 report showed that all three types of Bus Rapid Transit systems generally had lower capital costs than Light Rail systems. On a per mile basis, the Bus Rapid Transit projects that we reviewed cost less on average to build than the Light Rail projects.
We examined 20 Bus Rapid Transit lines and 18 Light Rail lines and found Bus Rapid Transit capital costs averaged $13.5 million per mile for busways, $9.0 million per mile for buses on HOV lanes, and $680,000 per mile for buses on city streets, when adjusted to 2000 dollars. For the 18 Light Rail lines, capital costs averaged about $34.8 million per mile, ranging from $12.4 million to $118.8 million per mile, when adjusted to 2000 dollars. On a capital cost per mile basis, the three different types of Bus Rapid Transit systems have average capital costs that are 39 percent, 26 percent, and 2 percent of the average cost of the Light Rail systems we reviewed. The higher capital costs per mile for Light Rail systems are attributable to several factors. First, the Light Rail systems contain elements not required in the Bus Rapid Transit systems, such as train signal, communications, and electrical power systems with overhead wires to deliver power to trains. Light Rail also requires additional materials needed for the guideway—rail, ties, and track ballast. In addition, if a Light Rail maintenance facility does not exist, one must be built and equipped. Finally, Light Rail vehicles, while having higher carrying capacity than most buses, also cost more—about $2.5 million each. In contrast, according to transit industry consultants, a typical 40-foot transit bus costs about $283,000, and a higher-capacity bus costs about $420,000. However, buses that incorporate newer technologies for low emissions or that run on more than one fuel can cost more than $1 million each. We also analyzed operating costs for six cities that operated both Light Rail and some form of Bus Rapid Transit service. Whether Bus Rapid Transit or Light Rail had lower operating costs varied considerably from city to city and depended on what cost measure was used. In general, we did not find a systematic advantage for one mode over the other on operating costs. 
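The capital cost comparison above is straightforward arithmetic; a minimal sketch (not part of the report, using only the per-mile averages cited) reproduces the stated percentages:

```python
# Average capital cost per mile (millions of 2000 dollars), as cited above.
LIGHT_RAIL = 34.8
BRT_MODES = {
    "busway": 13.5,
    "buses on HOV lanes": 9.0,
    "buses on city streets": 0.68,
}

# Each BRT mode's average cost as a share of the Light Rail average,
# rounded to whole percentages as in the text (39%, 26%, and 2%).
for mode, cost in BRT_MODES.items():
    share = round(cost / LIGHT_RAIL * 100)
    print(f"{mode}: {share}% of Light Rail cost per mile")
```

Running the loop recovers 39 percent, 26 percent, and 2 percent, matching the figures in the report.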
The performance of the Bus Rapid Transit and Light Rail systems can be comparable. For example, in the six cities we reviewed that had both types of service, Bus Rapid Transit generally operated at higher speeds. In addition, the capacity of Bus Rapid Transit systems can be substantial; we did not see Light Rail having a significant capacity advantage over Bus Rapid Transit. For example, the highest ridership we found on a Light Rail line was on the Los Angeles Blue Line, with 57,000 riders per day. The highest Bus Rapid Transit ridership was also in Los Angeles on the Wilshire-Whittier line, with 56,000 riders per day. Most Light Rail lines in the United States carry about half the Los Angeles Blue Line ridership. Bus Rapid Transit and Light Rail each have a variety of other advantages and disadvantages. Bus Rapid Transit generally has the advantages of (1) being more flexible than Light Rail, (2) being able to phase in service rather than having to wait for an entire system to be built, and (3) being used as an interim system until Light Rail is built. Light Rail has advantages, according to transit officials, associated with increased economic development and improved community image, which they believe justify higher capital costs. However, building a Light Rail system can create a bias toward building additional rail lines in the future. Transit operators with experience in Bus Rapid Transit systems told us that one of the challenges faced by Bus Rapid Transit is the negative stigma potential riders attach to buses. Officials from FTA, academia, and private consulting firms also stated that bus service has a negative image, particularly when compared with rail service. Communities may prefer Light Rail systems in part because the public sees rail as faster, quieter, and less polluting than bus service, even though Bus Rapid Transit is designed to overcome those problems.
FTA officials said that the poor image of buses was probably the result of a history of slow bus service due to congested streets, slow boarding and fare collection, and traffic lights. FTA believes that this negative image can be improved over time through bus service that incorporates Bus Rapid Transit features. A number of barriers exist to funding improved bus systems such as Bus Rapid Transit. First, an extensive pipeline of projects already exists for the New Starts Program. Bus Rapid Transit is a relatively new concept, and many potential projects have not reached the point of being ready for funding consideration because many other rail projects are further along in development. As of March 2002, only 1 of the 29 New Starts projects with existing, pending, or proposed grant agreements uses Bus Rapid Transit, and 1 of the 5 other projects near approval plans to use Bus Rapid Transit. Some Bus Rapid Transit projects do not fit the exclusive right-of-way requirements of the New Starts Program and thus would not be eligible for funding consideration. FTA also administers a Bus Capital Program with half the funding level of the New Starts Program; however, the existing Bus Capital Program is made up of small grants to a large number of recipients, which limits the program’s usefulness for funding major projects. Although FTA is encouraging Bus Rapid Transit through a Demonstration Program, this program does not provide funding for construction but rather focuses on obtaining and sharing information on projects being pursued by local transit agencies. Eleven Bus Rapid Transit projects are associated with this demonstration program.
The Federal Transit Administration's (FTA) New Starts Program helps pay for designing and constructing rail, bus, and trolley projects through full funding grant agreements. The Transportation Equity Act for the 21st Century (TEA-21) authorized $6.1 billion in "guaranteed" funding for the New Starts program through fiscal year 2003. Although the level of New Starts funding is higher than ever, the demand for these resources is also extremely high. Given this high demand for new and expanded transit facilities across the nation, communities need to examine approaches that stretch the federal and local dollar yet still provide high quality transit services. Although FTA has been faced with an impending transit budget crunch for several years, it is likely to end the TEA-21 authorization period with $310 million in unused New Starts commitment authority if its proposed fiscal year 2003 budget is enacted. Bus Rapid Transit is designed to provide major improvements in the speed and reliability of bus service through barrier-separated busways, buses on High Occupancy Vehicle lanes, or improved service on arterial streets. GAO found that Bus Rapid Transit was a less expensive and more flexible approach than Light Rail service because buses can be rerouted more easily to accommodate changing travel patterns. However, transit officials also noted that buses have a poor public image. As a result, many transit planners are designing Bus Rapid Transit systems that offer service that will be an improvement over standard bus service (see GAO-02-603).
Private sector participation and investment in transit is not new. In the 1800s, the private sector played a central role in financing early transportation infrastructure development in the United States. For example, original sections of the New York City Subway were constructed from 1899 to 1904 by a public-private partnership. New York City sought private sector bids for the first four contracts to construct and finance segments of the initial subway system. Ultimately, a 50-year private sector lease to operate and maintain the system was used. Another example is the City of Chicago’s “L” transit system, which was built from the 1880s through the 1920s and operated by the Chicago Rapid Transit Company, a privately owned firm. The construction of the system was financed by the private sector. In the following years, transportation infrastructure development became almost wholly publicly funded. Conditions placed on federal transportation grants-in-aid limited private involvement in federally funded projects. More recently, there has been a move back toward policies that encourage more private and public blending of funding, responsibility, and control in transportation projects. The federal government has progressively relaxed restrictions on private participation in highway and transit projects serving public objectives. This change in federal policy toward considering transit projects that use alternative approaches has also created an opportunity for states to reexamine their own public-private partnership policies. Conventional transit projects generally follow a “design-bid-build” approach whereby the project sponsor contracts with separate entities for the discrete functions of a project, generally keeping much of the project responsibility and risk with the public sector.
FTA defines alternative approaches, including public-private partnerships, as those that increase the extent of private sector involvement beyond the conventional design-bid-build project delivery approach. These alternative approaches contemplate a single private sector entity being responsible and financially liable for performing all or a significant number of functions in connection with a project. In transferring responsibility and risk for multiple project elements to the private sector partner, the project sponsor often has less control over the procurement and the private sector partner may have the opportunity to earn a financial return commensurate with the risks it has assumed (see fig. 1). With these alternative approaches, many of the project risks that would normally be borne by the project sponsor in a design-bid-build approach are transferred to or shared with the private sector. Risk transfer involves assigning responsibility for a project risk in a contract so that the private sector is accountable for nonperformance or errors. Project sponsors can transfer a range of key project risks to the private sector, including those related to design, financing, construction performance and schedule, vehicle supply, maintenance, operations, and ridership. For example, design risk refers to whether an error causes delays or additional costs, or causes the project to fail to satisfy legal or other requirements. Ridership risk refers to whether the actual number of passengers on the transit system reaches forecasted levels. However, some risks may not be transferable. Much of the federal government’s share of new capital investment in mass transportation has come through FTA’s New Starts program. Through the New Starts program, FTA identifies and recommends new fixed-guideway transit projects—including heavy, light, and commuter rail, ferry, and certain bus projects—for federal funding.
Over the last decade, the New Starts program has provided state and local agencies with over $10 billion to help design and construct transit projects throughout the country and is FTA’s largest capital grant program for transit projects. Moreover, since the early 1970s, a significant portion of the federal government’s share of new capital investment in mass transportation has been initiated through the New Starts process, resulting in full funding grant agreements. FTA must prioritize transit projects for funding by evaluating, rating, and recommending potential projects on the basis of specific financial commitment and project justification criteria. Using criteria set by law, FTA evaluates potential transit projects and assigns ratings to them annually. These evaluation criteria reflect a range of benefits and effects of the proposed project, such as cost-effectiveness, as well as the ability of the project sponsor to fund the project and finance the continued operation of its transit system. FTA uses the evaluation and rating process to decide which projects to recommend to Congress for funding. As part of the New Starts process, FTA approves projects into three phases: preliminary engineering (in which the designs of project proposals are refined), final design (the end of project development in which final construction plans and cost estimates, among other activities, are completed), and construction (in which FTA awards the project a full funding grant agreement, providing a federal commitment of funds subject to the availability of appropriations) (see fig. 2). We have previously identified FTA’s New Starts program as a model for other federal transportation programs because of its use of a rigorous and systematic evaluation process to distinguish among proposed New Starts investments. However, we and other stakeholders and policymakers have also identified challenges facing the program.
Among these challenges is the need to streamline the New Starts project approval process. Our past reviews, for example, found that many project stakeholders thought that FTA’s process for evaluating New Starts projects was too time consuming, costly, and complex. The New Starts grant process is closely aligned with the conventional design-bid-build approach, whereby the project sponsor contracts with separate entities for the design and construction of the project. In 2005, Congress authorized FTA to establish the Public-Private Partnership Pilot Program to demonstrate (1) the advantages and disadvantages of transit projects that use alternative approaches for new fixed-guideway capital projects and (2) how FTA’s New Starts program can be modified or streamlined for these alternative approaches. The pilot program allows FTA to study projects that incorporate greater private sector involvement through alternative project delivery and financing approaches; integrate a sharing of project risk; and streamline design, construction, and operations and maintenance. FTA can designate up to three project sponsors for the pilot program. Projects selected under the pilot program will be eligible for a simplified and accelerated review process that is intended to substantially reduce the time and cost to the sponsors of New Starts projects. This can include major modifications of the requirements and oversight tools. For example, FTA may offer concurrent project approvals into preliminary engineering and final design. Further, FTA may modify its risk-assessment process—which aims to identify issues that could affect a project’s schedule or cost—as well as other project reviews. The modification of any of FTA’s New Starts requirements and oversight tools will be on a case-by-case basis if FTA determines enough risk is transferred to and equity capital is invested by the private sector. 
In addition to major modifications, FTA may also make use of other tools (not unique to the pilot program) to expedite the review process. These include Letters of No Prejudice that allow a project sponsor to incur costs with the understanding that these costs may be reimbursable as eligible expenses (or eligible for credit toward the local match) should FTA approve the project for funding at a later date. FTA can also use Letters of Intent to signal an intention to obligate federal funds at a later date when funds become available. Finally, Early Systems Work Agreements obligate a portion of a project’s federal funding so that project sponsors can begin preliminary project activities before a full funding grant agreement is awarded. FTA has employed a contractor to determine whether risk is effectively transferred from the public to private sector for its pilot program projects, and will consider private sector due diligence as a substitute for its own. From a public perspective, an important component of analyzing the potential benefits and limitations of greater private sector involvement is consideration of the public interest. Although no federal definition of the public interest in transportation exists, and no federal guidance identifies public interest considerations, the public interest in transit may be understood in terms of the many stakeholders in public-private partnerships, each of which may have its own interests. Stakeholders include public transit authorities, transit agency employees, mass transit users and members of the public who may be affected by ancillary effects of a transit public-private partnership or alternative project delivery approach, including users of bus and highways, special interest groups, and taxpayers in general. Moreover, defining the public interest is a function of scale and can differ based on the range of stakeholders in addition to the geographic and political domain considered.
For the purposes of its pilot program, FTA has stated that the public interest refers to the due diligence that FTA typically conducts as a public entity with a financial interest in a transit project. In the United States, the private sector has played a more limited role in the delivery and financing of transit projects than in some other countries. Since 2000, seven New Starts projects were completed using alternative approaches (see table 1). These projects have focused on delivery, rather than financing, and have used either the design-build or the design-build-operate-maintain delivery approach, in which the private sector role is to design and construct the project or to design, construct, operate, and maintain the project, respectively. In addition, to date, no completed New Starts projects have been privately financed and therefore, none of these projects have used private equity financing. However, there have been very few examples of completed non-New Starts-funded new fixed-guideway projects that have been privately financed. One project, the Las Vegas Monorail, a 4-mile fixed-guideway system serving the resort corridor along Las Vegas Boulevard in Nevada, was financed with tax-exempt revenue bonds issued through the state of Nevada and with contributions from the area resorts and hotels. As previously mentioned, Congress authorized FTA to establish its Public-Private Partnership Pilot Program to demonstrate the advantages and disadvantages of these approaches in transit. As established, the pilot program studies those projects that use alternative approaches that integrate a sharing of project risk and incorporate private equity capital in order to illustrate where FTA can grant greater flexibility of some of its New Starts requirements to projects within the pilot program. However, to date, only one of the pilot projects is expected to incorporate private equity capital.
FTA designated three project sponsors for its Public-Private Partnership Pilot Program in 2007: Bay Area Rapid Transit—The Oakland Airport Connector project is to be a 3.2-mile system that will connect the Oakland International Airport to the Bay Area Rapid Transit’s Coliseum Station and the rest of the transit system. In its original iteration, the Oakland Airport Connector planned on using a design-build-finance-operate-maintain project delivery approach that included private sector financing. However, lower-than-expected ridership predictions due to the economic climate, among other factors, led Bay Area Rapid Transit to move forward with a different alternative approach for its project—now design-build-operate-maintain—and undergo a new request for qualified bidders and request for proposals process. According to Bay Area Rapid Transit, a contract will be awarded in December 2009. Metropolitan Transit Authority of Harris County (Houston Metro)—North and Southeast Corridor projects are to provide improved access to Houston’s Central Business District. This project was also originally to use a design-build-finance-operate-maintain approach that included private sector financing, but no bidders on the project proposed an equity investment, so it is instead using a design-build-operate-maintain approach. Issues related to price and risk transference led Houston Metro to switch private partners and the new partner chose not to provide financing for the project. Groundbreaking for the construction of the two projects occurred in July 2009. Denver Regional Transportation District—East Corridor and Gold Line pilot projects are to connect the city’s main railway station with its airport and other parts of the city. The project is using a design-build-finance-operate-maintain approach, which includes financing by the private sector partner.
The private sector partner will be selected through a competitive proposal process to deliver and operate the project under a long-term agreement. In September 2009, Denver Regional Transportation District released a request for proposals to prequalified teams. One ongoing New Starts project did not apply to be part of the pilot program, but is using an alternative approach. The Dulles Silver Line is using the design-build approach with partial funding of the local share coming from area businesses generated through a tax-increment financing district to connect Washington, D.C., metropolitan area’s transit system with one of the area’s three major airports. In contrast, international project sponsors have delivered transit projects using a wider range of alternative approaches, including public-private partnerships, beyond the more commonly used design-build in the United States (see table 2). According to World Bank officials and a World Bank-sponsored report, transit public-private partnerships have been implemented in Australia, Brazil, Canada, France, Hong Kong, Malaysia, the Philippines, South Africa, Thailand, and the United Kingdom. Furthermore, international project sponsors have incorporated private equity investment financing for some of their projects. According to World Bank officials, the United Kingdom and Canada are leading countries for private equity investment in transit, and the United Kingdom has the most experience using different public-private partnership models. International projects also generally require a government subsidy to supplement farebox revenues for construction as well as operations and maintenance. Examples of several projects in the United Kingdom and Canada that we reviewed include the following: The Docklands Light Railway serves a redevelopment area east and southeast of London.
Transport for London, the public sector project sponsor, used three separate design-build-finance-maintain concession agreements to construct system extensions as well as a single franchise to operate trains over the entire system. All three extensions were financed in part or full using private equity investment, and the Lewisham Extension was the United Kingdom’s first transportation public-private partnership for both project delivery and financing. The Croydon Tramlink light rail project was a 99-year design-build-finance-operate-maintain agreement to develop the new system. In this project, payments to the private sector partner during operations were based entirely on ridership revenue, but the project sponsor retained the authority to set fares. The private sector partner faced financial difficulties, and the concession was ultimately bought by Transport for London. The Manchester Metrolink Phase II light rail project was a 17-year concession agreement wherein the private partner had responsibility to design, construct, finance, operate, and maintain this project. The project was designed to expand the Metrolink System in order to connect two of the city’s existing stations. The private partner provided over one-half of the project’s funding for construction. The public sector terminated the concession to further expand the system. The London Underground maintenance projects included agreements entered into between London Underground and two private sector partners to maintain and upgrade the system’s infrastructure, including track, tunnels, trains, and stations. In return, the private sector would receive periodic payments based on its performance. One of the two private sector partners subsequently went bankrupt, and the concession agreement was then taken over by Transport for London. The Nottingham Express Transit light rail project used a 27-year contract to design, build, finance, operate, and maintain a new transit line.
Payments to the private sector were based on performance and ridership revenue, meaning that the private sector assumed some risk that actual ridership would not reach forecasted levels. Along with this transfer of risk, the private sector was also given the ability to set fares. The project is in the ninth year of its contract. The Canada Line light rail project in the Vancouver area is a 35-year design-build-finance-operate-maintain concession agreement developed to link Vancouver with its international airport and neighboring employment and population centers in anticipation of the 2010 Winter Olympics. A separate entity was created to oversee the project's development and the private partner provided one-third of the project's funding, including private equity capital, in exchange for periodic payments based on performance and ridership. FTA's pilot program is expected to demonstrate potential benefits to using alternative approaches in transit. Project sponsors we interviewed cited a range of potential benefits, such as achieving cost and time savings, as well as potential advantages to the public sector, such as increased financing flexibility (see table 3). DOT outlined some of these same benefits and advantages in its 2007 Report to Congress on transit public-private partnerships and we similarly reported on them in 2008 for highway public-private partnerships. However, as we said then, benefits are not assured and should be evaluated by weighing them against potential costs and trade-offs. Among the benefits from using alternative approaches, project sponsors told us that they may better meet cost and schedule targets as well as achieve cost and time savings by transferring risks to the private sector.
With transit projects that use alternative approaches, project sponsors can transfer a range of key project risks to the private sector, such as those related to design and its effect on construction that would normally be borne by the project sponsor, so that the private sector is accountable for errors or nonperformance. By transferring these project risks, the project sponsor creates incentives for the private sector to keep the project on schedule and on budget as, for example, the private sector would be responsible for any excess costs incurred from design errors. In addition, when a project sponsor transfers multiple project risks to the private sector, it can potentially reduce the total cost and duration since a single contractor can concurrently perform project activities that would typically be carried out consecutively by multiple contractors under the conventional design-bid-build approach. Project sponsors, stakeholders, and transit experts we interviewed told us that potential cost and time savings can be key incentives for using alternative approaches. For example, FTA reported that Minnesota Metro Transit’s Hiawatha Corridor (one of the seven completed New Starts projects that used an alternative approach) was completed 12 months ahead of schedule compared to using the conventional design-bid-build approach by allowing design and construction schedules to overlap. This saved an estimated $25 million to $38 million since early completion led to avoided administration costs using a design-build alternative approach. Denver Regional Transportation District and the private sector completed the Transportation Expansion project 22 months ahead of schedule and within budget. In the United Kingdom, the three Docklands Light Railway extensions were built using design-build-finance-maintain approaches, and were completed 2 weeks to 2 months ahead of schedule. However, the use of alternative approaches does not guarantee cost and schedule benefits. 
For example, the design-build approach used by the South Florida Commuter Rail Upgrades saved 4 to 6 years by completing all upgrades as a single project, but incurred slightly higher costs than estimated for the conventional design-bid-build approach. Project sponsors may be able to benefit from certain efficiencies and service improvements by transferring long-term responsibility of transit operations and maintenance in addition to design and construction to the private sector. DOT’s 2007 Report to Congress on transit public-private partnerships stated that the private sector may be able to add value to transit projects through improved management and innovation in a project’s construction, maintenance, and operation. Project sponsors and stakeholders we interviewed stated that alternative approaches promote the use of performance measures (such as train capacity and frequency) rather than specific design details (such as the type of train). This allows the private sector to potentially generate and apply innovative solutions in the design of the transit system, adding value to the project. For example, because Denver Regional Transportation District’s Transportation Expansion Light Rail project (another of the seven New Starts projects) used a design-build approach, a lessons-learned report following the project’s completion stated that the project sponsor was able to incorporate 198 design modifications identified by the private sector partner during development to improve overall quality of the transit system while remaining on budget. A conventional design-bid-build contract is generally not flexible enough to allow for such design modifications without additional costs because contracts often specify the use of technical or other specifications. When the long-term responsibilities of transit operations and maintenance are transferred, the private sector potentially has a greater incentive to make efficient design decisions. 
This is because the private sector can be held responsible for the condition of a transit project for longer durations than under the conventional design-bid-build approach. Houston Metro officials told us that for an earlier project that used the conventional design-bid-build approach, the project's warranty terms did not hold the construction firm responsible long enough to cover defects such as faulty track and concrete. As a result, Houston Metro had to file claims to remedy these defects. Houston Metro officials stated that the agency chose to build its North and Southeast Corridor pilot project using a design-build-operate-maintain contract in part to hold the private sector entity responsible for the quality of the project's construction for a longer period of time. A greater private sector role in transit projects can also potentially offer certain advantages to the public sector, including increased financial flexibility and more predictable operations and maintenance funding. For example, Denver Regional Transportation District officials said that they will make payments tied to operations to the private sector over a number of years to, in part, pay for the private sector's partial financing for the East Corridor and Gold Line pilot projects. By using the design-build-finance-operate-maintain approach, Denver may have more financing flexibility by potentially extending the payments 20 years longer than if a bond were used and the private sector were not involved in financing the project. With a longer payment period, project stakeholders told us that the transit agency could conserve funds in the short term to help it construct other new transit projects on time. Additionally, alternative approaches may help ensure more predictable funding for maintenance and operations since these activities can be subject to unpredictable public sector budget cycles under the conventional design-bid-build approach.
Because alternative approaches for transit projects may include operations and maintenance standards in the contract, the private sector might be responsible for funding these activities within the overall contract price. FTA's pilot program is also expected to demonstrate the potential limitations to using alternative approaches in transit, including some of those addressed in DOT's 2007 Report to Congress on transit public-private partnerships (see table 4). One limitation is that some project risks should not be transferred to the private sector. For example, it may be too costly for project sponsors to transfer certain risks, such as ridership and environmental remediation, because the private sector may want to charge an additional premium to take them on. Ridership risk refers to whether the actual number of passengers achieves forecasted levels. According to officials we interviewed, environmental remediation risk refers to whether the cleanup of hazardous materials and other conditions at a project site leads to increased project costs or schedule delays, and can encompass conditions that are identified as well as those that are not identified during surveys of a project site. Past experience in projects demonstrates the difficulty of transferring these risks to the private sector. According to officials we interviewed, ridership risk may be difficult to transfer to the private sector if transit project sponsors are reluctant to forfeit full fare-setting authority. For example, Denver Regional Transportation District chose not to transfer ridership risk for its East Corridor and Gold Line pilot projects given that it wanted to retain the right to set fares in order to keep fares uniform systemwide. Another example is the United Kingdom's Croydon Tramlink project, which transferred ridership risk but not the ability to set fares.
Officials we interviewed stated that the private partner progressively faced financial difficulties due to low ridership revenue, which led to the collapse and ultimate buyback of the partnership by Transport for London. Additionally, if a transit project is built as an extension of an existing system, the private sector partner may not want to operate a single segment of a publicly owned system. According to officials, private investors are reluctant to assume ridership risk of any portion of a system operated by an entity they do not control. These officials said that in many cases, the private sector partner would need the authority to increase or decrease transit fares based on ridership trends and the number of transit users to assume greater ridership risk. However, because raising fares involves political considerations, including equity for low-income transit users, officials told us that most project sponsors retain the right to set fares and are unwilling to forfeit fare-setting control. Some project sponsors that have tried to transfer ridership risk while retaining fare-setting authority have run into difficulties. According to project sponsors and transit experts, the Bay Area Rapid Transit’s Oakland Airport Connector project initially tried but ultimately was unable to transfer ridership risk in part because the private sector concessionaire (under the project’s original iteration) would not have fare-setting authority. This was also the case with the Canada Line, where the agreement was structured to incorporate a limited transfer of ridership risk to the private sector partner. Although the project sponsor wanted to transfer full ridership risk to the concessionaire, it learned that private investors would not finance a deal with full ridership risk transfer due to their inability to control factors that influence ridership such as transit fares. 
As such, the project sponsor decided to transfer limited ridership risk to the private sector by basing 10 percent of its payments to the private sector partner during operations and maintenance on ridership figures. According to project sponsors, this transfer of ridership risk was done to induce the concessionaire to increase ridership by providing quality customer service. Officials we interviewed also stated that environmental remediation risks may be difficult to transfer to the private sector because of the additional premium the private sector charges to address unknown factors. Denver Regional Transportation District originally planned to transfer all environmental remediation risk for its East Corridor and Gold Line pilot projects’ long-term design-build-finance-operate-maintain concession. This caused the private sector to estimate a $25 million charge for taking on this risk, according to Denver Regional Transportation District officials we interviewed. When the project sponsor decided to retain one aspect of the environmental risk related to several unknown remediation elements, the private sector dropped the cost estimate of transferring the remaining environmental risk from $25 million to $9 million. Moreover, as we have previously reported regarding highway public-private partnerships, it may be inefficient and inappropriate for certain risks to be transferred to the private sector due to the costs and risks associated with environmental issues. Permitting requirements and other environmental risks may become too time-consuming and costly for the private sector to address and may best be retained by the public sector given its stewardship role within the government. 
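The Canada Line's limited ridership-risk transfer, with roughly 10 percent of operating-period payments tied to ridership and the remainder to performance, can be illustrated with a simple arithmetic sketch. The function name, dollar figures, and the cap at forecast below are illustrative assumptions for this report, not terms of the actual concession agreement.

```python
def concession_payment(base_payment, availability_score, actual_riders,
                       forecast_riders, ridership_share=0.10):
    """Split a periodic concession payment between performance and ridership.

    base_payment:       full payment if all targets are met (illustrative)
    availability_score: fraction of performance targets met (0.0 to 1.0)
    ridership_share:    fraction of the payment at risk to ridership (10
                        percent here, mirroring the share described above)
    """
    performance_part = base_payment * (1 - ridership_share) * availability_score
    # Assumed cap: the ridership-linked portion cannot exceed its full value
    # even if actual ridership beats the forecast.
    ridership_part = base_payment * ridership_share * min(
        actual_riders / forecast_riders, 1.0)
    return performance_part + ridership_part

# With all performance targets met but ridership at 80 percent of forecast,
# only the small ridership-linked share of the payment is reduced.
payment = concession_payment(1_000_000, 1.0, 80_000, 100_000)
```

Because only 10 percent of the payment rides on ridership, an operator that meets its performance targets loses just 2 percent of the base payment when ridership falls 20 percent short of forecast, which is consistent with officials describing the structure as an inducement to good customer service rather than a full risk transfer.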
According to officials we interviewed, although the Canada Line's concession agreement transferred all key construction risks (i.e., cost overruns) to the private sector, the public authority retained risks associated with permitting and other environmental risks such as unknown contaminated soils. Further, for one early highway public-private partnership in California, the project sponsor attempted to transfer environmental permitting risk to the private sector. However, the private sector partner spent more than $30 million over a 10-year period and never obtained final approval to proceed with construction. Another potential limitation in transit projects that use alternative approaches is the project sponsor's loss of control and reduced flexibility in transit operations. Because the transit project sponsor enters into a contractual agreement that gives the private partner a greater decision-making role, the project sponsor may lose some control over its ability to modify existing assets or implement plans to accommodate changes over time such as extensions, service changes, and technology upgrades. For example, in the United Kingdom, the project sponsor for Manchester Metrolink had to break two existing public-private partnership concession agreements to accommodate extensions to its system. Consultants to the Manchester project told us that breaking a concession agreement can be very expensive and can damage the relationship between the project sponsor and the private sector partner. Similarly, to accommodate increased ridership, the project sponsor for Docklands Light Railway decided to build platform expansions. However, the private sector partner was not willing to take on this additional work, requiring the project sponsor to take the extra steps to hire another party to build the platform extensions and negotiate the handover of the platforms to the private sector partner for maintenance.
Transit projects that use alternative approaches may also introduce transaction costs to the project sponsor through legal, financial, and administrative fees in addition to higher-priced financing in cases where the transit project is privately financed. According to officials we interviewed, transit public-private partnerships often require the advisory services of attorneys, financial experts, and private consultants to successfully execute the steps necessary to finalize the project’s agreement. These additional services and transaction fees represent additional public sector costs that the conventional project delivery approach may not necessarily require. For example, the project sponsor for the London Underground spent the equivalent of $112 million or approximately 1.1 percent of the concession agreement’s total price to cover legal expenses, financial services, and administrative fees. Officials we interviewed also stated that Denver Regional Transportation District anticipates spending $15 million in advisory fees for its East Corridor and Gold Line pilot projects’ request for proposals submittals. In addition to transaction costs, public-private partnerships incur added costs when the private sector provides the financing for the project. The municipal bond market in the United States generally provides public transit agencies a cheaper source of funding because they can borrow more cheaply than the private sector. Officials also stated that the effects of the recent economic recession and failed credit markets have stymied the private sector’s ability to raise revenues and provide affordable long-term debt for large transit projects due to tight lending conditions. 
While we have previously identified FTA's New Starts grant program—which funds new, large-scale transit projects—as a model for other federal transportation programs because of its use of a rigorous and systematic evaluation process to distinguish among proposed investments, the New Starts project approval process is not entirely compatible with transit projects that use alternative approaches in that the process is sequential and phased with approvals granted separately and at certain decision points. Therefore, the New Starts process serves as a potential barrier because transit projects that use alternative approaches often rely on the concurrent completion of project phases to meet cost and schedule targets and to accrue savings and other potential benefits. Congress recognized New Starts as a potential barrier, as it authorized FTA to establish a Public-Private Partnership Pilot Program in part to identify ways to streamline the process. According to DOT's 2007 Report to Congress as well as project sponsors, their advisors, and private sector partners, the New Starts project approval process, while appropriate for the type of transit projects that have been developed over several decades, poses particular challenges for project sponsors using alternative approaches for their transit projects. The challenges they raised include (1) delays, (2) additional costs, and (3) the loss of other potential benefits, such as enhanced efficiencies and improved quality. The sequential and phased New Starts project approval process can create schedule delays as project sponsors await federal approval. The amount of time it takes for FTA to determine whether a project can advance can be significant.
A 2007 study on the New Starts program by Deloitte, commissioned by FTA to review the New Starts process and identify opportunities for streamlining or simplifying the process, found that the New Starts process is perceived by project sponsors as intensive, lengthy, and burdensome. The Deloitte study found that FTA's prescribed review times of 30 and 120 days for entry into the preliminary engineering and final design phases, respectively, appear arbitrary and that actual review times are generally longer. In particular, the study found that FTA's risk-assessment process delayed project development. Consultants to the Dulles Silver Line project sponsor told us that through the New Starts process, FTA has complete control over a project's schedule, and project sponsors have to put project work on hold while waiting for FTA's approval to advance into the next project phase. They also told us that construction activities on the Dulles Silver Line could not begin until the approval of a full funding grant agreement—as design and construction activities cannot be completed at the same time—and so some of the time-savings benefits of the design-build approach were lost. For the East Corridor and Gold Line pilot projects, Denver Regional Transportation District officials also told us that since enough design work will be completed during the New Starts preliminary engineering phase to request bids from the private sector, no additional design work is needed during final design and construction of the project. However, Denver officials said that, as required by New Starts, they will again prepare the design documentation for the final design and full funding grant agreement approval phases, potentially contributing to schedule delays. FTA officials told us that the resubmission of the documentation is necessary because the private sector can bid to provide something different than what was agreed upon under preliminary engineering.
Houston Metro's private sector partner told us it would like to begin some construction activities on the North and Southeast Corridors, but will not be able to begin until a full funding grant agreement is awarded. As a result, the private sector partner has to delay its work until the funding process is completed. FTA officials responded that they allowed Houston Metro to carry out some construction activities in advance of their receiving a full funding grant agreement. Moreover, Houston Metro officials told us that FTA required them to submit and resubmit entire project documents multiple times, which led to delays. FTA officials told us the length of time for reviews depends on a number of factors, most importantly the completeness and accuracy of the project sponsor's submissions, and that project sponsors could help to avoid such delays by improving their submissions. For example, FTA officials stated that Houston Metro's projects have changed repeatedly, thus requiring multiple submittals. In addition to the costs of delays, the design of the New Starts project approval process—which is closely aligned with the conventional design-bid-build approach—may also contribute to additional project costs borne by the public sector when other alternative approaches are used. Project sponsors and other stakeholders for Denver Regional Transportation District's East Corridor and Gold Line pilot projects told us that the private sector must maintain its financial commitment to a project for up to several months to allow for FTA, Office of Management and Budget, and congressional review of the full funding grant agreement.
For example, Denver Regional Transportation District officials anticipate adhering to the sequential and phased New Starts approach for its projects in order to accommodate delays from waiting for the reauthorization of the existing transportation bill, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users, and awarding a full funding grant agreement for the project. However, Denver Regional Transportation District officials told us that following this approach will likely increase the cost of the project. FTA officials told us that these additional costs stem from a lack of funding available in a surface transportation authorization period rather than FTA's New Starts requirements. Additionally, for the Dulles Silver Line, tax-increment financing funding—funding from incremental tax revenue increases generated by new construction or rehabilitation projects around the new transit line—was a major funding source for the project, contributing up to $400 million to the $2.6 billion project. The Dulles Silver Line project consultants told us that the project risked losing the tax-increment financing funding as it took 5 years to receive a full funding grant agreement when the project sponsor originally estimated that it would take 2 to 3 years. FTA officials stated that several factors, including the decision to reexamine a tunnel option, contributed to challenges surrounding the Dulles Silver Line. FTA's New Starts project approval process may also limit other potential benefits, such as enhanced efficiencies and design improvements, when transit projects use alternative approaches.
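Tax-increment financing, as used on the Dulles Silver Line, pledges only the growth in tax revenue above a frozen baseline assessed value within the district. A minimal arithmetic sketch follows; the function name and dollar figures are illustrative assumptions, not the project's actual values.

```python
def tif_revenue(baseline_value, current_value, tax_rate):
    """Annual tax-increment financing revenue: the levy applied to
    assessed-value growth above the district's frozen baseline.
    If values do not grow, nothing is pledged to the project."""
    increment = max(current_value - baseline_value, 0)
    return increment * tax_rate

# Illustrative district: assessed value grows from $2.0B to $2.5B around the
# new line, so a 1 percent levy on the $500M increment funds the project.
annual = tif_revenue(2_000_000_000, 2_500_000_000, 0.01)
```

This dependence on value growth is also the mechanism's vulnerability noted above: when grant approval is delayed or development around the line stalls, the increment shrinks, and with it the project's local funding.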
For example, Denver Regional Transportation District officials told us that the New Starts project approval process requires that specific design details be included and that this requirement can prohibit a project sponsor from instead leaving such design specifications to the private sector, thus possibly limiting the ability to find innovative and cost-effective solutions for the project. When a project sponsor specifies the exact number of vehicles for the project, the private sector partners must incorporate that design detail into their scope, whether or not that exact number of vehicles is really needed. Due to the New Starts requirements, another project sponsor told us that it had been discouraged from using an alternative project delivery approach again after having what it believed to be a prior successful experience that included enhanced efficiencies and design improvements. A Minnesota Metro Transit official told us it initially wanted to use the design-build approach for its ongoing Central Corridor project based on the success of previously using this approach for the Hiawatha Corridor—a completed New Starts project that received a full funding grant agreement in 2000. However, Minnesota Metro Transit determined that it would have to complete 60 percent of the Central Corridor project's design to meet FTA's New Starts requirements for final design. DOT's 2007 Report to Congress also cited a similar challenge regarding project design requirements. These requirements are not consistent with alternative approaches where project sponsors look to involve the private sector after only one-third, for example, of the design work is completed. Therefore, Minnesota Metro Transit decided to use the conventional design-bid-build approach to construct the project.
In commenting on a draft of our report, FTA officials recognized that additional steps could be taken to facilitate alternative approaches to transit projects, but they also believe that other barriers beyond the federal approval process affect the use of these approaches, including some beyond the immediate reach of the program, such as reduced available private equity capital resulting from the recent economic recession. To address these challenges of the New Starts project approval process for transit projects that use alternative approaches, Congress and FTA have taken steps to streamline New Starts by establishing the Public-Private Partnership Pilot Program. To date, FTA has agreed to provide all three of the pilot program project sponsors with some level of relief, including expediting its risk assessment and providing Letters of No Prejudice earlier than traditionally allowed in the New Starts process to Houston Metro, and granting a waiver from federal performance bonding requirements to the Bay Area Rapid Transit Oakland Airport Connector pilot project, which FTA has also done for non-pilot program projects. FTA has also stated its amenability to waiving its risk assessment—which aims to identify issues that could affect a project's schedule or cost—and financial reviews, concurrently approving the project into the New Starts final design phase while awarding an Early Systems Work Agreement for Denver Regional Transportation District's East Corridor and Gold Line pilot projects. However, FTA has yet to grant the three pilot project sponsors any major streamlining modifications of the New Starts project approval process, such as the awarding of concurrent approvals into the New Starts phases, because FTA officials told us that none of the pilot projects has demonstrated a sufficient transfer of risk or financial investment by the private sector to enable FTA to relax its normal New Starts evaluation requirements for such approvals.
Thus far, FTA has only assessed the Houston Metro pilot project to determine the extent to which FTA could streamline the New Starts process. In its November 2008 report, FTA determined that it would not relax, modify, or waive its risk assessment and financial capacity reviews prior to advancement into final design because Houston Metro retains risks in a number of critical risk areas, including finance, since there is no equity capital investment by the private sector partner. Houston Metro officials said that they considered transferring more risk to the private sector to meet FTA's threshold to waive certain New Starts evaluation requirements, but decided against doing so for two reasons: they were concerned that having the private sector assume certain risks to meet FTA's threshold could increase private sector bids, and they believed they could still achieve some of the benefits of using an alternative approach without equity capital investment by the private sector. While it may be too early for FTA to grant major streamlining modifications with the other two pilot projects, FTA still has the ability as part of its pilot program to further experiment with the use of existing tools that could encourage a greater private sector role while continuing to balance the need to protect the public interest. FTA has the ability to use conditional approvals in the New Starts process, such as (1) Letters of Intent that announce FTA's intention to issue a full funding grant agreement that would in turn agree to obligate a New Starts project's full federal share from future available budget authority, subject to the availability of appropriations, provided that a project meets all the terms of a full funding grant agreement and (2) Early Systems Work Agreements that obligate only a portion of a New Starts project's federal share for preliminary project activities, such as land acquisition.
Over the past 30 years, FTA has made very limited use of these tools, granting only three Letters of Intent and four Early Systems Work Agreements to transit projects. The Deloitte study noted that New Starts project sponsors miss the opportunity to use alternative methods including design-build and design-build-finance-operate-maintain because of the lack of early commitment of federal funding for the projects, suggesting that the greater use of these tools could be beneficial. However, use of these tools is not without risk. We have previously noted limitations to FTA making greater use of these tools; for example, Letters of Intent could be misinterpreted as an obligation of federal funds when they only signal FTA's intention to obligate future funds. Furthermore, Early Systems Work Agreements require a project to have a record of decision for the environmental review process that must be completed under the National Environmental Policy Act and require the Secretary to find that a full funding grant agreement for the project will be made and that the agreement will promote more-rapid and less-costly completion of the project. Finally, under current statute, both of these tools—Letters of Intent and Early Systems Work Agreements—count against FTA's available funding for New Starts projects under the current surface transportation authorization. We found that the governments of the United Kingdom and Canada use conditional approvals to help encourage a greater private sector role in transit projects. The United Kingdom's Department for Transport grants a conditional approval announcing the government's intent to fund a project before it receives private sector bids provided that cost, risk transference, and scope do not change. If those conditions are not met, the project loses its government funding.
This conditional approval occurs after the department reviews projects, in part to address the risk of cost increases, and thus provides a signal of project quality to the private sector to help maintain a competitive bidding process. Similarly, Transport Canada officials told us that the agency makes a formal announcement to state its intent to provide federal funds to a transit project after conducting its initial review of a project and before formally committing funds, which allows project sponsors to move forward in development and engage the private sector. If the agreed-upon cost, schedule, and risk transference are not met, the government withdraws its funding. United Kingdom Department for Transport officials told us that they have experience withdrawing funding when such conditions have not been met. We also found that other U.S. Department of Transportation modal administrations use similar conditional approvals to help encourage greater private sector involvement in projects. The Federal Aviation Administration uses Letters of Intent in its Airport Improvement Program to establish multiyear funding schedules for projects that officials said allow project sponsors to proceed with greater certainty regarding future federal funding compared to the broader program and also help prevent project stops and starts. The Federal Aviation Administration has granted 90 of these multiyear awards since 1988. The Federal Highway Administration grants early conditional approvals to highway project sponsors seeking Transportation Infrastructure Finance and Innovation Act funds to streamline the process and allow private sector bidders to incorporate these funds into their financial plans without having to individually apply as otherwise required. The Federal Highway Administration has also carried out three pilot programs that have allowed projects to move more efficiently through its grant process by modifying some of its requirements.
These pilot projects waived certain aspects of the federal-aid highway procurement provisions, such as moving forward with final design prior to a National Environmental Policy Act decision, and allowed federally funded highway projects to use alternative approaches including design-build. One of these pilot programs is cited by the Federal Highway Administration as having helped pave the way for design-build to become the standard project delivery approach in highway projects. Another pilot program allowed the Federal Highway Administration to waive regulations and policies so project sponsors in two states could contract with the private sector at a much earlier point in the project development cycle than was previously allowed. In addition to not yet granting project sponsors any major streamlining modifications to the New Starts process, FTA does not have an evaluation plan to accurately and reliably assess the pilot program’s results, including the effect of its efforts to streamline the New Starts process for pilot project sponsors. We have previously reported that to evaluate the effectiveness of a pilot program, a sound evaluation plan is needed and should incorporate key features including: well-defined, clear, and measurable objectives; measures that are directly linked to the program objectives; criteria for determining pilot program performance; a way to isolate the effects of the pilot program; a data analysis plan for the evaluation design; and a detailed plan to ensure that data collection, entry, and storage are reliable and error-free. Without such an evaluation plan, FTA is limited in its decision making regarding its pilot program, and Congress will be limited in its decision making about the pilot program’s potential broader application. 
FTA officials told us that they have not yet developed an evaluation plan for the pilot program given that the projects are all ongoing, far from completion, and still working through the New Starts project approval process. The alternative approaches we reviewed have protected the public interest in various ways to ensure the public receives the best price for a project and to create incentives for the private sector partner so that the project progresses and operates based on agreed-upon objectives. Project sponsors we interviewed have attempted in part to protect the public interest in transit projects that use alternative approaches by ensuring the use of competitive procurement practices. These practices are not unique to alternative approaches and are sometimes used in conventional procurements. Competitive procurement practices are generally required when federal funding is used. For example, federal law and regulations generally require federal contracts to be competed unless they fall under specific exceptions to full and open competition. Nevertheless, project sponsors told us that maximizing the use of these competitive procurement practices—such as encouraging multiple bidders to value and price projects—helps to ensure that the public sector receives the best bid when using these partnerships and approaches. European Union countries are required to have multiple bidders for procurements. Procurements with only one bidder are less competitive and can result in less attractive bids. For example, although Bay Area Rapid Transit prequalified three contractors for the first version of its Oakland Airport Connector, two contractors withdrew during the negotiation period due to concerns about the project’s affordability. Bay Area Rapid Transit negotiated with the sole remaining bidder on costs for nearly a year but then let the Request for Proposals expire with no proposals submitted. 
To encourage the participation of multiple bidders, Minnesota Metro Transit Hiawatha Corridor and Denver’s Regional Transportation District’s Transportation Expansion light rail offered proposal stipends to private sector entities that submitted formal bids to help defray the costs of developing proposals. However, while serving as an incentive for potential private sector partners, stipends add costs that must be weighed against the benefits they provide. Project sponsors that we interviewed have also encouraged early and sustained interaction with the private sector to test the project’s marketability and whether and in what form private sector participation is advantageous. Such feedback can be obtained through bidder information sessions and from consultants. Project sponsors then issue a request for qualifications to gain more detailed input from the private sector on a project prior to the issuance of a request for proposals (which solicits the formal bids). The request for qualifications can establish a higher threshold of responsibility for private partners compared to traditional procurements in which a private partner is selected based primarily on bid price. Thus, sustained and iterative interaction between the project sponsor and the private sector can refine the project’s scope and terms and determine how best to include the private sector. For example, all three of FTA’s pilot projects as well as Minnesota Metro Transit’s Hiawatha Corridor project used a request for qualifications to select bidders and solicit the private sector’s review of project details. In addition, Minnesota Metro Transit told us that input from the private sector produced several good ideas that were incorporated into the project, such as a shared risk fund to provide an incentive for the private sector to reduce construction delays. 
Furthermore, the Canada Line project sponsor used a list of essential elements agreed upon by the public agencies funding the project as a basis for negotiating with potential bidders. Project sponsors that we interviewed seek to protect the public interest in alternative approaches through an emphasis on performance. Performance specifications focus on desired project performance (such as frequency of train arrivals at a station) and not design details (such as the type of train). Project sponsors and consultants told us that detailed specifications that have been used in conventional project delivery approaches can restrict what bidders can offer. When specifications are focused on performance, bidders can offer a range of design and technology options as well as follow best practices that meet overall project objectives. According to Denver’s Regional Transportation District, the East Corridor and Gold Line pilot projects initially had a 700-page design specification document for their commuter rail vehicles. After industry review and feedback that the specifications would lead to customized vehicles that would be expensive and difficult to operate and maintain, the project sponsor responded by creating a 15-page performance specifications document for the vehicles. An advisor to the project sponsor noted that the use of performance specifications is more challenging with transit projects than in highways and other sectors given the technology issues and environmental concerns. The advisor also said that projects with a range of technology options must undergo the environmental review process at the highest possible level of design given the effect of different technologies on the environment. In contrast, one project sponsor noted that performance specifications should not be used when conditions of the facility or surrounding environment, for example, are unknown as unforeseen circumstances could occur that would require more specific design specifications. 
Project sponsors we interviewed have also sought to use performance standards to protect the public interest. These standards are what the private sector partner must meet to be compensated during the project’s construction, operations, and maintenance phases, helping to ensure adequate performance. If the private sector partner does not meet the standards, then it is penalized with no, reduced, or delayed payments, and penalties can escalate if poor performance continues. Standards for construction include delivering a completed project or project element within a set schedule. For example, the Canada Line private sector partner had 400 milestones that it needed to complete and have certified in order to continue to receive timely payments during the project’s construction period. Performance standards for operations and maintenance, also called key performance indicators, cover all aspects of service including the availability, frequency, and reliability of service and conditions of facilities. For example, the London Underground chose to emphasize key performance indicators in four areas—availability, capability, ambience, and service points—by creating performance targets and tying monthly payments to the private sector partner’s actual performance against those targets. Some projects have also incorporated standards linked to ridership to provide incentives for the private sector partner to provide good customer service. For example, Nottingham Express Transit bases 20 percent of its payments to the private sector partner on ridership. Additionally, the draft concession agreement for Denver’s Regional Transportation District East Corridor and Gold Line pilot projects incorporates levels of payment deductions that accelerate when low performance, such as delayed trains and littered or unclean railcars, persists. 
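The payment mechanics described above, with deductions tied to performance standards that accelerate when poor performance persists, can be illustrated with a short sketch. All payment amounts, targets, and escalation factors below are hypothetical and are not drawn from any of the agreements discussed:

```python
# Hypothetical sketch of a performance-based payment mechanism.
# Figures and thresholds are illustrative, not from any actual concession.

BASE_MONTHLY_PAYMENT = 1_000_000  # hypothetical availability payment ($)

def monthly_payment(on_time_rate, consecutive_months_below_target,
                    target=0.95, deduction_per_point=0.02):
    """Compute the month's payment after performance deductions.

    Deductions grow with the shortfall against the on-time target and
    escalate when low performance persists across consecutive months.
    """
    shortfall_points = max(0.0, (target - on_time_rate) * 100)
    # Escalation: each additional month below target raises the deduction rate.
    escalation = 1 + 0.5 * consecutive_months_below_target
    deduction_rate = min(1.0, shortfall_points * deduction_per_point * escalation)
    return BASE_MONTHLY_PAYMENT * (1 - deduction_rate)

# A partner meeting the target receives the full payment...
assert monthly_payment(0.97, 0) == BASE_MONTHLY_PAYMENT
# ...while the same shortfall costs more each month that it persists.
first_bad_month = monthly_payment(0.90, 0)   # 5-point shortfall, first month
third_bad_month = monthly_payment(0.90, 2)   # same shortfall, 2 prior bad months
assert third_bad_month < first_bad_month < BASE_MONTHLY_PAYMENT
```

A real concession agreement defines many more indicators and deduction schedules; the point of the sketch is only the escalation structure, which strengthens the incentive to correct poor performance quickly.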
If low performance continues over a period, the project sponsor can terminate the concession agreement and rebid the project to another private partner. Project sponsors we interviewed also protect the public interest in transit public-private partnership and other alternative approaches through the incorporation of private equity capital. When a private sector partner finances a project using equity capital, the private sector uses payments received from the project sponsor to repay its costs plus provide a return on investment. Because the private sector partner borrows and invests equity to finance its costs, it has capital at risk: if it does not meet performance standards, it will not receive the milestone payments it needs to meet its financial obligations. This situation can create incentives for the private sector partner to deliver according to the terms of the agreement. At the same time, financial advisors to project sponsors told us that bank lenders protect their investments by ensuring that the private sector properly develops a concession agreement and then delivers on it. The public interest is thus further protected by this integration of responsibilities because the bank lender and concessionaire provide additional project oversight through the monitoring of cost overruns and schedule delays, among other issues. According to the Canada Line private sector partner, it provided 17 percent equity in the project. For the Croydon Tramlink, the private sector partner contributed 30 percent of project costs. In the case of the Canada Line, the private sector partner did not miss any of its 400 payment milestones. To better protect the public interest, project sponsors have also incorporated clauses into project agreements that allow for flexibility under certain circumstances. 
Project sponsors that we interviewed noted the importance of having the ability to periodically revisit agreement terms in long-term concessions to protect the public interest given that unforeseen circumstances may occur that make the concessionaire unable to meet performance standards. For example, Houston Metro’s North and Southeast Corridor projects’ concession agreement incorporated this flexibility by including an operations and maintenance agreement for the first 5 years after service begins with the option for renewal. According to a consultant that works on the project, this approach was chosen in part because the project sponsor wanted an option to revisit the contract. Internationally, both of the London Underground’s 30-year maintenance concession agreements are reviewed for scope of work and costs by a public-private partnership arbiter every 7.5 years. Moreover, the concessionaire has the ability to request an extraordinary review by the arbiter if costs rise above a specified threshold due to circumstances outside the private sector partner’s control. Periodically revisiting terms, or shorter concession periods, can also allow for changes such as system extension. One of the Docklands Light Railway extensions has breakpoints at the years 2013 and 2020 in its concession agreement that give the project sponsor an option to break and buy back the agreement for a set price. In contrast, in the previously mentioned example of Manchester Metrolink, concessions for phase 2 were terminated by the project sponsor to allow for system expansion in a third phase, which was not procured as a public-private partnership. According to consultants we interviewed, the terminations could have been avoided if the initial concessions had been shorter. Shorter concession periods are thus being used as a means to revisit terms and rebid if desired. 
In addition to clauses that allow project sponsors to revisit concession agreement terms, other clauses that allow for flexibility can also protect the public interest. For example, Denver Regional Transportation District’s draft concession agreement includes clauses specifying both triggers that could lead to default and terms of compensation in case of default as well as termination provisions that detail the condition of the transit asset at the end of the concession when it is transferred back to the project sponsor. These provisions help to minimize disputes. Other advisors to project sponsors told us that a clause specifying the sharing of “refinancing gains” between the project sponsor and concessionaire could also help to protect the public interest. Refinancing gains refer to savings that occur when the private sector revises its repayment schedule for its equity investment by taking advantage of better financial terms. As we have noted in our report on highway public-private partnerships, the private sector can potentially benefit through gains achieved in refinancing its investments, and these gains can be substantial. The governments of the United Kingdom as well as Victoria and New South Wales, Australia, require that any refinancing gains achieved by private concessionaires generally be shared with the government. Some foreign governments have recognized the importance of protecting the public interest in public-private partnerships through the use of quantitative and qualitative public interest assessments. We have also previously reported that more rigorous, up-front analysis could better secure potential benefits and protect the public interest. The use of quantitative and qualitative public interest tests and tools before entering into transit public-private partnerships can help lay out the expected benefits, costs, and risks of the project. Conversely, not using such tools can potentially allow aspects of the public interest to be overlooked. 
For example, a Value for Money analysis is a tool used to evaluate whether entering into a project as a public-private partnership is the best project delivery option available. Internationally, the United Kingdom and British Columbia in Canada, among others, require a Value for Money analysis for all transportation projects over a certain cost threshold. For example, all transportation projects in the United Kingdom that exceed about $24 million must undergo a Value for Money analysis to receive project funding, while projects in British Columbia must conduct a Value for Money analysis if project costs total more than about $46 million. Domestically, Florida requires a Value for Money analysis for public-private partnerships, one of which was recently conducted on the I-595 Corridor Roadway Improvements Project in Broward County. A Value for Money assessment was also completed for the Bay Area Rapid Transit’s Oakland Airport Connector at the request of FTA. In general, Value for Money evaluations examine total project costs and benefits and are used to determine whether a public-private partnership approach is in the public interest for a given project. Value for Money tests are often done by comparing the costs of doing a proposed project as a public-private partnership against an estimate of the costs of procuring that project using a public delivery model. Value for Money tests examine not only the economic value of a project but also other factors that are hard to quantify, such as design quality and functionality, quality in construction, and the value of unquantifiable risks transferred to the private sector. In the United Kingdom, Value for Money analysis includes qualitative factors such as the viability, desirability, and achievability of the project in addition to the quantitative factors. Provinces such as Canada’s British Columbia and Australia’s Victoria also include qualitative factors in their financial assessments, including Value for Money analysis. 
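The quantitative core of such a test, comparing a risk-adjusted estimate of public delivery (a "public sector comparator") against the expected cost of the public-private partnership, can be sketched as follows. The dollar figures and the risk and overrun adjustments are hypothetical:

```python
# Illustrative sketch of the quantitative core of a Value for Money test.
# All dollar figures and adjustment rates are hypothetical.

def value_for_money(public_base_cost, retained_risk, overrun_adjustment_rate,
                    ppp_bid_cost):
    """Positive result suggests the PPP delivers the project for less."""
    # Public sector comparator: base estimate, plus an adjustment for the
    # public sector's historical tendency toward cost overruns, plus the
    # value of risks the public sector would retain under public delivery.
    psc = public_base_cost * (1 + overrun_adjustment_rate) + retained_risk
    return psc - ppp_bid_cost

# Hypothetical project: $500M base public estimate, $40M of retained risk,
# a 15 percent overrun adjustment, and a $560M PPP bid (figures in $M).
vfm = value_for_money(500, 40, 0.15, 560)
print(f"Value for Money: ${vfm:.0f}M")  # positive: PPP cheaper on these inputs
```

As the surrounding discussion notes, a full assessment would also weigh qualitative factors, such as design quality and achievability, that this arithmetic cannot capture, and the risk valuations feeding the comparison are themselves subjective.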
Government officials stated that including both quantitative and qualitative factors in financial assessments such as Value for Money analysis provides a more comprehensive project assessment. In addition to determining whether a public-private partnership is advantageous over a publicly delivered project, project sponsors and government officials noted that a Value for Money analysis is also a useful management tool for considering up front all project costs and risks that can occur during a project’s lifetime, which is not always done in a conventional procurement. Project sponsors can also use financial assessments such as Value for Money analysis for other reasons. For example, Value for Money analysis can assist in determining which project delivery approach provides more value. If it is decided that private participation in a project is beneficial, project sponsors can assess whether one public-private partnership option is more advantageous than another. For example, Bay Area Rapid Transit used a Value for Money analysis in its original iteration of the Oakland Airport Connector to assess which alternative project delivery approach (design-build-operate-maintain or design-build-finance-operate-maintain) would be more advantageous. Project sponsors can also use Value for Money to give a range of possible project costs when coupled with a sensitivity analysis. For example, a sensitivity analysis developed for the Canada Line suggested that project costs could have varied from $47 million more to $270 million less than expected, depending on the level of risk. A further example of how project sponsors can use Value for Money is to enhance communication about a project. Project sponsors noted that since Value for Money analyses are often publicly available, such as in the United Kingdom, they can lead to more-informed discussions and provide transparency in the selection of the project delivery approach. 
Thus, they can be good planning and communication tools for decision makers. Government officials and consultants that perform financial assessments, such as Value for Money analysis, cautioned that the assessments are not without limitations. For example, officials and consultants told us that these analyses are inherently subjective and rely on assumptions that can introduce bias. Assessments can include the assumption that the public sector will likely have higher construction costs due to a history of cost overruns. In the United Kingdom, an “optimism bias” of 15 percent is added to the public sector comparator in part to account for this. Consultants noted that there is subjectivity in valuing risks as detailed data on the probability of particular project risks occurring are unavailable. Thus, consultants use data from past projects and their own professional views to conduct the analysis. In sum, government officials and consultants noted that Value for Money analysis should be considered as a tool rather than the sole factor in assessing whether to do a public-private partnership. Some countries have further protected the public interest in transit projects that use alternative approaches by establishing quasi-governmental entities to assist project sponsors in implementing these arrangements. Entities such as Partnerships UK, Partnerships Victoria, and Partnerships BC are often fee-for-service and associated with Treasury Departments on the provincial and national levels. These quasi-governmental entities all develop guidance such as standardized contracts and provide technical assistance to support transit projects that use alternative approaches. According to an advisor for project sponsors, contracts for these partnerships and approaches generally follow a standard model such as a framework for assigned risk between the project sponsor and private sector, with the particularities of local legislation and project specifics written into them. 
The United Kingdom’s standard contract, which is periodically updated to reflect lessons learned, outlines requirements as well as factors to consider from a project’s service commencement through termination. For example, after the government of the United Kingdom required the private sector to share any refinancing gains with the project sponsor, the standard contract was subsequently updated. Furthermore, the quasi-governmental entities provide technical assistance to support transit projects that use alternative approaches. For example, Partnerships BC provides project sponsors assistance on conducting a Value for Money assessment to determine whether private sector participation in a project is beneficial. In addition to this assistance, these entities provide other varied services to facilitate public-private partnerships across different sectors. For example, Partnerships UK reviews project proposals for the government; Partnerships Victoria offers training for the province; and Partnerships BC advises project sponsors to help develop and close public-private partnership contracts in British Columbia. Quasi-governmental entities can further protect the public interest through the benefits they provide. According to government officials in the United Kingdom and Canada, these entities create a consistent approach to considering public-private partnerships, such as understanding a project’s main risks, which can reduce the time and costs incurred when negotiating a contract. Further, by using standardized contracts developed by these entities, project sponsors can reduce transaction costs—such as legal, financial, and administrative fees—of implementing transit projects that use alternative approaches. 
Moreover, project sponsors and consultants told us that entities like Partnerships UK and Partnerships BC can foster good public-private partnerships and help further protect the public interest by ensuring consistency in contracts and serving as a repository of institutional knowledge. Without the services provided by these quasi-governmental entities, project sponsors that plan to use or already use alternative approaches for a transit project will develop them on a case-by-case basis because they lack institutional knowledge and a centralized resource for assistance. While DOT has established an office to support project sponsors of highway-related public-private partnerships, DOT does not provide similar support for transit projects. In a previous GAO report, we noted that formal consideration and analysis of public interest issues had been conducted in U.S. highway public-private partnerships, and that DOT has done much to promote the benefits, but comparatively little to help states and localities weigh potential costs and trade-offs of these partnerships. Since that report, the Federal Highway Administration’s Office of Innovative Program Delivery has been established to provide support for highway-related public-private partnerships by providing an easy, single point of access for project sponsors and other stakeholders. The office is intended to offer outreach, professional capacity building, technical assistance, and decision-making tools for highway-related public-private partnerships. In addition, FTA officials told us that they have plans to develop an online toolset for employees to help them provide technical assistance to project sponsors on these alternative approaches. 
This assistance is to include checklists to help determine whether a project should use an alternative approach, risk matrices that provide an overview and explanation of risks transferred using such an approach, and a financial feasibility model that can be used to quantitatively compare the use of an alternative approach with the conventional approach to transit projects. Furthermore, in June 2009, the House of Representatives’ Committee on Transportation and Infrastructure’s surface transportation reauthorization blueprint proposed that an Office of Expedited Project Delivery be created within FTA to provide assistance to transit project sponsors much as we have outlined earlier in this report. However, such support is not currently available for project sponsors of transit projects that use alternative approaches. Project sponsors and their advisors noted that because there is little public sector institutional knowledge about public-private partnerships in the United States, projects may be carried out without the benefit of previous experiences. It is even more challenging to conduct transit projects that use alternative approaches in the United States given the variation in relevant state laws and local ordinances that project sponsors and other stakeholders must navigate. Furthermore, FTA’s New Starts evaluation requirements for transit projects seeking federal funding do not include an evaluation of whether the public is receiving the best value for its money as compared to other delivery approaches. Thus, project sponsors, advisors, and government officials noted that such an entity in the United States could be valuable in further protecting the public interest in public-private partnerships. FTA distributes billions of dollars of federal funding to transit agencies for the construction of new, large-scale projects; as such, it is critical that the public interest is protected and federal funding is spent responsibly. 
Project sponsors are looking to alternative approaches, along with federal funds, to deliver and finance new transit projects. However, because of its sequential and phased structure, FTA’s New Starts program is incompatible with transit projects that use these approaches. Congress recognized this concern when it authorized FTA to establish the Public-Private Partnership Pilot Program to illustrate how New Starts evaluation requirements can be streamlined to better accommodate the use of alternative approaches in transit projects. However, the pilot program has not yet illustrated how this can be done. This is because, on the one hand, FTA has determined that no pilot project has demonstrated enough of a transfer of risk—in particular a financial investment by the private sector—for FTA to consider granting major modifications to streamline its New Starts evaluation requirements. On the other hand, the potential challenges posed by the New Starts requirements, including delays and additional costs, may discourage the private sector from assuming enhanced financial responsibility in these alternative approaches. Despite this apparent impasse, FTA still has the unique opportunity to take advantage of the fundamental characteristic of a pilot program—flexibility—to gain valuable insight on how to streamline the New Starts process to facilitate a greater private sector role in transit projects through the use of alternative approaches. FTA can introduce additional flexibility into its three pilot projects through, among other things, the use of existing, long-standing tools, such as Letters of Intent and Early Systems Work Agreements. Other agencies within DOT have used such tools successfully in the past to provide flexibility to their funding and approval processes and to advance and promulgate alternative project finance and delivery approaches. 
Moreover, some other countries have used conditional approvals to incorporate more flexibility into their funding processes and help encourage a greater private sector role in transit projects. FTA may want to turn to the experiences of these other modal administrations and governments and use existing, long-standing tools to incorporate more flexibility in the New Starts process to help facilitate transit projects that use alternative approaches. Without an evaluation plan to assess the results of its pilot program, FTA may also lose some valuable information Congress intended the agency to obtain through the pilot program’s establishment, including how the New Starts project approval process can be further streamlined. As more transit projects use alternative approaches, FTA may not be able to readily accommodate these approaches, ultimately disadvantaging transit project sponsors that seek to deliver their projects more quickly and efficiently and at a lesser cost to the public. In the past, DOT has done much to promote the potential benefits of transportation public-private partnerships. While these benefits are not assured and should be evaluated by weighing them against potential costs and trade-offs, DOT has done comparatively little to equip project sponsors to weigh the potential costs and trade-offs. Recently, DOT has taken a more integrated approach to a greater private sector role in transportation, as evidenced by its newly established Office of Innovative Program Delivery for public-private partnerships. Congress has taken a greater interest in facilitating alternative approaches as well. 
Quasi-governmental entities established by foreign governments have better equipped project sponsors to implement alternative approaches, including public-private partnerships, by creating a uniform approach to considering the implications of alternative approaches, reducing transaction costs, ensuring consistency in contracts, and serving as a repository of institutional knowledge. FTA could consider these international models and expand its current efforts in transportation public-private partnerships to provide support for a greater private sector role in transit directly to project sponsors. Expanded FTA efforts could facilitate the implementation of transit projects that use alternative approaches and protect the public interest through the use of tools such as standardized contracts, technical assistance, and financial assessments. To facilitate a better understanding of the potential benefits of alternative approaches in FTA’s Public-Private Partnership Pilot Program, if reauthorized, we recommend that the Secretary of Transportation direct the FTA Administrator to take the following actions: Incorporate greater flexibility, as warranted, in the Public-Private Partnership Pilot Program than has occurred to date by making greater use of existing tools such as Letters of Intent and Early Systems Work Agreements in order to streamline the New Starts process. Develop a sound evaluation plan for the Public-Private Partnership Pilot Program to accurately and reliably assess the pilot program’s results that includes key factors such as: well-defined, clear, and measurable objectives; measures that are directly linked to the program objectives; criteria for determining pilot program performance; a way to isolate the effects of the pilot program; a data analysis plan for the evaluation design; and a detailed plan to ensure that data collection, entry, and storage are reliable and error-free. 
Beyond its pilot program, build upon efforts underway in DOT to better equip transit project sponsors in implementing transit projects that use alternative approaches, including developing guidance, providing technical assistance, and sponsoring greater use of financial assessments to consider the potential costs and trade-offs. We provided a draft of this report to DOT and FTA for review and comment. DOT has agreed to consider our recommendations and provided comments through e-mail from FTA officials. In their comments, FTA officials stated that the agency has ongoing and planned efforts as part of its Public-Private Partnership Pilot Program that they believe address the intent of our recommendations. For example, FTA officials noted that the agency has, as we reported, made use of tools such as Letters of Intent and Early Systems Work Agreements in the past in order to streamline the New Starts process, and that it will evaluate the potential for greater use of these existing tools in the future to incorporate greater flexibility into the pilot program. Additionally, FTA officials acknowledged the need for an evaluation plan to assess the pilot program’s results and stated they will be working to develop one. Further, FTA officials stated that FTA is working to develop technical assistance for its staff on how to structure and evaluate alternative approaches to transit projects; we revised our draft report to reflect FTA’s efforts. Because these efforts are either planned or in their early stages, we are retaining our recommendations. Finally, FTA officials provided technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees and DOT. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at [email protected] or (202) 512-2834. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Our work was focused on transit projects that involve greater private sector participation than is typical in conventional projects. In particular, we focused on (1) the role of the private sector in the delivery and financing of U.S. transit projects compared with other countries; (2) the benefits and limitations of and the barriers, if any, to greater private sector involvement in transit projects and how these barriers are addressed in the Department of Transportation’s (DOT) Public-Private Partnership Pilot Program; and (3) how project sponsors and DOT can protect the public interest in transit projects that use alternative approaches. Our scope was limited to identifying the primary issues associated with using public-private partnerships for transit infrastructure rather than conducting a detailed financial analysis of the specific arrangements. In order to clearly delineate alternative delivery and financing approaches used in transit, first we identified three categories—traditional, innovative, and alternative—that describe the evolution of such practices. We defined traditional financing to include federal grants (such as New Starts program grants), state and local public grants, taxes, and municipal bonds, and defined conventional project delivery to refer to the design-bid-build approach. We defined innovative financing to include loan or credit assistance such as the Transportation Infrastructure Finance and Innovation Act, Private Activity Bonds, Tax Increment Financing, State Infrastructure Banks, Grant Anticipation Notes, and Revenue Bonds, and innovative project delivery to refer to the design-build approach. 
Finally, we defined alternative financing to refer to public-private partnerships that involve private equity capital such as concession agreements and defined alternative approaches as ones that transfer greater risk to the private sector including: design-build, design-build-finance, design-build-operate-maintain, build-operate-maintain, design-build-finance-operate, design-build-finance-operate-maintain, build-operate-own, and build-own-operate, among others. We took several steps and considered various criteria in selecting which domestic transit projects to study as part of our review of alternative financing and project delivery practices. First, we reviewed transit project information from DOT, GAO, the Congressional Research Service, and other reports and conducted interviews with DOT officials, project sponsors, industry representatives, and academic experts to identify the potential universe of projects that fit at least one (alternative project delivery or alternative financing) or both of our established definitions. We also selected projects that were either completed or had already carried out substantial planning. The potential universe of projects contained 10 completed projects including: Denver Regional Transportation District Transportation Expansion Light Rail (design-build), South Florida Commuter Rail Upgrades (design-build), Minnesota Metro Transit Hiawatha Corridor Light Rail Transit (design-build), Bay Area Rapid Transit Extension to San Francisco International Airport (design-build), Washington Metropolitan Area Transit Authority Largo Metrorail Extension (design-build), Hudson-Bergen Light Rail Transit Minimum Operating Segment 1 (design-build-operate-maintain), Hudson-Bergen Light Rail Transit Minimum Operating Segment 2 (design-build-operate-maintain), John F. Kennedy Airtrain (design-build-operate-maintain), Portland MAX Airport Extension (design-build), and Las Vegas Monorail (design-build-finance-operate-maintain). 
We also included 3 ongoing transit projects as part of the universe: Bay Area Rapid Transit Oakland Airport Connector (design-build-operate-maintain), Denver Regional Transportation District East Corridor and Gold Line pilot projects (design-build-finance-operate-maintain), and Houston Metro North and Southeast Corridor pilot projects (design-build-operate-maintain). Second, we determined that we would focus solely on projects that have or are expected to go through the Federal Transit Administration’s (FTA) New Starts process given that this is the largest capital grant program for transit projects and that any such projects would be reviewed to protect the public interest (i.e., projects not entirely funded by the private sector). This eliminated the John F. Kennedy Airtrain, Portland MAX Airport Extension, and Las Vegas Monorail projects. Third, we applied three of four criteria from FTA’s Report to Congress to the remaining projects, including (1) project costs were reduced, (2) project duration was shortened, and (3) project quality was maintained or enhanced. This eliminated the South Florida Commuter Rail Upgrades, Hudson-Bergen Light Rail Transit Minimum Operating Segment 1 and Minimum Operating Segment 2, and the Bay Area Rapid Transit Extension to San Francisco International Airport. We decided to select all three of the ongoing pilot projects—Bay Area Rapid Transit Oakland Airport Connector, Denver Regional Transportation District East Corridor and Gold Line, and Houston Metro North and Southeast Corridors—given that FTA views these projects as currently having the most private sector potential and thus designated them as its three Public-Private Partnership Pilot Program projects. 
We also decided, given our limited resources, to select two of the remaining three completed projects—Minnesota Metro Transit Hiawatha Corridor and Denver Regional Transportation District Transportation Expansion—as DOT’s Report to Congress identified these two projects as having successful collaborations with their respective departments of transportation, including their highway counterparts, which have greater experience than transit in using alternative project delivery and alternative financing. This eliminated the Washington Metropolitan Area Transit Authority Largo Metrorail Extension. These projects were selected because they are recent examples of ongoing and completed transit projects in the United States that incorporated greater private sector involvement through the use of alternative project delivery or financing approaches or both. To select which international countries we would include as part of our review of alternative financing and project delivery practices, we conducted a literature review of international transit public-private partnerships as well as conducted interviews with DOT officials, project sponsors, industry representatives, and academic experts to identify the potential universe of countries with significant experience in transit public-private partnerships, including projects that fit at least one (alternative project delivery or alternative financing) or both of our established definitions. Second, we determined that we would collect the most valuable and relevant information from countries that share a similar political and cultural structure to the United States. Third, given our limited resources, we decided to select only two of the three remaining countries. Thus, we ultimately identified Canada and the United Kingdom for our international site visits. 
Issues discussed in the report related to the interpretation of foreign law, including the character of public-private partnership agreements, and their limitations, were evaluated as questions of fact based upon interviews and other supporting documentation. To determine how transit projects that use alternative approaches have been used in the United States, we collected and reviewed descriptions of the projects, copies of the concession or development agreements, planning documents, and documentation related to the financial structure of the projects in addition to academic, corporate, and government reports. We conducted, summarized, and analyzed in-depth interviews with project sponsors and private sector participants about their experiences with alternative financing and procurement in transit projects. We also reviewed pertinent federal legislation and regulations, including: Federal Register Notices and guidance for FTA’s Public-Private Partnership Pilot Program and the New Starts Program; DOT’s Report to Congress on the Costs, Benefits, and Efficiencies of Public-Private Partnerships for Fixed Guideway Capital Projects; and other DOT reports. To identify the potential benefits and potential limitations of transit projects that use alternative approaches, and what barriers project sponsors face in the United States, we conducted, summarized, and analyzed in-depth interviews with domestic project sponsors and private sector participants including private investors, financial and legal advisors, project managers, and contractors. In addition to these domestic experts, we conducted extensive interviews with various international stakeholders, experts, and private sector officials from Canada and the United Kingdom that were knowledgeable in greater private sector participation in the financing and procurement of transit projects. 
We also conducted a literature review; summarized and analyzed key benefits, limitations, and barriers to greater private sector participation; and interviewed FTA and other federal and local officials associated with the projects we selected as well as private sector officials involved with United States transit public-private partnership arrangements. To determine how project sponsors and DOT can protect the public interest in transit projects that use alternative approaches, we conducted site visits of selected transit public-private partnerships and visited the United Kingdom and Canada, which both had more experience conducting transit public-private partnerships. We conducted, summarized, and analyzed in-depth interviews with project sponsors, private sector participants, international stakeholders, and experts regarding the competitive procurement process, robust concession agreements, and Value for Money analyses, among other topics. We also examined international mechanisms that were implemented for projects including Croydon Tramlink, Docklands Light Railway, London Underground, Manchester Metrolink, and Nottingham Express Transit in the United Kingdom and the Canada Line in Vancouver, Canada, to provide insight on how project sponsors and DOT can protect the public interest in transit projects that use alternative approaches. We also held in-depth interviews with FTA on its steps to protect the public interest in federally funded transit projects with greater private sector participation including programs like FTA’s Public-Private Partnership Pilot Program and the New Starts Program. We conducted this performance audit from October 2008 through October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Steve Cohen, Assistant Director; Jay Cherlow; Patrick Dudley; Carol Henn; Bert Japikse; Joanie Lofgren; Maureen Luna-Long; Amanda K. Miller; Tina Paek; Amy Rosewarne; Tina Won Sherman; and Jim Wozny made key contributions to this report. Equal Employment Opportunity: Pilot Projects Could Help Test Solutions to Long-standing Concerns with the EEO Complaint Process. GAO-09-712. Washington, D.C.: August 12, 2009. Public Transportation: Better Data Needed to Assess Length of New Starts Process, and Options Exist to Expedite Project Development. GAO-09-784. Washington, D.C.: August 6, 2009. Public Transportation: New Starts Program Challenges and Preliminary Observations on Expediting Project Development. GAO-09-763T. Washington, D.C.: June 3, 2009. High Speed Passenger Rail: Future Development Will Depend on Addressing Financial and Other Challenges and Establishing a Clear Federal Role. GAO-09-317. Washington, D.C.: March 19, 2009. Highway Public-Private Partnerships: More Rigorous Up-Front Analysis Could Better Secure Potential Benefits and Protect the Public Interest. GAO-08-1149R. Washington, D.C.: September 8, 2008. Public Transportation: Improvements Are Needed to More Fully Assess Predicted Impacts of New Starts Projects. GAO-08-844. Washington, D.C.: July 25, 2008. Highway Public-Private Partnerships: Securing Potential Benefits and Protecting the Public Interest Could Result from More Rigorous Up-front Analysis. GAO-08-1052T. Washington, D.C.: July 24, 2008. Highway Public-Private Partnerships: More Rigorous Up-front Analysis Could Better Secure Potential Benefits and Protect the Public Interest. GAO-08-44. Washington, D.C.: February 8, 2008. Federal-Aid Highways: Increased Reliance on Contractors Can Pose Oversight Challenges for Federal and State Officials. GAO-08-198. 
Washington, D.C.: January 8, 2008. Railroad Bridges and Tunnels: Federal Role in Providing Safety Oversight and Freight Infrastructure Investment Could Be Better Targeted. GAO-07-770. Washington, D.C.: August 6, 2007. Public Transportation: Future Demand Is Likely for New Starts and Small Starts Programs, but Improvements Needed to the Small Starts Application Process. GAO-07-917. Washington, D.C.: July 27, 2007. Public Transportation: Preliminary Analysis of Changes to and Trends in FTA’s New Starts and Small Starts Programs. GAO-07-812T. Washington, D.C.: May 10, 2007. Public Transportation: New Starts Program Is in a Period of Transition. GAO-06-819. Washington, D.C.: August 30, 2006. Public Transportation: Preliminary Information on FTA’s Implementation of SAFETEA-LU Changes. GAO-06-910T. Washington, D.C.: June 27, 2006. Equal Employment Opportunity: DOD’s EEO Pilot Program Under Way, but Improvements Needed to DOD’s Evaluation Plan. GAO-06-538. Washington, D.C.: May 5, 2006. Highways and Transit: Private Sector Sponsorship of and Investment in Major Projects Has Been Limited. GAO-04-419. Washington, D.C.: March 25, 2004.
As demand for transit and competition for available federal funding increase, transit project sponsors are increasingly looking to alternative approaches, such as public-private partnerships, to deliver and finance new, large-scale public transit projects more quickly and at reduced costs. GAO reviewed (1) the role of the private sector in U.S. public transit projects as compared to international projects; (2) the benefits and limitations of and barriers, if any, to greater private sector involvement in transit projects and how these barriers are addressed in the Department of Transportation's (DOT) pilot program; and (3) how project sponsors and DOT can protect the public interest when these approaches are used. GAO reviewed regulations, studies, and contracts and interviewed U.S., Canadian, and United Kingdom officials (identified by experts in the use of these approaches). In the United States, the private sector role in delivering and financing transit projects through alternative approaches, such as public-private partnerships, has been more limited than in international projects. The private sector role in U.S. projects has focused more on how they are delivered rather than how they are financed, while the private sector role in international projects has focused on both project delivery and financing. Since 2000, seven new large-scale construction projects funded through FTA's Fixed Guideway Capital Investment Program--New Starts program--have been completed using one of two alternative project delivery approaches, and none of these projects included private sector financing. In 2005, Congress authorized FTA to establish a pilot program to demonstrate the advantages and disadvantages of these alternative approaches and how the New Starts Program could better allow for them. 
Alternative approaches can offer potential benefits such as a greater likelihood of completing projects on time and on budget, but also involve limitations such as less project sponsor control over operations. The sequential and phased New Starts process is a barrier because it is incompatible with alternative approaches and thus does not allow for work to be completed concurrently, which can lead to delays and increased costs. Under its pilot program, FTA can grant major streamlining modifications to the New Starts process for up to three project sponsors, but has not yet granted any such modifications because FTA has found that none of the projects has transferred enough risk, in particular financial responsibilities, to the private sector. FTA has the ability within its pilot program to further experiment with the use of long-standing existing tools that could encourage a greater private sector role while continuing to balance the need to protect the public interest. This includes forms of conditional funding approvals used by other DOT agencies and international governments. FTA also lacks an evaluation plan to accurately and reliably assess the pilot program's results, including the effect of its efforts to streamline the New Starts process for pilot project sponsors. Without such a plan, agencies and Congress will be limited in their decision making regarding the pilot program. Transit project sponsors protect the public interest in alternative approaches through, for example, the use of performance standards and financial assessments to evaluate the costs and benefits of proposed approaches. Other governments have established entities to assist project sponsors in protecting the public interest. These entities have better equipped project sponsors to implement alternative approaches by creating a uniform approach to developing project agreements and serving as a repository of institutional knowledge. 
DOT can serve as a valuable resource for transit project sponsors by broadening its current efforts, including providing technical assistance and encouraging the use of additional financial assessments, among other measures.
DOD’s acquisition mission represents the largest buying enterprise in the world. The defense acquisition workforce—which consists of military and civilian program managers, contracting officers, engineers, logisticians, and cost estimators, among others—is responsible for effectively awarding and administering contracts totaling more than $250 billion annually. The contracts may be for major weapon systems, support for military bases, consulting services, and commercial items, among other things. A skilled acquisition workforce is vital to maintaining military readiness, increasing the department’s buying power, and achieving substantial long-term savings through systems engineering and contracting activities. DOD’s acquisition workforce experienced significant cuts during the 1990s following the end of the Cold War and, by the early 2000s, began relying more heavily on contractors to perform many acquisition support functions. DOD reported that from 1998 through 2008, the number of military and civilian personnel performing acquisition activities decreased 14 percent from about 146,000 to about 126,000 personnel. Amid concerns about the growing reliance on contractors and skill gaps within the military and civilian acquisition workforce, in April 2009, the Secretary of Defense announced his intention to rebalance the workforce mix to ensure that the federal government has sufficient personnel to oversee its acquisition process. To support that objective, DOD’s April 2010 strategic workforce plan stated that DOD would add 20,000 military and civilian personnel to its workforce by fiscal year 2015. Further, in 2008, Congress created DAWDF, codified in section 1705 of title 10 of the U.S. Code, to provide DOD a dedicated source of funding for rebuilding capacity, improving quality, and rebalancing the workforce. 
Congress has specified in statute the level of DAWDF funding for a given fiscal year and has adjusted that level several times. For example, the National Defense Authorization Act for Fiscal Year 2010 specified $100 million for fiscal year 2010; $770 million for fiscal year 2011; $900 million for fiscal year 2012; $1.2 billion for fiscal year 2013; $1.3 billion for fiscal year 2014; and $1.5 billion for fiscal year 2015. In the National Defense Authorization Act for Fiscal Year 2013, Congress extended the requirement for DOD to fund DAWDF through 2018 and revised the funding levels to $500 million for fiscal year 2013; $800 million for fiscal year 2014; $700 million for fiscal year 2015; $600 million for fiscal year 2016; $500 million for fiscal year 2017; and $400 million for fiscal year 2018. Currently, the law mandates $500 million in DAWDF funding for a fiscal year. However, the law also authorizes the Secretary of Defense to reduce annual funding if the Secretary determines that the mandated amount is greater than what is reasonably needed for a fiscal year. The amount may not be reduced to less than $400 million for a fiscal year. Section 1705 of title 10, U.S. Code, specifies three ways that DAWDF can be funded: Appropriations made for DAWDF. Appropriations were made for the fund in fiscal years 2010 through 2015 and were available for obligation for 1 fiscal year—the fiscal year for which they were appropriated. Credits, or funds that are remitted by DOD components from operation and maintenance accounts. Funds credited to the account are available for obligation in the fiscal year for which they are credited and in the 2 succeeding fiscal years. Transfers of expired funds. During the 3-year period following the expiration of appropriations to DOD for research, development, test and evaluation; procurement; or operation and maintenance, DOD may transfer such funds to DAWDF to the extent provided in appropriations acts. 
To date, Congress has granted authority for DOD to transfer operation and maintenance funds included in the appropriations acts for fiscal years 2014, 2015, and 2016 to DAWDF. Funds transferred to DAWDF are available for obligation in the fiscal year for which they are transferred and in the 2 succeeding fiscal years. Under current law, DOD is required to credit the fund $500 million for a fiscal year, as previously mentioned. However, the law directs that the amount required to be remitted by DOD components be reduced by any amounts appropriated for or transferred to DAWDF for that fiscal year. Collectively, from fiscal years 2008 through 2016, about $4.5 billion has been deposited into DAWDF using various combinations of these processes (see table 1). From fiscal years 2008 through 2016, DOD obligated about $2.2 billion— or about 60 percent—for recruiting and hiring and about $1.2 billion—32 percent—for training and development. The remaining $269 million, or 7 percent, was used for retention and recognition. To help support rebuilding the workforce, DOD obligated the most funds for recruiting and hiring through fiscal year 2015; however, in fiscal year 2016, DOD obligated slightly more for training and development than for recruiting and hiring (see fig. 1). Several organizations within DOD play key roles in the management and oversight of DAWDF (see table 2). DOD’s acquisition workforce management framework includes centralized policy, decentralized execution by the DOD components, and joint governance forums. DOD established the Senior Steering Board and the Workforce Management Group in 2008 to oversee DAWDF activities. 
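The obligation shares reported above can be recomputed from the dollar figures as a back-of-the-envelope check (a minimal sketch; the amounts are the report's rounded totals for fiscal years 2008 through 2016, and the category labels and rounding here are ours):

```python
# Hypothetical check of the reported DAWDF obligation shares.
# Dollar amounts (in billions) are the report's rounded figures.
obligations = {
    "recruiting and hiring": 2.2,
    "training and development": 1.2,
    "retention and recognition": 0.269,
}
total = sum(obligations.values())  # roughly $3.7 billion obligated in all
shares = {category: 100 * amount / total for category, amount in obligations.items()}
for category, share in shares.items():
    print(f"{category}: {share:.1f}%")
```

The computed shares agree with the report's 60, 32, and 7 percent figures to within about a percentage point, with the small differences attributable to rounding of the underlying dollar amounts.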
The senior acquisition executives for the military departments, DOD functional acquisition career field leaders, and heads of major DOD agencies were designated as members of the Senior Steering Board, along with representatives from the Office of the Under Secretary of Defense (Comptroller) and Chief Financial Officer and the Office of the Under Secretary of Defense for Personnel and Readiness. This board is expected to meet quarterly and provide strategic oversight of DAWDF. The Workforce Management Group includes representatives from the offices on the Senior Steering Board, among others. It is expected to meet bimonthly and oversee DAWDF operations and management (see table 3). In June 2012, we reported on DOD’s initial implementation of DAWDF; we found that the ability of DOD components to effectively plan for and execute efforts supported by DAWDF was hindered by delays in DOD’s DAWDF funding processes and the absence of clear guidance on the availability and use of funds. We also found that HCI and Comptroller officials had differing views on how best to manage the DAWDF funding process. Comptroller officials acknowledged that they delayed sending out credit remittance notices and allowed components to delay crediting DAWDF funds. At that time, we recommended that DOD revise its DAWDF guidance to clarify when and how DAWDF funds should be collected, distributed, and used. We also recommended that DOD clearly align DAWDF’s funding strategy with the department’s strategic human capital plan for the acquisition workforce. DOD concurred with these recommendations. In October 2016, DOD completed an updated acquisition workforce strategic plan. We discuss our assessment of the extent to which DOD has taken action to address these recommendations later in this report. 
Further, we also recommended in June 2012 that DOD establish performance metrics for DAWDF to allow senior leadership to track how the fund is being used to support DOD’s acquisition workforce improvement goals. DOD concurred and subsequently established four metrics to track the defense acquisition workforce: (1) the size of the acquisition workforce, (2) the shape of the acquisition workforce, (3) Defense Acquisition Workforce Improvement Act certification rates, and (4) the education level of acquisition workforce personnel. In its October 2016 acquisition workforce strategic plan, DOD reported that the cumulative efforts of the DOD components from fiscal year 2008 through fiscal year 2015 increased the size of the acquisition workforce by 24 percent, from about 126,000 to 156,000 personnel. The department accomplished this by hiring additional personnel, converting contractor positions to civilian positions, adding military personnel to the acquisition workforce, and administratively recoding existing personnel. DAWDF contributed to this success by helping to increase the size of the acquisition workforce and achieve a better balance of early-, mid-, and senior-career personnel. DOD reported that more than 96 percent of the acquisition workforce either met or was on track to meet certification requirements within required time frames. DOD also reported that the proportion of personnel with bachelor’s degrees or higher increased from 77 percent in fiscal year 2008 to 84 percent in fiscal year 2015, while those with graduate degrees increased from 29 percent to 39 percent over the same time period. These changes were accomplished during a period of budget uncertainties and sequestration, during which time DOD imposed hiring freezes and curtailed travel, training, and conferences, among other actions. In December 2015, we reported that DOD had accomplished some of its goals in rebuilding the acquisition workforce and used DAWDF to help in these efforts. 
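The reported workforce growth rate can be verified the same way (a sketch; the headcounts are the report's rounded fiscal year 2008 and 2015 figures):

```python
# Hypothetical check of the reported acquisition workforce growth,
# fiscal years 2008 through 2015; headcounts are the report's rounded figures.
fy2008_size = 126_000
fy2015_size = 156_000
growth_pct = 100 * (fy2015_size - fy2008_size) / fy2008_size
print(f"growth: {growth_pct:.1f}%")  # consistent with the reported 24 percent
```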
While DOD increased the size of its acquisition workforce, we found that it had not reached its targets for 6 of 13 acquisition career fields, including those for 3 priority fields—contracting, engineering, and business. To ensure that DOD has the right people with the right skills to meet future needs, we recommended that DOD complete competency assessments, issue an updated acquisition workforce strategic plan, and issue guidance on prioritizing the use of funding. DOD concurred with our recommendations. We discuss DOD’s efforts to address these recommendations later in this report. DOD, enabled by recent congressional action, has improved its ability to fund DAWDF, which allowed DOD to fund DAWDF in 2 months, compared to the 24 months the credit funding process took in fiscal year 2014. Specifically, in the DOD Appropriations Act for Fiscal Year 2014, Congress authorized DOD to transfer operation and maintenance funds appropriated by the act to DAWDF consistent with section 1705 of title 10 of the U.S. Code, which permits DOD to transfer expired funds for 3 years following their expiration. The operation and maintenance funds appropriated under the act expired at the end of fiscal year 2014, so the transfer authority authorized by Congress and section 1705 of title 10 gives DOD the authority to transfer expired fiscal year 2014 funds to DAWDF in fiscal years 2015 through 2017. Congress subsequently enacted such transfer authority for both fiscal years 2015 and 2016. As a result, DOD is authorized to fund DAWDF by transferring expired fiscal year 2015 funds through fiscal year 2018 and expired fiscal year 2016 funds through fiscal year 2019. Enabled by this authority, the DOD Comptroller funded DAWDF with $477 million of expired funds in one transaction for fiscal year 2015 and $400 million of expired funds in one transaction for fiscal year 2016. The DOD Comptroller then allotted those funds to HCI in a single transaction for each of those fiscal years. 
Our analysis found that it took 2 months, from June 23, 2015, when DOD submitted its written determination of the amount of DAWDF funding required for fiscal year 2015, to August 24, 2015, when the DOD Comptroller transferred the funds into the DAWDF account. HCI officials said that as a result of the ability to transfer expired funds, they were able to distribute, or sub-allot, to components 75 percent of their approved fiscal year 2016 funding before the start of the fiscal year. In contrast, DOD often experienced delays in its previous funding process. Prior to 2015, DOD primarily relied on credits remitted by the DOD components to meet DAWDF funding requirements. To complete this process, the Comptroller calculated each component’s share of the required credit based on the amount specified in the law, offset by the amount of any annual appropriations made for DAWDF. The Comptroller then sent a notice to each component specifying the amount of the credit it was to remit by a specific date. After the components remitted the funds, the Comptroller allotted those funds to HCI, which in turn sub-allotted DAWDF funds to the components based on their approved plans for that year. When DAWDF was first enacted, credits were to be remitted to the fund not later than 30 days after the end of each fiscal quarter. In 2009, Congress amended the DAWDF legislation to require DOD components to remit credit funding not later than 30 days after the end of the first quarter of each fiscal year. However, our analysis found that under the credit funding process, the DOD Comptroller delayed sending out credit remittance notices and allowed components to delay remitting funds to DAWDF. 
In 2012, Comptroller officials said that they generally did not begin the process of collecting and distributing DAWDF funds before DOD received its annual appropriations to minimize the amount of credit funding collected from other DOD programs and that the funds should not be collected until necessary for DAWDF. These officials noted that this was particularly important during a continuing resolution period where DOD’s funding is often limited to the prior year’s appropriation level or less, which puts additional stress on other programs required to contribute funds to DAWDF. As a result, DOD components did not complete remitting credit funds within the time frames required by DOD for any year that the credit funding process was used. For example, the notice for fiscal year 2013 was sent in June 2013 and required components to remit credits by October 2013. However, the remittance process was not completed until September 2014, or 11 months past the required deadline. Similarly, for fiscal year 2014, the remittance process was not completed until May 2016, or 24 months after DOD submitted its written determination of the amount of DAWDF funding required for the fiscal year—the initiation of the funding process. Figure 2 compares the length of time between the initiation of the fiscal year 2014 funding process and the last credit remittance of DAWDF funding, to the fiscal year 2015 time frames for transferring expired funds. Despite the improved timeliness of funding DAWDF by transferring expired funds, DOD experienced a significant increase in the amount of carryover funds by the beginning of fiscal year 2016. Specifically, the carryover balance increased from $129 million as of October 1, 2014, to $875 million as of October 1, 2015, or nearly twice the amount DOD eventually obligated in fiscal year 2016. 
The growth in the amount of carryover was primarily due to the delay in the remittance of $509 million in funding for fiscal year 2014—or about 86 percent of the amount to be credited for that year—until 2015. As a result, about $869 million was deposited into HCI's DAWDF account during fiscal year 2015, while components only obligated $358 million that year. Additional factors also contributed to the large carryover balance:

- According to HCI officials, DOD's requirements were sometimes less than the minimum amount that DOD was required to put into DAWDF. For fiscal year 2014, for example, Congress mandated $800 million in DAWDF funding (which was reduced by the Secretary of Defense to $640 million, as permitted by the law), but the components only planned to obligate $498 million.
- HCI and component officials told us that delays in remittances and additional factors, such as hiring freezes, affected DAWDF execution for several years. Despite having $129 million in carryover funds, HCI instructed DOD components to delay execution of hiring and other planned fiscal year 2015 initiatives. HCI officials told us that because of the uncertainty of when the fiscal year 2014 credits would be remitted, they had to ensure that they had sufficient funds to pay the salaries of the personnel who had been hired in the previous 2 years using DAWDF funds.
- In addition, DOD components did not always obligate all of their DAWDF funding for each fiscal year. For example, for fiscal year 2015, the Defense Contract Management Agency requested $84.4 million in funding for hiring, training, and retention initiatives and was only able to obligate $61.9 million in that year. Similarly, in fiscal year 2015, the Air Force Materiel Command planned to spend $5.7 million in DAWDF funding for recruiting incentives. However, Air Force officials told us that because of delays in the remittance of fiscal year 2014 funds from the components, the Air Force was instructed to delay its hiring plans, which in turn affected the number of personnel available to accept the recruiting incentives offered. Of the $5.7 million approved for recruiting incentives that fiscal year, the command was only able to obligate $1.3 million.

Overall, from fiscal years 2011 through 2016, DOD components obligated between 68 and 92 percent of the amount that HCI approved them to spend (see fig. 3). Congress acted to reduce the carryover balance in the National Defense Authorization Act for Fiscal Year 2017. The act requires DOD, during fiscal year 2017, to transfer $475 million to the Treasury from amounts available from credits to DAWDF. The act also requires DOD to transfer $225 million of the funds required to be credited to DAWDF in fiscal year 2017 to the Rapid Prototyping Fund. When coupled with DOD's fiscal year 2017 spending and funding plans, we estimate that these actions will result in a carryover balance of about $156 million at the beginning of fiscal year 2018 in the DAWDF account, or about 26 percent of DOD's estimated fiscal year 2018 spending (see fig. 4). With the transfer authority enacted by Congress in the DOD Appropriations Act for Fiscal Year 2016, and section 1705 of title 10, DOD is authorized, for example, to transfer operation and maintenance funds appropriated in fiscal year 2016, which expired on September 30, 2016, into DAWDF in fiscal years 2017, 2018, and 2019. If this transfer authority is not renewed to enable DOD to transfer expired funds beyond 2019, DOD stated that it will be required to revert to the credit funding process that it had previously used. As of January 2017, there have been no changes to the guidance or any agreement between HCI and the Comptroller to address the issues we raised in our 2012 report about how to resolve the credit funding delays.
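The 26 percent figure above follows directly from the estimates cited in this report; a minimal check, assuming the $591 million fiscal year 2018 requirement reported later (table 5) as the spending estimate:

```python
# Figures from this report, in millions of dollars.
projected_carryover = 156   # estimated DAWDF balance at the start of FY 2018
estimated_spending = 591    # components' estimated FY 2018 requirement (table 5)

share = projected_carryover / estimated_spending
print(f"{share:.0%}")  # about 26% of estimated FY 2018 spending
```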
During our current review, a Comptroller official reiterated that credit funding came at the expense of programs and activities that had been included in the President's budget submission. We are not making new recommendations to address the funding process and continue to believe that DOD needs to implement the recommendation we made in 2012. Those actions, and the ability to transfer expired funds through fiscal year 2019, will provide DOD the time it needs to assess options, if necessary, to improve the credit funding processes. DOD has taken several actions to improve management and oversight processes for DAWDF over the past year, including issuing an updated acquisition workforce strategic plan and DAWDF operating guidance. DOD's August 2016 DAWDF guidance required components to submit annual and 5-year spending plans and formalized the requirement to hold a midyear review to assess DAWDF execution and discuss best practices. However, additional opportunities exist to better align DOD's strategic plan and DAWDF spending plans, improve consistency in how components are using the fund to pay for personnel to help manage the fund, and improve the quality of data on how the fund is being used. Specifically, DOD's October 2016 strategic plan indicates that the department intends to shift its emphasis from rebuilding the workforce to improving its capabilities. DOD's plan established four goals and related strategic priorities that it intends to use DAWDF to help support. The October 2016 strategic plan, however, does not identify time frames, metrics, or projected budgetary requirements associated with these goals and strategic priorities or clearly prioritize DAWDF funding toward achieving them. DOD components identified more than $3 billion in potential DAWDF funding requirements for fiscal years 2018 through 2022, which are expected to exceed available funding by $500 million over this period.
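The $500 million gap cited above can be reproduced from the report's figures. Note that the $500-million-per-year funding capacity below is an inference from the report's later observation that requirements average about $100 million per year more than DOD officials said they can put into the fund; it is not a number DOD published:

```python
# Rough shortfall arithmetic, in millions of dollars.
estimated_requirements = 3000  # components' FY 2018-2022 estimates ("more than $3 billion")
annual_capacity = 500          # assumed amount DOD can credit to DAWDF per year (inferred)
years = 5                      # fiscal years 2018 through 2022

available = annual_capacity * years          # 2,500 over the period
shortfall = estimated_requirements - available
print(shortfall)  # 500, matching the roughly $500 million gap over the period
```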
Component policies and practices differ on the use of the fund to pay the salaries of staff members who help manage DAWDF and execute DAWDF initiatives. Further, component data we reviewed that were provided to HCI for inclusion in DOD’s DAWDF annual report to Congress and monthly oversight of the fund did not always accurately reflect the results of DAWDF-funded initiatives, which DOD officials attributed to resource constraints and the absence of processes to verify the data collected. In his June 2016 memorandum, the Under Secretary of Defense for Acquisition, Technology and Logistics stated that DOD intends to sustain the acquisition workforce size and continue to improve its professionalism. Similarly, DOD’s October 2016 acquisition workforce strategic plan for fiscal years 2016 through 2021 stated that DOD must sustain the acquisition workforce size, factoring in workload demand and requirements; ensure that its personnel continue to increase their professionalism; and continue to expand talent management programs to include recruitment, hiring, training, development, recognition, and retention incentives by using DAWDF and other appropriate tools. To accomplish this, the strategic plan identified four broad goals—making DOD an employer of choice; shaping the acquisition workforce; improving the quality and professionalism of the acquisition workforce; and improving workforce policies, programs, and processes—and related strategic priorities (see table 4). In 2012, we recommended that DOD clearly align DAWDF’s funding strategy with the department’s strategic human capital plan for the acquisition workforce. DOD concurred and stated that the department would continue to improve alignment of a revised funding strategy so that it supports successful execution of the workforce initiatives. 
However, while DOD's October 2016 strategic plan provides an overall framework for the acquisition workforce and broadly indicates how DAWDF will be used to support these efforts, it does not identify time frames, metrics, or projected budgetary requirements associated with these goals or strategic priorities. As a part of our work on leading practices in strategic workforce planning, we have shown that it is beneficial for agencies to determine the critical skills and competencies their workforces need to achieve current and future agency goals and missions; to identify gaps, including those that training and development strategies can help address; and to develop customized strategies to recruit for highly specialized and hard-to-fill positions. Because the new strategic plan does not provide a clear link between its goals for the acquisition workforce and how DAWDF funds should be used, it is unclear how the department is ensuring that DAWDF targets its most critical workforce needs. HCI and Director, Acquisition Career Management (DACM) officials noted that each military department has prepared or is preparing a workforce plan to help guide its efforts. Further, in our December 2015 report, we recommended that DOD issue an updated workforce plan that included revised career field goals and that HCI issue guidance to the components to focus hiring on priority career fields. DOD agreed that additional guidance was essential to ensure that DOD had the right people with the right skills to meet future needs, but noted that determining which career fields were a priority was most appropriately determined by the components. DOD stated that it would work with the components to issue guidance that would best meet both enterprise and specific component workforce needs.
In that regard, the October 2016 strategic plan reiterated the need to shape the acquisition workforce to achieve current and future acquisition requirements but did not establish specific targets for the acquisition workforce as a whole or targets for specific career fields. HCI officials noted that DOD's objective is to sustain the current level of the acquisition workforce and understand the workload demand. As part of its DAWDF planning process for fiscal year 2017, HCI requested data from the DACMs of each military department on their estimates for future DAWDF hiring through fiscal year 2022. Detailed breakouts by career field were not required. At the component level, we found a range of direction and data on future hiring efforts. For example:

- The Army's fiscal year 2017 memorandum accompanying its call for DAWDF funding requests stated that commands should target hiring requests in the following areas: financial management, cost estimating, contracting, engineering, science and technology, and program management. The Army DACM office provided HCI an estimate of planned hires by career field for fiscal year 2017, which indicated that about 80 percent of the Army's fiscal year 2017 DAWDF hires were planned for the contracting and engineering career fields.
- The Navy's fiscal year 2017 guidance accompanying its call for DAWDF funding requests does not specify which acquisition career fields to target for hiring requests, but Navy DACM officials stated that they do obtain input from the commands regarding their acquisition workforce hiring needs. The Navy indicated that it plans to hire 255 entry-level personnel and an additional 100 in the next 5 years to address attrition in contracting and to hire engineers in new areas such as cybersecurity.
- The Air Force's March 2016 DAWDF guidance highlighted that DAWDF funds would be used to support the program management, contracting, and test and evaluation career fields, among others, but it does not specify critical career fields where DAWDF hiring should be focused. Air Force officials stated that their fiscal year 2017 DAWDF program guidance did not request hiring initiatives, but the Air Force made a separate call for hiring requirements as a part of an overarching Air Force program for force renewal, which would be augmented by DAWDF for acquisition hiring. This separate call did not specify the number of hires by career field.

DOD's October 2016 acquisition workforce strategic plan noted that one of DOD's goals was to shape the acquisition workforce to achieve current and future acquisition requirements. The absence of revised career field goals, coupled with the variation in the details provided by DOD components, underscores the importance of further management attention and guidance in this area, consistent with our December 2015 recommendation. DOD has taken a number of recent actions to mature its management and oversight of the fund, including issuing DAWDF operating guidance in August 2016 and initiating efforts to enhance long-range planning and improve component reporting requirements. HCI officials stated that until recently, HCI did not require DOD components to estimate requirements across the time period covered by the Future Years Defense Program, in part because DOD officials were uncertain whether DAWDF would be permanent. As such, HCI required components to focus their efforts on identifying initiatives that could be funded in the upcoming fiscal year.
Further, HCI and DACM officials noted that because DAWDF was intended to supplement other sources of funding that may already be available, components often used the flexibility provided by DAWDF to address more short-term gaps and emerging needs for training and retention initiatives, which may not lend themselves to long-term strategic planning. For example, in fiscal year 2015, the Defense Acquisition University, the Navy, and the Army each provided cybersecurity-related training using DAWDF, including master’s level college courses in cybersecurity and a Naval Postgraduate School cybersecurity certificate program. In its August 2016 guidance, however, HCI directed each DACM to compile, among other things, annual and 5-year hiring and spending plans. According to this guidance, DOD components are to identify opportunities for using DAWDF and provide funding requests to their DACMs for review and approval. In turn, the guidance requires each DACM to ensure that DAWDF proposals are integrated and coordinated within each component. The timing of this process varies by component, but the acquisition commands we met with start this effort between February and April. HCI typically requests that the components submit upcoming fiscal year requests for review in July and meets with components in August so that plans can be approved by the end of the fiscal year in September. HCI and DACM officials stated that they are working to improve the planning process and to develop better estimates of DAWDF needs. Overall, HCI approved $579 million in fiscal year 2017 DAWDF initiatives, an increase of 20 percent over the $482 million approved for DOD’s fiscal year 2016 initiatives. According to military department DACM officials, the increase includes plans to hire additional personnel in contracting, information technology, and test and evaluation. 
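The 20 percent growth figure cited above can be verified from the two approval totals; a quick calculation (illustrative only):

```python
# DAWDF initiatives approved by HCI, in millions of dollars.
fy2016_approved = 482
fy2017_approved = 579

increase = (fy2017_approved - fy2016_approved) / fy2016_approved
print(f"{increase:.0%}")  # about 20%
```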
To execute its fiscal year 2017 initiatives, DOD expects to use both carryover funds and expired funds that will be available for obligation once they are transferred to DAWDF. Further, DOD components identified more than $3 billion in potential DAWDF funding requirements from fiscal years 2018 through 2022. As submitted, the components' collective annual DAWDF funding requirements over this period ranged from $591 million in fiscal year 2018 to $628 million by fiscal year 2022 (see table 5). Of the more than $3 billion in estimated funding requirements for fiscal years 2018 through 2022, DOD components reported that they planned to request about $1.2 billion—or about 41 percent—to hire more than 6,000 new acquisition personnel. Components also reported they plan to request about $1.4 billion—or about 46 percent—for training for the acquisition workforce, developing new talent, and targeting competency gaps, while another $258 million—or 8 percent—would be requested for retention and recognition. As reflected in Table 5, above, the components' collective estimated annual DAWDF funding requirements exceed $500 million in each of fiscal years 2018 through 2022. HCI and the components will need to prioritize funding requests since estimated funding requirements may exceed available DAWDF funding over this period. HCI, DACM, and acquisition command officials noted that providing management and oversight is complicated by differing views over whether DAWDF funds can be used to pay for management personnel. For example, officials at the Naval Sea Systems Command—which has more than 18,000 acquisition workforce personnel—told us it has one full-time DOD civilian who is responsible for managing DAWDF and overseeing its initiatives. Command officials told us that they use their operation and maintenance budget to pay this DAWDF fund manager.
We identified differences of opinion by HCI, DACM, and acquisition command officials on whether their offices could use DAWDF to help pay for personnel to manage the fund and under what circumstances. For example:

- HCI officials said that their office does not use DAWDF funding to pay for personnel to manage DAWDF. HCI's August 2016 guidance indicates that DAWDF can be used to hire interns, entry-level personnel, journeymen, experts, and highly qualified experts assigned to an acquisition career field. The guidance prohibits using DAWDF to pay the base salary of any person who was a DOD acquisition workforce member as of January 28, 2008, and who has continued in the employment of the department since such time without a break in such employment of more than 1 year.
- The Air Force DACM approved the use of DAWDF to pay the salaries of at least 12 civilian acquisition and nonacquisition workforce personnel to manage DAWDF initiatives in hiring and training, as well as to manage DAWDF itself. These personnel were located within the Air Force Personnel Center, the Air Force Institute of Technology, and the Air Force Materiel Command. The Air Force's March 2016 guidance specifically permits using DAWDF to pay for personnel to support and execute DAWDF initiatives. The guidance does not specify whether those personnel must be acquisition workforce members.
- Naval Sea Systems Command officials told us that they believed that DAWDF could not be used to hire any personnel to help manage DAWDF. However, the Navy DACM told us that the Navy as a whole had approximately 16 full-time equivalents supporting the management and execution of DAWDF. Five of these 16 positions were funded by DAWDF and were not acquisition coded.
- The Army Contracting Command received approval from the Army DACM to use DAWDF to pay for a DAWDF fund manager, which the Army identified as an acquisition-coded position. Army DACM officials told us that they do not believe that DAWDF funds can be used to pay for DAWDF personnel to manage the fund unless they are in acquisition-coded positions. The Army's October 2016 guidance specifies that DAWDF may be used for new hires placed in acquisition-coded positions.
- Defense Logistics Agency officials told us that they believed they were not allowed to use DAWDF to pay the salaries of any personnel responsible for managing DAWDF. As a result, the Defense Logistics Agency uses its regular budget to pay the salary of the person responsible for overseeing its DAWDF initiatives.

Federal internal control standards indicate that sufficient management personnel are needed to oversee federal programs and that agencies need clear and consistent policies and procedures to support accomplishment of agency objectives consistently. HCI's August 2016 guidance, however, did not clearly indicate whether DOD components could use DAWDF to pay the salaries of personnel to manage DAWDF and under what conditions, while the guidance at the military departments is not consistent on the issue. Without additional clarification on whether DAWDF funds may be used to pay for personnel to manage DAWDF, and under what conditions, DOD components will continue to be at risk of not using DAWDF funding consistently, or, if DAWDF can be used to help manage and oversee the fund, potentially missing opportunities to enhance management and oversight. DOD's August 2016 guidance identifies several new and maturing processes HCI will use to improve DOD's management and oversight of DAWDF. For example, in addition to the requirement for the DOD components to submit annual and 5-year spending plans, DOD's August 2016 guidance formalized the requirement to hold a midyear review to assess DAWDF execution and discuss best practices, among other issues, as a part of HCI's management and oversight of the fund.
HCI conducted midyear reviews in 2015 and 2016 and believes they were beneficial. Building on the midyear review, the August 2016 guidance also includes a new requirement for all DAWDF users to submit annual year in review reports beginning in 2016. Required data include a summary of the implementation of DAWDF initiatives in a standardized format, including details on hiring—by career field—and training, recruiting, and retention initiatives. According to HCI officials, these data will be used to compile the DAWDF annual report to Congress and provide more detailed and consistent information on the execution of the fund. HCI officials also hold monthly teleconferences with components to discuss funding requests and execution. Nevertheless, HCI and component officials we spoke with acknowledged shortcomings in how they collected and reported data on DAWDF-funded initiatives, citing resource constraints and the absence of processes to verify the data collected. As part of our review of fiscal year 2015 DAWDF initiatives, we found that officials managing DAWDF did not have complete and accurate data on DAWDF-funded initiatives to meet reporting requirements and oversee the fund. To help meet congressional reporting requirements and assess fund execution, HCI requested that DOD components submit highlights of their DAWDF accomplishments for the year for inclusion in DOD's annual report to Congress. However, we found that the components did not collect complete and accurate information on their efforts, and these gaps were at times reflected in DOD's report to Congress. For example, DOD's fiscal year 2015 report to Congress highlighted that DAWDF funded a total of 287 student loan repayments, but the Army alone provided us documentation that it awarded student loan repayments to 762 recipients that year using DAWDF. Further, HCI requires DOD components to submit a monthly report to track program execution status.
This report is intended to capture the monthly spending plan and execution against that plan, hiring data, and accomplishments associated with training and other incentives. However, DOD components did not always submit monthly reports. For example, of the 20 components that obligated DAWDF funds in fiscal year 2015, only 7 components provided HCI monthly reports in September 2015. HCI stated that this was because key DAWDF personnel transitioned to different jobs during the September and October 2015 time frame. We also found that some information provided by the components to HCI as a part of their monthly reporting requirements was either incomplete or inaccurate. For example:

- The Army did not report any tuition assistance recipients to HCI at the end of fiscal year 2015, but the Army Materiel Command provided documentation showing that it provided DAWDF-funded tuition assistance to 233 acquisition workforce personnel. Army officials explained that the discrepancy was because the acquisition personnel who received DAWDF-funded tuition assistance were reported under a different category.
- Similarly, the Air Force DACM reported to HCI that the Air Force used DAWDF to help provide student loan repayment benefits to 8 personnel in fiscal year 2015. The Air Force Materiel Command told us that there were 32 recipients in the same year, but our analysis indicated that the actual number was 40.

While the actions taken to improve management and oversight processes, if fully implemented, can help address the issues we identified during our review of fiscal year 2015 initiatives, it is not clear that these new processes include specific steps to verify the data that are collected and reported. Federal internal control standards state that programs need accurate data to determine whether they are meeting their agencies' strategic and annual performance plans and meeting their goals for accountability for effective and efficient use of resources.
To meet this standard, programs require procedures to verify that required data are complete and accurate. Without taking actions to ensure that the data reported are complete and accurate, HCI and DOD components increase their risk that they will not be able to determine whether they are meeting their goals or provide accurate information for DOD’s annual DAWDF reports to Congress. DOD’s use of DAWDF is at a critical juncture, in which it will no longer use the fund to grow the workforce but rather to sustain and build on the progress made over the past 9 years. Recent congressional actions have provided more stability in the level of funding to be credited to DAWDF, authorized the transfer of expired funds to DAWDF through fiscal year 2019, and addressed the carryover of unobligated DAWDF funds. Taken as a whole, these actions should facilitate DOD’s efforts to manage DAWDF but also require that DOD take greater initiative to maximize the opportunities these changes provide. DOD’s October 2016 strategic plan provides an overall framework for the acquisition workforce and broadly indicates how DAWDF will be used to support these efforts, but it does not identify time frames, metrics, or projected budgetary requirements associated with these goals or strategic priorities. Further, the components’ future DAWDF funding requirements average more than $600 million a year through fiscal year 2022—or $100 million more per year than DOD officials told us that they can put into DAWDF for a fiscal year. Clearly aligning DAWDF funding with DOD’s strategic plan—as we recommended in 2012—may help DOD determine how to prioritize component spending plans. At the tactical level, our work found that DOD components’ guidance, practices, and views on whether they could use DAWDF to pay for personnel to help manage the fund varied. 
Our work also found that components collected and reported data to HCI on DAWDF-funded initiatives that had not been verified, attributable in their view to resource constraints and the absence of processes to ensure the accuracy and completeness of the data. Addressing these issues in a timely fashion is necessary for sound management of the fund and is consistent with federal internal control standards. We recommend that the Director of Human Capital Initiatives take the following two actions:

- Clarify whether and under what conditions DAWDF funds could be used to pay for personnel to help manage the fund.
- In collaboration with cognizant officials within DOD components, ensure that components have processes in place to verify the accuracy and completeness of data on the execution of initiatives funded by DAWDF.

We provided a draft of this report to DOD for comment. In its comments, reproduced in appendix II, DOD partially concurred with both of the recommendations, and indicated actions that will be or have been taken to address them. DOD also provided technical comments, which we incorporated as appropriate. In response to our recommendation that DOD clarify whether DAWDF funds could be used to pay for personnel to help manage the fund, DOD stated that the next release of the DAWDF Desk Operating Guide would provide the recommended clarity. In response to our recommendation that DOD ensure that processes are in place to verify the accuracy and completeness of data on the execution of DAWDF initiatives, DOD noted that it had made significant management and other changes to improve the accuracy and completeness of data used and provided by components on the execution of initiatives funded by DAWDF. DOD noted that it had, among other actions, assigned a full-time DAWDF program manager; issued guidance to improve data validity, consistency, and alignment; and instituted a midyear execution review and established a requirement for a data-driven year in review.
Several of these changes were made or were in process in 2016, which we identified in our draft report. If these management and policy changes are effectively translated into practice, we believe these actions will address the intent of the recommendation. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Air Force, and the Navy; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Under Secretary of Defense (Comptroller) and Financial Management; and the Director of Human Capital Initiatives. In addition, the report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report examines (1) the process the Department of Defense (DOD) uses to fund the Defense Acquisition Workforce Development Fund (DAWDF) and (2) DOD’s management and oversight of DAWDF initiatives. To conduct our work, we selected the acquisition command within each military department that had the largest number of acquisition workforce personnel in fiscal year 2015: Department of the Army, Army Materiel Command; Department of the Navy, Naval Sea Systems Command; and Department of the Air Force, Air Force Materiel Command. We also selected the Defense Logistics Agency, which had the second largest number of acquisition personnel of the other defense agencies that obligated DAWDF funds in fiscal year 2015. Collectively, the three military departments and the Defense Logistics Agency comprised 88 percent of DOD’s acquisition workforce and received the majority of DAWDF funds in fiscal year 2015. 
To examine the process DOD uses to fund DAWDF, we reviewed relevant legislation as well as DOD-wide and component guidance on the use of DAWDF funding. We analyzed the amount of carryover and estimated carryover funds from fiscal years 2008 through 2018. We reviewed key documents, including DOD funding authorization documents and DOD’s annual reports to Congress on DAWDF from fiscal years 2008 through 2015. We assessed the timeliness of DOD’s funding process by comparing data on key points in the funding process, including when the DOD Comptroller deposited funds into the DAWDF account, and analyzed documentation showing when the funds were allotted and obligated from fiscal years 2008 through 2016. To evaluate DOD’s DAWDF management and oversight processes, we took several steps. We reviewed relevant legislation, DOD’s 2010 DAWDF guidance for components and its August 2016 DAWDF Desk Operating Guide, which includes information on the annual planning, proposal, review, approval, and funding processes; we also reviewed guidance issued by each of the military departments. We also assessed the Department of Defense Acquisition Workforce Strategic Plan, FY 2016 – FY 2021, which was completed in October 2016. We analyzed DAWDF future spending estimates from fiscal year 2017 through fiscal year 2022 submitted to the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics - Human Capital Initiatives (HCI) by each of the military departments and other defense agencies. In addition, we analyzed monthly DAWDF spending reports from fiscal year 2015, DAWDF midyear review documentation from fiscal years 2015 and 2016, and briefing materials from DAWDF governance meetings from fiscal years 2015 and 2016. We also interviewed officials from HCI, the offices of the Directors for Acquisition Career Management from each military department, and acquisition command officials about DOD’s long-term strategic planning efforts related to DAWDF. 
Further, we used Standards for Internal Control in the Federal Government to identify criteria regarding the types of control activities that should be in place to verify data. These criteria include top-level reviews of actual performance, reviews by management at the functional or activity level, establishment and review of performance measures and indicators, proper execution of transactions, and other steps to ensure the completeness, accuracy, and validity of reported data. To evaluate DOD’s DAWDF program execution and adherence to reporting requirements, we compared DAWDF data submitted by DOD components at the end of fiscal year 2015 with data obtained from officials responsible for executing DAWDF initiatives, HCI’s monthly reporting requirements for DAWDF, and data reported in DOD’s DAWDF fiscal year 2015 annual report to Congress. In addition, we spoke with HCI and component officials about the quality of the data. We describe instances of incomplete and inaccurate data where appropriate in our report. To obtain an understanding of how the planning, review, and implementation processes for DAWDF initiatives worked, we selected a nongeneralizable sample of 10 fiscal year 2015 DAWDF initiatives. Our sample included 3 initiatives from each department—1 from each of the three major initiative categories—that were among initiatives with the highest dollar values. In addition, we selected 1 initiative from the Defense Logistics Agency. (See table 6.) For these initiatives, we collected and reviewed relevant documentation and data and interviewed cognizant component officials. To verify whether military department recipients of DAWDF-funded tuition assistance and student loan repayment were members of the defense acquisition workforce in the year that they received the benefit, we selected a nongeneralizable sample of 276 recipients across the military departments for fiscal year 2015 from the lists of recipients provided by the military departments. 
Because the programs are managed separately by each military department, we selected student loan repayment program recipients and tuition assistance recipients from each of the military departments (see table 7). Because we used a nongeneralizable sample, our findings cannot be used to make inferences about all DAWDF recipients. To determine whether these recipients were members of the acquisition workforce, we verified that the recipients were in DataMart, DOD’s acquisition workforce database, the year that they received the benefit. To assess the reliability of DOD’s DataMart data, we (1) reviewed existing information about the data and the system that produced them, (2) reviewed the data for obvious errors in accuracy and completeness, and (3) worked with agency officials to identify any data problems. When we found discrepancies, we brought them to DOD’s attention and worked with DOD officials to correct the discrepancies. For example, in those instances where we could not verify a name in DataMart, we contacted military department officials to obtain additional information that allowed us to confirm that those recipients were a part of the acquisition workforce. We also interviewed agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. 
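The verification step described above (checking each sampled recipient against the acquisition workforce database for the year the benefit was received, then following up on names that cannot be matched) can be sketched in a few lines of Python. This is an illustrative sketch only, not GAO's actual tooling: the function name, the sample data, and the dictionary standing in for DOD's DataMart database are all hypothetical.

```python
# Illustrative sketch of the recipient-verification step; names and data
# are hypothetical, and the dict merely stands in for the DataMart database.

def verify_recipients(recipients, workforce_by_year):
    """Split sampled benefit recipients into verified names and discrepancies.

    recipients: list of (name, benefit_year) tuples drawn from the
        military departments' recipient lists.
    workforce_by_year: dict mapping a fiscal year to the set of names
        recorded in the acquisition workforce database for that year.
    """
    verified, discrepancies = [], []
    for name, year in recipients:
        if name in workforce_by_year.get(year, set()):
            verified.append((name, year))
        else:
            # Unmatched names are flagged for follow-up with department
            # officials, as the methodology describes.
            discrepancies.append((name, year))
    return verified, discrepancies

# Hypothetical sample of fiscal year 2015 tuition assistance recipients.
sample = [("A. Nguyen", 2015), ("B. Ortiz", 2015), ("C. Reed", 2015)]
roster = {2015: {"A. Nguyen", "C. Reed"}}
ok, flagged = verify_recipients(sample, roster)
```

In this sketch, a flagged name does not by itself mean the recipient was ineligible; as the methodology notes, discrepancies were resolved by obtaining additional information from military department officials.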
To address both objectives, we interviewed representatives from the following DOD organizations during our review:

Office of the Secretary of Defense
- Office of the Under Secretary of Defense for Acquisition, Technology and Logistics
- Office of the Under Secretary of Defense (Comptroller) and Chief Financial Officer

Defense Finance and Accounting Service

Office of the Joint Chiefs of Staff (J-4)

Department of the Army
- Director, Acquisition Career Management
- Army Acquisition Support Center
- Research, Development and Engineering Command

Department of the Navy
- Director, Acquisition Career Management
- Naval Sea Systems Command

Department of the Air Force
- Director, Acquisition Career Management
- Air Force Materiel Command

4th Estate
- Director, Acquisition Career Management

We conducted this performance audit from March 2016 to March 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Cheryl Andrew (Assistant Director), James D. Ashley, Emily Bond, Lorraine Ettaro, Meafelia P. Gusukuma, Kristine Hassinger, Katheryn Hubbell, Heather B. Miller, Roger R. Stoltz, Roxanna Sun, Alyssa Weir, Nell Williams, and Lauren Wright made key contributions to this report.
Congress established DAWDF in 2008 to provide DOD with a dedicated source of funding to help recruit and train members of the acquisition workforce. Since 2008, DOD has obligated more than $3.5 billion to meet those objectives. However, in 2012, GAO reported that DOD's ability to execute hiring and other initiatives had been hindered by delays in the DAWDF funding process, resulting in a large amount of unused funds being carried over from year to year. GAO was asked to review DOD's management of DAWDF. This report examines (1) the process DOD uses to fund DAWDF and (2) DOD's DAWDF management and oversight. GAO analyzed relevant legislation; DOD's, the military departments', and other defense agencies' guidance and processes; and DAWDF budget and initiative execution data. GAO also interviewed DOD officials and reviewed a nongeneralizable sample of 10 fiscal year 2015 DAWDF initiatives, selected based on type of initiative and dollar value.

The Department of Defense (DOD), enabled by congressional action, has improved the timeliness of the funding process for the Defense Acquisition Workforce Development Fund (DAWDF). For fiscal year 2015, DOD was authorized to transfer expired funds, which allowed it to fund DAWDF in 2 months. In contrast, for fiscal year 2014, DOD relied on the military departments and other defense agencies (referred to as components) to remit funds to the DOD Comptroller, which took 24 months to complete. As a result, hiring, training, and other initiatives were delayed. Congress also took action in 2016 to reduce the amount of funding carried over from year to year, which totaled $875 million at the beginning of fiscal year 2016, or nearly twice the amount DOD eventually obligated for that year (see figure). GAO estimates that the amount of carryover funds at the beginning of fiscal year 2018 will be reduced to about $156 million.
In the past year, DOD has taken several actions to improve its management and oversight of DAWDF, including issuing an updated acquisition workforce strategic plan and DAWDF operating guidance. For example, DOD's August 2016 DAWDF guidance required components to submit annual and 5-year spending plans and formalized the requirement to hold a midyear review to assess DAWDF execution and discuss best practices. However, GAO found that DOD components identified more than $3 billion in potential DAWDF funding requirements for fiscal years 2018 through 2022, which may exceed available funding over this period. Clearly aligning DAWDF funding with DOD's strategic plan—as GAO recommended in June 2012—may help DOD determine how to prioritize these requirements. GAO also found that components' guidance, practices, and views on whether they could use DAWDF to pay for personnel to manage the fund varied. Further, GAO found components did not have processes to verify the accuracy and completeness of data reported on DAWDF-funded initiatives. Internal control standards indicate that consistent policies and accurate data can help ensure that funds are used effectively and as intended. Without such controls, DOD could be missing opportunities to use DAWDF more effectively to improve its acquisition workforce. DOD should (1) clarify whether DAWDF funds could be used to pay for personnel to help manage the fund and (2) ensure that DOD components have processes in place to verify the accuracy and completeness of DAWDF data. DOD partially concurred with both recommendations, and has taken or plans to take actions to address them.
For several years after the fall of the South Vietnamese government in 1975, countries in Southeast Asia agreed to grant temporary asylum to the thousands of people who fled Vietnam. By the late 1980s, however, the rate of resettlement was far less than the huge and growing influx of asylum seekers from Vietnam. In response, the CPA was developed and adopted by 75 countries in June 1989 to address the Vietnamese boat people problem. It required anyone who arrived in first asylum countries after March 1989 to undergo a formal refugee status determination and demonstrate they had a well-founded fear of persecution for reasons of race, religion, nationality, membership of a particular social group, or political opinion according to internationally recognized refugee standards and criteria. The agreement called for the establishment of a consistent regionwide refugee status determination process and reaffirmed that the first asylum countries were responsible for determining who qualified as a refugee. UNHCR’s role under the CPA was to help the first asylum countries develop screening procedures that were consistent with international norms and to monitor the implementation of the program. It was responsible for training first asylum country officials involved in the screening process, coordinating the timely resettlement of those determined to be refugees, and administering a safe repatriation program for nonrefugees. In addition, UNHCR was required to review and assess the CPA’s implementation and consider additional measures to improve the effectiveness of the program. UNHCR also has independent authority under its charter to formally recognize or “mandate” cases it believes deserve refugee status and, within the context of the CPA program, to reconsider the claims of rejected asylum seekers. This authority provided a third layer of review in many situations.
Since the adoption of the CPA, more than 120,000 Vietnamese asylum seekers have been screened for refugee status. Of those asylum seekers screened, close to 33,000 were determined to be refugees and resettled in third countries, including some 12,900 individuals who came to the United States. The screening of cases generally concluded in the region in 1994, and many of those who were determined not to be refugees (“screened out”) returned voluntarily to Vietnam. However, in early 1995, close to 40,000 screened-out asylum seekers remained in the camps. From early on in the screening process, outside advocacy groups and representatives from the U.S. Congress and other interested countries raised concerns about the integrity and fairness of the screening process and possible corruption that may have occurred in certain circumstances. These concerns intensified as the screening process came to a close and attention was focused on the screened-out asylum seekers who remained in the first asylum country camps. Some of the issues raised by refugee advocates included asylum seekers not knowing how to present their cases; information being distorted because of poor translation by interpreters; screening officials conducting incomplete interviews; reasons for denial not being provided to asylum seekers, thereby preventing them from preparing adequate appeals; and screening officials not having access to accurate information about country conditions in Vietnam. Refugee advocates also alleged that corruption in some countries resulted in asylum seekers with strong cases being screened out for failing to pay bribes or consenting to sexual demands by screening officials. The pressure from outside sources led UNHCR to further review some selected cases and investigate the allegations of corruption in the screening process. 
UNHCR acknowledged that problems existed in the screening process, particularly in the early stages of screening with the use of unqualified interpreters, delays in the processing of refugee status decisions, and the lack of legal assistance on appeals in some cases. It also concluded that corruption cannot be ruled out and that, at least in Indonesia and the Philippines, the impact of corruption was likely to have resulted in some weaker cases being screened in by the first asylum government officials. A specific tenet of the CPA was the “need to respect the family unit.” According to the UNHCR Handbook, if the head of a family is granted refugee status, then his or her spouse and members of the immediate family are also normally granted refugee status to maintain the family unit. Children who are minors are generally considered to be part of the immediate family; others, such as aged parents, may be included if they are living in the same household and dependency can be established. Under the principle of family unity, family members do not have to establish an independent well-founded fear of persecution; rather, refugee status is based on their family connection to a refugee. Adult children who are not dependent on their parents are not eligible for family unity consideration and would undergo separate screening under the CPA. Throughout the implementation of the CPA, UNHCR received numerous requests to consider cases under the principle of family unity. These involved individuals who claimed to have been separated from family members by the refugee screening process. Several included marriages that were not known or accepted as legitimate marriages by the first asylum countries or UNHCR; others involved children, siblings, or other relatives who claimed to have family linkages to individuals who had already resettled in a third country.
Recognizing that some families may have been inadvertently split by the screening process, UNHCR undertook a broad regionwide review of screened-out cases in late 1994. By using its mandate authority, UNHCR could provide a means for screened-out family members to join immediate family members currently resettled elsewhere. UNHCR used the following criteria to review cases that might not have been assessed fairly on the basis of family unity during the regular refugee status determination process: Minors and dependent children were to be reunited with parents. Minors were defined as being younger than 18 at the time of the UNHCR review and dependency was based on the “totality of needs and relations.” Nonminor children and siblings who were not dependents were not considered for family reunification. Marriages predating the determination of refugee status were recognized regardless of whether couples had any children. Marriages postdating the refugee status decision would generally not be accepted unless they were proven to be “bona fide” and there was evidence of a “long-standing, pre-existing” relationship (the existence of children was proof). Obstacles to marriage were to be considered, such as difficulties in obtaining divorce papers from Vietnam, the asylum country not allowing formal marriages to take place (as was the case in Indonesia, for example), or a couple being underage. A key objective of the family unity initiative was to recognize legitimate marriages and relationships. UNHCR rejected marriages of convenience and other relationships that did not involve immediate family members or dependents. Many of the cases were relatively straightforward; however, several involved complicated family relationships that were difficult to resolve.
Relationships that were not split as a result of the status determination process were also excluded from consideration, such as those involving family members who were resettled from a first asylum country prior to the CPA or from Vietnam directly through the Orderly Departure Program (ODP). UNHCR encouraged family relatives who did not qualify for reunification to return to Vietnam and use alternative migration opportunities, such as ODP. UNHCR’s family unity review began in Malaysia and was adopted shortly afterwards by other UNHCR field offices in the region. According to UNHCR officials, the resettlement countries and other CPA member countries initially criticized the initiative. The first asylum countries believed that family unity considerations had already been addressed during the regular refugee status determination process and that further review of cases would jeopardize an orderly conclusion to the CPA program. Resettlement countries believed that family unity considerations were more properly effected through their own established migration programs, such as ODP. In Malaysia, Indonesia, and the Philippines, UNHCR reviewed hundreds of cases under its family reunification initiative. Most were rejected for failing to meet UNHCR’s established criteria, but UNHCR believed a small number had valid claims and forwarded them to various resettlement countries for consideration. UNHCR initially forwarded cases without a declaration of mandate because it wanted some assurance that cases would be accepted for resettlement. UNHCR wanted to avoid having screened-in individuals who might have no resettlement option and, because of their mandate status, no means to be repatriated either. This was a concern because individual resettlement countries’ criteria for family reunification could differ from UNHCR’s criteria. In Malaysia, there was strong support from the Malaysian government and the U.S. embassy to resolve family unity cases. 
UNHCR officials identified and forwarded 36 cases in early 1995 to the U.S. embassy for consideration. The U.S. embassy in Kuala Lumpur agreed to review the cases informally and provide UNHCR with an indication of whether the cases might qualify for resettlement. As a result of this, the United States accepted 23 cases involving 35 persons. The cases were subsequently mandated by UNHCR and resettled in the United States. In Indonesia and the Philippines, UNHCR also identified several cases that met its family unity criteria and submitted these cases to the resettlement countries for informal review. With respect to U.S.-related cases, UNHCR forwarded 13 cases from Indonesia and 23 cases from the Philippines to the respective U.S. embassies in late 1995. In contrast to the situation in Malaysia, however, there was no progress in resolving these cases for resettlement because the U.S. embassies took no action on the cases. U.S. embassy officials did not informally review cases and took the position that there could be no review or implied guarantee of resettlement without a UNHCR mandate. However, UNHCR officials did not want to issue a mandate without a clear indication that the cases would be accepted for resettlement. The impasse over the family unity cases in Indonesia and the Philippines continued from late 1995 through April 1996, when the U.S. Department of State issued written guidance to the embassies. The guidelines indicated that cases should not be reviewed unless they were mandated by UNHCR. Even then, there would be no guarantee of resettlement until U.S. Immigration and Naturalization Service officials conducted an interview and then determined a case met U.S. immigration criteria. U.S. family unity criteria in some respects are more stringent than UNHCR criteria. 
According to Department of State guidelines, for example, spouses would only be considered eligible if “the marriage was legally established before release of the refugee screening result, the marriage is legally recognized in the country in which it took place, and there is clear evidence that the marriage is genuine.” These criteria effectively excluded marriages that occurred after a refugee status determination, even if there was evidence of a long-standing, preexisting relationship or common law marriage that occurred in countries such as Indonesia that did not recognize a marriage between asylum seekers. In Malaysia, cases similar to those that were submitted and approved by the U.S. embassy in early 1995 were rejected under the April 1996 guidelines. UNHCR officials in Indonesia and the Philippines effectively stopped submitting cases for consideration to the U.S. embassies due to the lack of response from the United States to review cases informally prior to a declaration of mandate status. As efforts to close the camps increased after the March 1996 announcement by the CPA countries, UNHCR encouraged all individuals, including those considered for family reunification, to voluntarily return to Vietnam. According to Department of State officials, the April 1996 guidelines did not change U.S. policy but clarified the U.S. position on UNHCR mandates and the application of U.S. family unity criteria. The officials noted that this guidance had not previously been communicated formally to the embassies and that the embassies’ refugee officers had some discretion to work independently on CPA family unity issues. We reviewed 86 family unity cases in Indonesia, Malaysia, and the Philippines. UNHCR had generally assessed the cases in accordance with its established criteria and procedures, although there appeared to be discrepancies in the way some cases were resolved. 
UNHCR relied heavily on the biographical information collected from asylum seekers prior to the screening interviews. This information provided the names, relationships, dates of birth, and places of residence of the family members of each asylum seeker. Asylum seekers were also encouraged to inform UNHCR of any changes or updates to this information over time. In assessing requests for family reunification, UNHCR often interviewed asylum seekers and contacted the resettlement countries to obtain supporting information. While this information was for the most part comprehensive, we found that in some cases it was incomplete or was not updated when a marriage or birth of a child occurred. Almost all of the asylum seekers whose cases we reviewed had ties to relatives in the United States, but most did not meet UNHCR criteria. The main reasons included (1) post-refugee status determination marriages lacked evidence of a long-standing relationship or of any obstacles that prevented a marriage from occurring prior to the refugee screening, (2) children who were nonminors sought reunification with parents or siblings, and (3) family members were linked to ODP cases that were not split as a result of the refugee status determination process. While most post-refugee status determination relationships were rejected, UNHCR did deviate from its fairly consistent application of the criteria to support a few cases. In one case in Indonesia, UNHCR approved a family unity claim after examining extensive correspondence between the asylum seekers and their respective families, which indicated that the marriage was recognized by the families in Vietnam through a formal ceremony prior to the refugee status determination.
In another case in the Philippines, UNHCR supported a couple seeking family reunification because written affidavits from third parties attested to the long-term relationship of the couple as well as long explanations by both spouses about their delay in getting married. As a rule, UNHCR rejected petitions to reunite either adult children with their resettled parents or individuals with family members who resettled through ODP, but it made some exceptions for compelling humanitarian reasons. One case in Indonesia, for example, involved an adult daughter seeking reunification with parents who were critically ill. UNHCR approved the case based on humanitarian concerns. UNHCR rejected reunification claims involving family members who left Vietnam under ODP because such cases, according to UNHCR, were not split as a result of the CPA refugee screening process. However, UNHCR in Malaysia did support several ODP-linked cases in which no more family members were in Vietnam. Similar cases in Indonesia and the Philippines, though, were generally not recognized by UNHCR. A case in the Philippines, for example, involved a 16-year-old unaccompanied minor who was assessed under UNHCR’s special procedures process. UNHCR determined that the best support structure for the child existed in Vietnam where the mother resided. Subsequently, however, the mother, who was the only immediate family to the applicant, migrated to the United States under ODP. When the case was reviewed again under UNHCR’s family unity exercise, the situation with the applicant’s mother was not an overriding factor and the applicant was considered to have “aged-out” as a minor and was rejected as an adult. In situations involving siblings, a few cases were screened differently. In one case in Indonesia, four siblings (including a minor) arrived together at the first asylum camp. Each sibling was screened separately and all except one were recognized as refugees.
Upon appeal, the review committee used the principle of family unity to reverse the first instance decision and grant refugee status to the remaining sibling. In another case involving a minor and two siblings, each was screened separately. While the minor was granted refugee status under the special procedures process, the two adult siblings who accompanied him were rejected. In a few cases, we had information (provided by your office) supporting a claim for family unity that UNHCR did not have in its files. In the Philippines, for example, UNHCR rejected a post-refugee status determination marriage where no evidence of a genuine relationship was presented. After we presented a copy of a birth certificate of a child born to the couple, UNHCR officials indicated that based upon this new information, the case probably would have been forwarded to the U.S. embassy for consideration as part of the family unity exercise. The case, however, would probably not have been resettled since the U.S. embassy did not respond to the other cases forwarded for review by UNHCR. In several other cases, UNHCR had information that was not in the case file information we had received through your office. We reviewed several cases where one of the parties to a family unity claim had a preexisting marriage or had established his or her refugee status through a marriage to a different spouse. Some of the cases were extremely difficult to sort out due to the multiple relationships that were involved, linking partners in Vietnam, the first asylum camp, and the United States. It was not unusual to have a situation, for example, of a couple forming a relationship in a first asylum camp while each still had a prior spouse in Vietnam. Subsequently, one partner would be screened in to resettle with his or her spouse who immigrated to the United States through ODP. The partner then divorced the first spouse and sought reunification with the other partner still in the first asylum camp. 
Victims of violence is a broad term used to describe cases of individuals who asserted they had experienced traumatic or violent incidents en route to or in first asylum countries. Though the full scope is unknown, many Vietnamese boat people came under attack from pirates who were in most cases opportunistic fishermen who viewed the fleeing Vietnamese with their life possessions as easy targets of opportunity. Many individuals reportedly perished during these attacks. Women and young girls were especially vulnerable to sexual assault and rape. Other reported incidents of violence occurred at islands in the South China Sea, such as Terempa and Kuku. Some asylum seekers who landed on the islands in search of temporary refuge experienced rape, robbery, and beatings at the hands of soldiers and gangs of fishermen who sometimes congregated there. In other cases, boats were reportedly towed to the islands for the express purpose of victimizing the asylum seekers. Some asylum seekers endured multiple attacks and rapes during their escape attempt. UNHCR first developed guidelines for handling survivors of violence cases as an internal memorandum in June 1990 and formalized them in its November 1992 “Guidelines on Special Procedures under the Comprehensive Plan of Action.” These two documents outlined the criteria and rationale for including victim of violence cases in a process known as “Special Procedures.” Special Procedures was designed as a separate process to deal with unaccompanied minors and other vulnerable persons such as victims of violence. The standard for determining whether asylum seekers who had experienced violence should have been handled under Special Procedures was “the effect on their ability to understand persecution or articulate a well-founded fear of persecution more than the disability per se . . . 
.” It was recognized that individuals who were victims of violence may have been severely traumatized and unable to comprehend the screening process or articulate their claim to refugee status. In such cases, it would have been inappropriate, if not impractical, to subject individuals to the rigors of the screening process. An important principle underlying the establishment and implementation of Special Procedures is the assessment of “best interest” of persons who are vulnerable and of humanitarian concern. The best interest determination was to be made on the basis of information derived from circumstances or conditions generally beyond what would necessarily be considered in determining refugee status. In determining a durable solution in the best interest of a vulnerable person, all circumstances, including events occurring en route to or in a first asylum country, particularly piracy attacks, were to be considered relevant and taken into consideration. When asylum seekers arrived at a first asylum camp and identified themselves as victims of violence, or in cases where UNHCR initiated the identification of the victim, a UNHCR social service counselor would first examine the individuals to determine whether they could articulate their claim to refugee status. If they could, they would go through the usual refugee status determination procedure. If they could not, due to the traumatizing nature of the experience, the Special Procedures process would be used. Under Special Procedures, the question of a person’s possible refugee status was dealt with first. According to UNHCR, refugee status under Special Procedures was evaluated in a supportive environment that specifically considered a person’s difficulty in articulating his or her case. A person determined to be a refugee would be resettled. If a person was determined not to be a refugee, the best interest test was applied. 
The Special Procedures process was implemented by a Special Procedures Committee whose membership varied from country to country, but usually involved individuals from UNHCR’s implementing partners who possessed either a social service or status determination background. In Malaysia, for example, the Special Procedures Committee was variously composed of officials from the Red Cross and Red Crescent Societies, social counselors on loan from the Jesuit Refugee Service, UNHCR, and a private practice Malaysian psychiatrist. The role of the Special Procedures Committee was to determine where the best support structure resided to help individuals recover from their traumatic experience. In some instances, resettlement with family members in third countries was the best solution. However, according to UNHCR officials, the generally preferred solution, in keeping with social welfare principles, was to reunite the individual with family members in Vietnam. If asylum seekers did not disclose the violent experience either when they arrived at the refugee camp or during the refugee status determination process, UNHCR assessed each situation on a case-by-case basis. UNHCR officials told us it was not uncommon for individuals to initially keep their experience of violence secret due to shame or fear of retribution from country-of-asylum officials. They said many individuals began coming forward with claims of violence after receiving negative screening decisions and learning that other individuals with similar experiences were being resettled after proceeding through the Special Procedures process. Others, though, may have come forward because they experienced difficulties in coping with the effects of the earlier incident of violence. When evaluating these types of cases, UNHCR’s social service counselors were expected to look for symptoms of trauma, such as visits to the camp hospital or counselors or an inability to forge relationships with other camp residents. 
If trauma was evident, counselors would refer the case to the Special Procedures Committee for a best interest solution. We examined the case files of 5 Malaysian and 77 Indonesian victims of violence. The majority of the Indonesian cases were at Kuku Island, the northern island army camp. Because we did not interview the asylum seekers, social service counselors, or members of the Special Procedures Committees, who had disbanded at the conclusion of the screening process, our review was limited to determining whether the documentation in UNHCR files indicated that the procedures had been followed, not the quality of the assessments per se. UNHCR documents indicated UNHCR’s social service counselors interviewed and assessed the victim of violence cases and then assigned the case to proceed either through normal refugee status determination processing or to the Special Procedures Committee process. The assessments discussed the individual’s current mental state, situation in camp, and ability to understand and articulate a claim of a well-founded fear of persecution. Of the five Malaysia cases we reviewed, four were referred to the Special Procedures Committee for a best interest decision and the fifth was referred to the regular refugee status determination process. It was decided in two of the cases that the best support structure for the individuals lay with family members who resided in Vietnam. In the other two cases, the best support structure was determined to be with family members who lived in the United States and Australia, respectively. In Indonesia, the 77 cases we reviewed were processed through the normal refugee status determination process at the recommendation of the social service counselor. Although we did not track the final disposition for all cases, several were granted refugee status and were subsequently resettled in third countries. 
We noted a few cases in Indonesia where the social service counselor described emotional difficulties experienced by the asylum seeker but nonetheless recommended that the normal refugee status determination process be followed. For example, in one case, the social service counselor wrote that “. . . appears very depressed and complains [of] having suffered from a variety of psychosomatic illnesses . . . . experienced a horrific experience during her journey to Galang. However, there is evidence that she is on her way [to] a full recovery. It’s recommended that she should go through the normal refugee status determination process.” Although this kind of recommendation appeared consistent with the standard for determining whether someone should go through Special Procedures, we still had some difficulty understanding it in view of the counselor’s observations about the emotional condition of the individual involved. One Malaysia case file illustrates how a best interest determination could change as circumstances changed: “A husband and wife reported they were victims of violence as they traveled from Vietnam to asylum in Malaysia. The husband died in camp (due to causes unrelated to the violence incident). The woman was assessed by the social service counselor to be unable to understand or articulate a claim and her case was assigned to the Special Committee for a durable solution. The Special Committee decided that the woman’s best support structure lay with her husband’s family in Vietnam. However, after reaching this decision and before the woman returned to Vietnam, the family had resettled in the United States under ODP. The Special Committee then decided that the woman’s ’best interest’ still lay with the husband’s family in the United States. Thereafter, the woman was eventually accepted for resettlement in the United States and reunified with her husband’s family.” A majority of the victim of violence cases we examined from Indonesia occurred at Kuku Island.
Information in the case files we reviewed indicated that a number of women and girls were sexually assaulted and raped by government soldiers. Men who attempted to intervene to protect their wives, children, or siblings were beaten. Some of the individuals who experienced violence at Kuku Island were processed through Special Procedures, where it was determined that resettlement in third countries was in their best interest. The majority, including the cases we reviewed, were assessed through the normal refugee status determination process. Some Vietnamese advocacy groups and others have criticized UNHCR’s handling of the Kuku Island cases. They have argued that (1) an agreement existed between the Indonesian government and UNHCR to resettle all the victim of violence cases and (2) all similarly situated cases should be treated alike. According to UNHCR officials we interviewed, there was no agreement with Indonesia to resettle all victim of violence cases. UNHCR initially resettled a number of these cases because of humanitarian concerns that may have left an impression of precedent for other cases. We found UNHCR handled these cases consistent with the “Guidelines on Special Procedures Under the Comprehensive Plan of Action.” To qualify for refugee status, asylum seekers had to demonstrate a well-founded fear of persecution. We reviewed 74 refugee status determination cases and discussed them with UNHCR officials and others involved in the CPA program. Procedures in each of the countries we visited were designed to help ensure that those with strong refugee claims would be recognized as refugees. Most of the screened-out cases we reviewed did not appear to have strong claims based on the case file evidence we examined. However, in some cases we identified issues that pointed to possible differences and inconsistencies in the way screening procedures may have been implemented. 
The limitations on our access to documents and our inability to interview asylum seekers preclude us from concluding with certainty whether these issues may have contributed to unfavorable screening decision outcomes in these cases. (See apps. I through III for our review of merit cases in the Philippines, Indonesia, and Hong Kong, respectively.) In reading the UNHCR case files, we noted considerable variation in the quality of the information presented regarding refugee claims and screening officials’ decisions. Although many of the case files were well-documented and contained detailed case histories with clear and logical explanations for the refugee status decisions, others were less complete and decisions did not appear to be well-supported by the recorded facts. Some case files had inconsistent or contradictory remarks by the screening interviewers. Such inconsistencies in the case file documentation often prevented us from concluding whether a screened-out decision was the result of poor record-keeping or whether it properly reflected the facts of the case. Case file documentation was particularly important because adjudicators at the appeals and mandate review stages relied on the case file record for their deliberations. According to Hong Kong officials, 25 percent of the appeals cases were reinterviewed, but we were told that few, if any, reinterviews occurred during appeals in Indonesia and the Philippines. A few of the most difficult case files to assess involved screening decisions that focused on the credibility of the applicant’s claim for refugee status. In these cases, screening officials seemed to place great emphasis on inconsistencies that appeared in the applicant’s claim and/or appeal. Their attention seemed to focus on relatively small details regarding a claim, such as the dates of noncrucial events and statements of when and where something may have happened many years ago, rather than on the major factors addressing the claim of persecution.
In other cases where credibility issues were the principal reason to screen out an asylum seeker, the screening official presented convincing evidence that challenged key aspects of a case. For example, in one case in the Philippines, the asylum seeker claimed to have served several years in a Vietnamese prison under harsh conditions and away from his family. However, information in the case also indicated that the individual had fathered two children with his wife during the same period and the prison release documents appeared to have been tampered with. Other issues that surfaced during our review dealt with potential difficulties asylum seekers may have had in presenting their cases. A few of the appeal petitions submitted by asylum seekers in the Philippines and Hong Kong, for example, raised concerns about the relatively small amount of time spent by screening officials in conducting the first instance interviews. Communication difficulties may have occurred. Several appeal petitions in the Philippines and legal briefs presented by attorneys in Hong Kong criticized the quality of the translations conducted by interpreters. In a few cases in Hong Kong, interpreters may not have been able to translate the Nung ethnic dialect spoken by the asylum seeker. A further issue that was reported to us was a practice used in the early stages of screening in the Philippines where asylum seekers were asked to sign a blank record of their interview before it was written up. As a result of this practice, asylum seekers had no assurance that the information they had presented in the interview was accurately recorded. According to UNHCR officials, this practice occurred in some cases during the first year or so of screening; however, it was subsequently changed and asylum seekers signed only completed write-ups. Refugee status determinations inherently involve judgment on the part of the screening official. 
As a result, some differences in screening decisions are to be expected. Some cases we reviewed appeared to have similar facts and elements of a claim but were assessed differently by screening officials. In the Philippines, screening rates among first instance screening officials varied widely, according to UNHCR data. The overall screened-in rate at the first instance stage was 43 percent—the highest screened-in rate among all the first asylum countries. However, some officials were very lenient and consistently screened in a very high percentage of cases (75 percent and higher), and others were quite stringent and screened in far fewer cases (25 percent or less). UNHCR officials said that a number of weak cases probably were screened in, but they maintained that the appeals process and UNHCR’s own mandate authority helped ensure that individuals with strong refugee claims would be recognized and accepted for resettlement. In providing oral comments on a draft of this report, UNHCR and Department of State officials generally concurred with the report’s content. They provided technical and clarifying comments that we have incorporated in the report where appropriate. As agreed with your office, the focus of our work was on selected family unity, victim of violence, and general refugee merit cases of individuals who were screened out under the CPA program. Our approach was to conduct case file reviews to assess the strength or weakness of the claims that were made and determine how the screening process worked in these cases. We concentrated on cases from Indonesia, Hong Kong, Malaysia, and the Philippines, and selected them from among the approximately 500 cases provided through your office. Our review was mainly limited to an examination of UNHCR case files in the first asylum countries. The government in each country we visited denied our requests to visit the camps and interview asylum seekers and would not grant us access to their own case files. 
Officials from these countries expressed concerns that our presence in the camps might raise false expectations among the asylum seekers that the U.S. government was pursuing a rescreening of cases. We did not have the authority to require other governments to provide us access for interviews or review case files under their jurisdiction, and we had to rely on the willingness of host governments to grant us access. The lack of access to host government case files was a significant limitation on our work because the governments of the first asylum countries were responsible under the CPA for the refugee status determination process that involved both the initial interview and the appeals process. Information contained in these files was often not available in the UNHCR files we were permitted to examine. However, to supplement our review of cases, we did meet and discuss the CPA screening process with officials from UNHCR, the first asylum governments, and the U.S. embassies. In addition, we interviewed representatives from nongovernmental organizations such as legal assistance groups who were involved with the CPA. To learn more about the CPA screening criteria and procedures, we reviewed available UNHCR documents, met with UNHCR officials in Geneva, and participated in a 2-day briefing with the UNHCR regional coordinator of the CPA program and other staff in Malaysia. The amount and focus of our fieldwork differed in each country, given the number and type of cases we had to work with and the existing time frame established to conclude the CPA program in each country. We reviewed 242 cases in the 4 countries we visited during a 3-week period from late June to early July 1996. (See table 1.) The cases covered individuals who were still in the camps at the time of our visits as well as those who had already returned to Vietnam. We prioritized the workload by first reviewing family unity and victim of violence cases and then the general merit-type cases.
We assessed cases from the perspective of the CPA criteria, reviewed factual information in the case files, and sought to examine how the screening process was implemented. We read each case file, took appropriate notes on information in the files, and discussed cases with available UNHCR staff and among ourselves. The type and quality of information included in the UNHCR case files varied both across and within the countries we visited. In Hong Kong, for example, we only had access to that portion of the case files belonging to UNHCR. We were permitted to read UNHCR mandate review documents that pertained to a case but generally not other documents produced by and belonging to the Hong Kong government, such as the first instance and review board interviews and decisions. While the mandate review documents usually included summaries of what occurred at earlier stages of the screening process, they lacked many important details about how asylum seekers presented their cases or the assessments by Hong Kong officials. However, in Hong Kong, because of the extensive legal assistance available to asylum seekers, we were also able to collect from these sources quite detailed information about some individual cases. In Malaysia and the Philippines, we had access to all the documents contained within the UNHCR files, regardless of whether they were produced by UNHCR or the first asylum government. However, these files usually did not include appeal decision assessments. Also in Indonesia, the case files did not include documents generated by the Indonesian screening officials and belonging to the Indonesian government. Indonesia did not permit nongovernmental organizations to participate in applicant counseling, as the Hong Kong government did; consequently, this source of information was not available in Indonesia. The written material in UNHCR case files also limited the conclusions we could draw about individual cases. 
While many of the case files were quite detailed, some had insufficient information to allow us to determine the appropriateness of an applicant’s claim for refugee status or to understand what rationale or reasons an interviewer used in making a decision. This does not mean that the process was deficient in these cases or that an inappropriate decision was reached; it only means that the files we were permitted to examine may have been incomplete. Nonetheless, as a result, it was difficult to differentiate whether the strength or weakness of a particular case reflected the write-up of the case or the actual facts and presentation of the case during the refugee status determination. Due to the variance in the number and type of cases presented for our review in the countries we visited, our detailed discussions of cases in appendixes I, II, and III vary in form and content. For example, due to our familiarity with the screening process in Hong Kong (based upon our prior work), the relatively smaller number of cases to review, and the larger volume of case file data, we are able to present a fuller discussion of the asylum seekers’ claims and the basis for the decisions. Since we did not visit the Philippines during our previous work on the CPA, our discussion focuses on both the screening process and case examinations. Indonesia had the largest number of cases we reviewed, the majority of which were victim of violence and family unity cases. We focused primarily on whether these cases appeared to be adjudicated properly based upon family unity and Special Procedures criteria. Subsequently, we also reviewed the cases based upon general refugee merit criteria, which is the basis of our case presentations. In a letter dated December 10, 1996, you raised some concern about the findings of this report and the reasonableness of its conclusions.
We have attempted to clarify the information in this report where appropriate, and to further describe the scope limitations placed on our work. Because our office could not make independent findings of fact, we could not draw conclusions about individual cases. In addition, because we reviewed only a limited number of cases, our findings cannot be generalized to other cases or be used to judge the overall reasonableness of the CPA screening process. We conducted our review from April to September 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen and Ranking Minority Members of the House and Senate Committees on Appropriations, the House Committee on Government Reform and Oversight, the Senate Committee on Governmental Affairs, and other interested committees; the Secretary of State; the Attorney General; the U.N. High Commissioner for Refugees; Congressmen Tom Davis and Benjamin A. Gilman, and Congresswoman Zoe Lofgren of the House of Representatives because of their expressed interest; and others upon request. If you or your staff have any further questions concerning this report, please contact me at (202) 512-4128. Major contributors to this report were Patrick Dickriede, Le Xuan Hy, John Oppenheim, Audrey Solis, and Thai Tuyet Quan. Of the 66 cases we reviewed in the Philippines, 44 were merit cases. Among these were 11 cases of individuals who had been screened in as refugees. This provided us a limited opportunity to compare the relative strengths of screened-in and screened-out cases. Because these cases represent only a fraction of the thousands of adjudicated cases, we cannot draw any conclusions based on our review. Rather, our review demonstrates how judgments varied during the screening process. Before presenting our review of cases, we describe how the refugee status determination process was structured and note some general issues about its implementation. 
Beginning in March 1989, all asylum seekers arriving in the Philippines by boat were required to undergo screening in the Palawan refugee camp operated by the Philippine army and supported by the U.N. High Commissioner for Refugees (UNHCR). The Philippine government’s Task Force on International Refugee Assistance and Administration was charged with coordinating refugee activities with UNHCR and other international organizations. The initial refugee status determination screening at Palawan was conducted by nongovernmental legal consultants contracted by UNHCR. Using standard interview forms approved by the Task Force, the predetermination interviewer was to collect relevant information and any documents that were in the asylum seeker’s possession. The asylum seeker was to sign the finished interview forms and questionnaires. UNHCR then presented these forms and any documents provided by the asylum seeker to the Philippine government’s Commission on Immigration and Deportation for refugee status determination. A Philippine immigration official was to use the predetermination information to decide refugee status. The immigration official was to conduct his or her own interview to fully assess and evaluate the asylum seeker’s claim for refugee status, in accordance with UNHCR criteria. UNHCR provided an interpreter to translate for the asylum seeker. A UNHCR representative was to be present during the interview, although he or she did not participate in the proceedings. However, the immigration official and the UNHCR representative could confer with each other after the interview. Decisions were made in Manila, not in Palawan, and were to be based on CPA criteria for refugee status. When a member of a family was recognized as a refugee, immediate members of the family—spouses, minor children, and other dependents—were also to be recognized as refugees. 
Decisions were to be provided in writing no later than 2 months from the time the status determination interview was conducted. If refugee status was denied, the basis for the denial was to be documented in writing. A UNHCR representative was to present the decision to the asylum seeker for his or her signature and date. All decisions and pertinent records were forwarded to the Task Force. Asylum seekers could appeal a denial for refugee status by filing a notice of appeal with the Appeals Board, through the Task Force, within 15 days after receiving the decision. The appeal could also include a request to submit an extended written statement and supporting documents within an additional 15 days from the filing of the notice of appeal. If an asylum seeker did not file an appeal within 15 days of receiving the first instance decision, he or she was deemed to have chosen voluntary repatriation. The three-member Appeals Board was to resolve the appeal within 2 months after receiving the appeal or, when appropriate, the extended written statement. Appeals Board decisions were final. Beyond the appeal stage, UNHCR could exercise its mandate authority for granting refugee status to cases not screened in during the first instance or appeal phases. The Philippine government screened a total of 4,810 cases, which equated to 7,272 individual asylum seekers. Of this number, 2,087 cases (3,392 persons) were screened in as refugees at the first instance stage; and 2,723 cases (3,880 persons) were denied refugee status or “screened out,” for a screened-in case rate of 43.4 percent. An additional 351 cases (471 persons) were screened in when the Appeals Board overturned negative decisions. UNHCR exercised its mandate authority for an additional 13 cases, or 19 people, for an overall screened-in case rate of 50.9 percent. The Philippines’ overall and first instance screened-in rates were the highest of all countries of first asylum. 
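The screening statistics above can be verified with simple arithmetic. The figures below are taken directly from the report; the truncation used to reproduce the 50.9 percent overall figure is an assumption made to reconcile the quoted percentages with the underlying counts.

```python
# Arithmetic check of the Philippine screening statistics quoted above.
# All counts come from the report; the rounding behavior is assumed.

first_instance_in = 2_087   # cases screened in at first instance
first_instance_out = 2_723  # cases screened out at first instance
appeals_in = 351            # negative decisions overturned on appeal
mandate_in = 13             # cases screened in under UNHCR mandate authority

total_cases = first_instance_in + first_instance_out
assert total_cases == 4_810  # matches the reported case total

# First instance screened-in rate: 2,087 / 4,810
first_instance_rate = first_instance_in / total_cases * 100
print(f"first instance rate: {first_instance_rate:.1f}%")  # 43.4%

# Overall screened-in rate after appeals and mandate review
overall_in = first_instance_in + appeals_in + mandate_in   # 2,451 cases
overall_rate = overall_in / total_cases * 100
# Exact value is about 50.96 percent; the report's 50.9 percent figure
# appears to truncate rather than round to one decimal place.
print(f"overall rate: {overall_rate:.2f}%")
```

The same check applied to persons rather than cases (3,392 screened in of 7,272) yields a somewhat higher first instance rate, since screened-in cases averaged slightly more persons per case than screened-out ones.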
The two-tiered screening process by the Philippine government and UNHCR’s mandate authority was intended to ensure that those with strong refugee claims would be screened in and recommended for third country resettlement. However, despite the relative generosity reflected in the screened-in rates, we noted in our reading of case files that some first instance decisions by immigration officials contained inconsistent or contradictory remarks, widely varying interpretations of country of origin information, or incomplete information. UNHCR reported that the criteria for determining refugee status were often inconsistently applied at the first screening decision level, and some Philippine government and nongovernmental officials also voiced these concerns. UNHCR statistics also revealed some wide variances in screened-in rates among immigration officers. (See table I.1.) Nonetheless, UNHCR and government officials stated that the appeals process and UNHCR’s mandate authority helped ensure that those with strong claims to refugee status would be screened in. UNHCR reported that it reviewed all cases screened out by immigration officials to ensure that no person with a well-founded fear of persecution would be denied refugee status. Our review of the appeals process was limited because the Appeals Board did not explain its decisions in writing. It was not entirely clear how the Appeals Board fully resolved discrepancies or credibility issues because it did not reinterview the appellants, although new or clarifying information was presented as part of some asylum seekers’ appeal petitions. According to a former member of the Appeals Board, members received the case files and any new documentation about a week before a board meeting. This official said that the Board would often delay a decision while awaiting additional relevant documentation.
Since written explanations were not in the files, we could not determine whether or how the Board considered the material clarifications contained in the appeal submissions. According to a UNHCR report, rumors of corruption persisted during the first instance screening by immigration officials. UNHCR reported that its field office repeatedly encouraged asylum seekers to provide specific information about these charges, offering protection against reprisals, but no one came forward. Other officials also said that they had heard rumors of corruption, but these allegations had not been substantiated. In July 1995, an advocacy group published a report criticizing the screening process and citing 12 cases in which money or sexual favors were allegedly sought or given in exchange for positive refugee status decisions. The charges were directed against eight immigration officers and two UNHCR legal consultants. UNHCR reviewed the cases handled by these individuals and did not find any evidence of corruption. We did not evaluate UNHCR’s methodology for investigating the corruption allegations. UNHCR acknowledged that a number of weak cases had been screened in, creating an impression of unfairness for those with stronger claims who were unable to establish a well-founded fear of persecution due to a convention-related reason. UNHCR maintains that the appeals process identified those cases with a strong claim to refugee status and that the remaining deserving cases were accepted or identified for refugee status under its mandate protection. We were unable to validate these assertions. As discussed below, we identified some cases that appeared to have some strong or cumulative elements of persecution, but we could not conclude that they were adjudicated fairly or unfairly given the limitations of the information in the files and our lack of access to asylum seekers.
Of the 33 screened-out cases we reviewed, most did not appear to have strong claims based on the evidence contained in the files. Our review indicated that six cases appeared to have some strong or cumulative elements of persecution, but we could not conclude that they were adjudicated unfairly given the limitations of the information in the files and our lack of access to the asylum seekers themselves. A more lenient officer might have screened in these cases. While we cannot conclude that strong cases were screened out, we believe that human judgment is an unavoidable variable in any refugee screening process in which the individual’s story is difficult to substantiate. Moreover, according to UNHCR guidance, the screening decision should consider the duration and the recency of the persecution as well as the cumulative nature of persecution. For example, a person could be subjected to many forms of persecution or harassment that are minor by themselves, but the cumulative nature over time may constitute convention-related persecution. Because there is no universally accepted definition of persecution, refugee status decisions depend greatly on the individual circumstances of each case and the likelihood of persecution if an individual returns. We noted some inconsistencies among immigration screeners and widely varying interpretations of country of origin information. For some interviewers, potentially persecutory actions taken by the Vietnamese government were simply “national policy.” For example, UNHCR guidance states that the duration and hardship incurred as a result of being sent to a New Economic Zone (NEZ) should be considered if the person was sent to a zone for convention-related reasons. Several case files of screened-out individuals described serious hardship in the zones, such as lack of medical care resulting in the death of a family member, that did not appear to influence the interviewer. 
In one screened-out case, the interviewer wrote, “They were not the only family which was sent to the NEZ but all the families who were once upon a time associated with the past regime.” This seems to indicate a decision to send families to the zones that went beyond the national policy of returning farmers to the countryside for food production. However, past persecution was normally not enough by itself to substantiate a well-founded fear of persecution upon return to Vietnam, and this individual’s claim for refugee status was denied due to lack of merit and credibility problems. UNHCR guidance also notes that, although it is a general legal principle that the burden of proof lies with the person submitting a claim, an applicant may not be able to support statements by documentation or other physical evidence in refugee status determination situations. In such cases, UNHCR recommends that “if the applicant’s account appears credible, he should, unless there are good reasons to the contrary, be given the benefit of the doubt.” In about half of the cases we reviewed, immigration interviewers and UNHCR legal consultants noted credibility problems, but we could not ascertain from the information provided whether the accounts were credible. Several immigration officers rejected claims partly because an individual or his or her family was able to obtain a family registration card (ho khau) issued by the Vietnamese government. Such registration is the first step for many basic rights, such as obtaining education, legal employment, business licenses, medical care, and ration cards for price-controlled food. Several asylum seekers asserted, however, that such registration was simply a means of controlling citizens and did not, by itself, guarantee ration cards or medical care, which they said they had been denied. 
In one case, an applicant claimed that, due to past political troubles with the Vietnamese government, the only way his family could get a family registration card was by paying a bribe. According to the interviewer’s written decision, that ability to bribe meant that the individual’s family must be well-off and not subject to persecution. His appeal was subsequently denied. Assessing the screening process for Vietnamese veterans was particularly difficult because it appears that almost everyone who was associated with the South Vietnamese or U.S. governments was subject to some form of punishment, such as “reeducation.” In most of the veterans’ cases we reviewed, the punishment occurred immediately after the communist victory in 1975 and tended to taper off during the 1980s. Also, the punishment often appeared light, such as reeducation for several days. However, there appeared to be some exceptions to this. For example, in one case, a husband had served in the South Vietnamese Army from 1967 until 1974. From 1967 until 1972, he was an interpreter assigned to the U.S. Army 517th Intelligence Unit, where he helped interrogate captured North Vietnamese. In the appeals documents he and his wife submitted, they noted that, among other things, a new police chief in their district had been interrogated by the husband during the war and severely beaten by U.S. intelligence officers. In denying his claim to refugee status, the interviewer recorded that the police chief had been assigned in 1980, and that nothing had happened to the couple in the intervening years prior to their escape. According to their appeal submissions, the police chief was assigned in 1988, not 1980, and consequently the husband fled, fearing persecution. The wife stated that she was detained and raped by the new police chief (“a mere abuse of police power,” granting that it was true at all, according to the interviewing officer). 
In addition, the husband claimed that he was involved in an anticommunist organization in 1987 and was shot while escaping from his mother's house in Saigon. The Appeals Board upheld the first instance decision to deny him refugee status. He requested a mandate review by UNHCR, but the file did not indicate a response. Prior to our fieldwork, we obtained first instance screening decisions prepared by Philippine immigration officials for 177 asylum seekers and blacked out the screening outcomes to conduct a test. Each member of a team of five GAO evaluators reviewed a set of the decisions and assessed whether the applicant had been screened in or out based on the information in the decision paper. In general, the team found that many of the screening decisions presented limited information about the asylum seeker's claim for refugee status. The write-ups often lacked important details about the applicant's background, situation in Vietnam, and reasons for leaving the country. Without such information, it was difficult to determine the relative strength or weakness of individual cases. In addition, many of the write-ups contained weak support or no explanation for the screening decisions made by the immigration officials. In reviewing the cases, team members often reached a decision different from the one actually rendered, indicating that a good deal of subjective judgment may be involved in the adjudicators' decisions. Of the 177 screening decision papers, there were 7 in which the screening officer laid out clear and logical reasons for granting or denying refugee status. In a majority of the other cases, however, it was less clear from the write-ups why a particular decision had been reached. At least 24 cases appeared to be identical or very similar, yet received different decisions from different screening officers.
It should be noted, however, that these decision papers were only one part of an applicant’s file and cannot be used to assess the credibility and reliability of the screening process, or compliance with international norms for refugee status determination. In Indonesia, of the 121 cases we reviewed, 11 were asylum seeker cases that underwent the regular status determination process. We also examined the 77 victim of violence cases (which after assessment for trauma by a social service counselor underwent normal refugee status determination processing) and 2 of the family unity cases to assess the strength of their claims for refugee status. Our review indicated that the large majority of the cases decided on merit seemed to have been adjudicated fairly and the decisions appeared reasonable based on the available case file information. A common element that ran through the case presentations by the asylum seekers was the harsh conditions and difficult economic situation present in Vietnam, especially in the late 1970s and early 1980s. Those asylum seekers who spent time in a NEZ seemed to have particularly difficult living situations. However, despite the difficult living conditions, the case file documentation appeared to lack persecutorial elements and did not present facts to support a well-founded fear of persecution based on race, religion, nationality, membership of a particular social group or political opinion. Of the 11 merit cases we examined, 2 were screened in while 9 were screened out. Eight of the nine screened-out cases appeared to have been properly adjudicated based on information available in the case files. These cases failed to present convention-related claims, and the individuals appeared able to live tolerable lives. The case files for the two screened-in cases also indicated weak claims for refugee status and may have benefited from a generous application of the refugee criteria. 
The following six screened-out cases presented facts or issues reported by the asylum seekers that we believe may have merited further consideration or clarification. This case, involving an ethnic Khmer, included cumulative factors that may have supported a claim for refugee status based upon ethnicity and political beliefs. The asylum seeker’s father was arrested in 1982 and sentenced to 7 years in prison (where he died) for his affiliation with an antigovernment political party. While the father was in prison, the asylee’s family lost their family registration card and the children could not attend school. The individual was arrested in 1986 for antigovernment activities and imprisoned for 18 months. These cases involved a brother and sister who, as adults, were screened separately, in accordance with established procedures. Their father spent 5 years in a reeducation camp, and the brother spent 7 years in a NEZ. After returning from the NEZ, the brother was then arrested for printing Catholic religious materials and was imprisoned from 1986 to 1990. The sister was arrested for teaching the catechism and sentenced to 22 months of labor and was repeatedly summoned for questioning due to her brother’s activities. Upon release from prison, the brother and sister fled Vietnam. The legal consultant who reviewed the case noted in the file that the persecution for religious involvement was “remote in time” and recommended against granting refugee status. While conditions in Vietnam may have changed in terms of religious tolerance, we believe the length of incarceration for the applicants could be considered excessive and therefore warranted a more generous treatment. When we discussed these cases with a UNHCR official, he told us that, in hindsight, the decision might have been erroneous for this reason. 
However, the individuals were not eligible for mandate because the mandate exercise weighs factors and events at the time of review, not when the individuals initially fled Vietnam, and according to the UNHCR official, Catholicism is no longer persecuted in Vietnam. In this case, the asylum seeker’s father-in-law was detained in a reeducation camp for 9 years. The asylum seeker was jailed for 26 months, according to the legal consultant’s notes, for illegal peddling. However, in her appeal, the asylum seeker linked her imprisonment to her husband’s political activities and the family’s adverse background. The claim was discounted during the first instance interview because she could not document her imprisonment. The Appeals Board rejected her appeal citing lack of new information. The length of the imprisonment raises the question whether the asylum seeker’s political background may have been a factor in the sentencing. These two cases involve the credibility of the applicants’ claims for refugee status. Based upon biographical data and the interview instruments, both applicants appeared to present strong claims. However, in one case, the reviewing legal consultant doubted the credibility of the asylum seeker’s claim that he spent 3 years hiding in the compounds of two churches after the presiding priest of the church where he taught the catechism was arrested. According to the case file, the interviewer reasoned that no church in Vietnam would harbor possible criminals. In the other case, the legal consultant questioned the asylum seeker’s claimed link to an antigovernment group (FULRO) based on the applicant’s ethnicity. Although FULRO usually consisted of members from minority tribes in central Vietnam who sought to establish an autonomous region, it was not clear that the group denied membership to those from other ethnic groups, such as the asylum seeker in this case. 
We examined 18 refugee merit cases in Hong Kong from among those provided through your office. The UNHCR case files we had access to did not include the interview and appeal information compiled by Hong Kong authorities. They did, however, contain detailed UNHCR mandate review assessments and other information, including submissions by the asylum seekers and/or their lawyers. We also obtained additional documents from attorneys who represented many of the cases we reviewed. In addition, we held lengthy discussions with UNHCR staff and some asylum seekers’ attorneys. Due to the smaller number of cases, the greater amount of case file information, and the type of cases we examined, we were able to spend more time and learn more about each of the cases in Hong Kong than in Indonesia and the Philippines. After reviewing available information, we had a number of questions about the application of the screening criteria in the cases we reviewed. However, because we lacked access to the asylum seekers and to all the components of each case (and we did not seek to adjudicate cases), we could not conclude whether Hong Kong and UNHCR officials had assessed the cases appropriately. In the remainder of this appendix, we present 12 cases that highlight issues involving (1) the manner in which interviews were conducted; (2) different interpretations of the screening criteria, such as the use of country of origin information; (3) communication difficulties resulting from poor translations by interpreters; and (4) judgments made about the credibility of cases. Unless otherwise indicated, the sources of the factual information are representations of the asylum seekers or their lawyers. The asylum seeker in this case reported that he was persecuted for his commitment to the Catholic Church. In Vietnam, he studied to become a priest but claimed he was denied admission to a university because of his family’s religious background. 
The asylum seeker was arrested and charged with sabotage for harboring a Catholic seminarian who was trying to escape the country. He was imprisoned and subsequently escaped. Hong Kong screening officials challenged his credibility regarding how he escaped from jail but did not question his involvement with the Catholic Church. According to a report from a Catholic chaplain working in the Hong Kong camp, the asylum seeker had been active in religious organizations in the camp. His lawyer reported that the Catholic Diocese of San Francisco sponsored him for a special religious immigrant visa to work in a church in the United States. The asylum seeker also expressed concern that he would be rearrested for sabotage if he returned to Vietnam. We discussed this case with UNHCR officials who thought that the case could qualify on humanitarian grounds, and they subsequently informed us the asylum seeker was screened in under UNHCR mandate in November 1996 based upon new information submitted on the case. The asylum seeker reported that his father was a high ranking civilian in Da Nang and an official in an anticommunist party during the war. When Saigon fell, his father was sent to a reeducation camp for nearly 5 years, and then to a NEZ with his family. Although the asylum seeker completed high school in 1975, he claimed he was not permitted to take the university entrance exam. He was required to do forced labor in the NEZ but was allowed to stop after being injured in an accident. Because he did not have a registration card, he supported himself in a variety of odd jobs. According to the appeal petition filed by his lawyer, the asylum seeker joined an anticommunist group in 1976. When the group was discovered, he attempted to leave Vietnam but was caught and imprisoned. After 2 years, he escaped and subsequently joined another anticommunist group in 1982. 
When this group was discovered a few years later and some members were imprisoned, the asylum seeker went underground. He joined a third anticommunist group in 1987 that sold illegal music in the black market, some of which contained antigovernment themes. After authorities reportedly began cracking down on such groups in 1988, the asylum seeker and his wife escaped to Hong Kong to avoid arrest. The asylum seeker continued to be politically active in Hong Kong, opposing conditions in the camp and the forced repatriation of asylum seekers to Vietnam. In reviewing the case for possible mandate, the UNHCR reviewer determined that the claim lacked convention-related persecution and had credibility problems. In this regard, UNHCR found that the asylum seeker’s family had obtained legal registration by 1988, thereby demonstrating that the family had reintegrated into Vietnamese society. Furthermore, the UNHCR reviewer found it implausible that the asylum seeker would join another political group so soon after escaping prison, and considered selling antigovernment materials in the black market to be a criminal offense that was not convention-related. Finally, the reviewer found that the asylum seeker’s political activities in Hong Kong were directed against the screening process and were not convention-related activities. Notwithstanding the UNHCR’s initial determination, because of the complexity of the case, UNHCR informed us that it would fully review the case again. In November 1996, the asylum seeker was screened in under UNHCR mandate. According to the asylum seeker, his father was a counterintelligence officer in the South Vietnam Army (ARVN) from 1954 to 1975 and his mother worked at a large American base in Da Nang. His father was in a reeducation camp for 3 years, then sent to a NEZ with his family after their home was confiscated. In 1981, the family fled the NEZ and the children were denied the right to attend school. 
The family was also subjected to weekly public humiliation sessions that were intended to force them back to the NEZ. The following year, when the asylum seeker was 17 years old, he and another individual were implicated in an event in which a policeman was killed. He tried to escape from Vietnam but was caught and sentenced to 3 years in prison on charges of aiding an anticommunist group and participating in the death of the police officer. He failed in an attempted prison escape and spent more than 7 years in prison. The Hong Kong review board, which interviewed the asylum seeker, found that neither he nor his family suffered convention-related persecution despite facing discrimination for convention reasons. It did not find the asylum seeker’s account of his arrest for the death of the policeman to be credible and concluded that if he had been responsible for the policeman’s death, he would have been charged with murder or at least manslaughter. The board also concluded that the asylum seeker had embellished and fabricated this aspect of his claim. The UNHCR review of the case essentially agreed with the review board decision, finding serious credibility concerns. The UNHCR reviewer thought it credible that the asylum seeker spent considerable time in prison and possibly suffered mistreatment, but the imprisonment resulted from a common crime and was not convention-related. We questioned the completeness of the first interview with the asylum seeker that was conducted only 1 day after he arrived in Hong Kong, but UNHCR said that the first interview was part of the material and evidence offered by the asylum seeker and was properly considered. UNHCR also noted that the asylum seeker gave totally different accounts to the interviewing officer concerning the reason for his imprisonment and omitted or changed circumstances in his life story. 
UNHCR concluded that the benefit of the doubt principle did not apply since the asylum seeker's account lacked coherence and plausibility, and ran counter to generally known country of origin information. UNHCR said that the asylum seeker might return to prison if repatriated, but the evidence suggested the imprisonment would not be for a convention-related issue. The asylum seeker claimed that her father, a lieutenant in the ARVN, died after being captured by the communists in 1975. Her family's property was confiscated, and she was resettled in a NEZ with her grandmother and mother. Her grandmother contracted malaria and died, and the asylum seeker also became ill with malaria, so her mother took her out of the zone illegally. The asylum seeker reported that her mother could not provide for her, so she was sent to live with a family friend and former military comrade of her father's, and her mother disappeared. Because of her illegal residence, the asylum seeker was not allowed to attend school. She helped the new family with an illegal vending business, but was caught and sent to a youth detention center due to her age (15) at the time of her arrest. She escaped to Hong Kong in January 1991 when she had just turned 16. Her lawyer provided us with a copy of the review board's 1991 decision, which challenged her credibility because of several contradictions in the record. The board decision noted that “there is no country of origin information that the Review Board is aware of which supports the proposition that children of ex-ARVN soldiers are systematically discriminated against or persecuted in present day Vietnam.” The asylum seeker's lawyer, however, submitted to UNHCR the following excerpt from the Country Reports on Human Rights Practices for 1992 issued by the U.S. State Department: “Family members of former South Vietnamese Government and military officials . . .
have been systematically discriminated against.” UNHCR also reviewed the case and, while noting that the asylum seeker “led a miserable existence,” concluded that her life had improved after she was sent to live with her father's friend. In the UNHCR assessment for mandate review, the reviewer also used country of origin considerations to decide that the asylum seeker had been treated like any other homeless child. The reviewer “[did] not find any discrimination suffered amounts to persecution” even if benefit of the doubt were given to claims. According to the case file, the asylum seeker served with the U.S. Army from 1963 to 1975. He was sent to a reeducation camp for 3 days after the war and then to a NEZ with his family. His wife and children contracted malaria and were allowed to leave the NEZ for treatment. Two years later he joined his family without government approval, and the family lost their ho khau. As a result, his children were denied public education for 10 years and had to perform forced labor at regular intervals. In 1988, the asylum seeker received a ho khau but also lost his job when the factory where he worked became state-run. After his ho khau was reinstated, the asylum seeker's children were allowed to pursue their education again. Because he had helped protest the factory takeover, he was detained for 2 months and required to report regularly to the authorities until 1991. In 1989, while serving on the board of directors of a school, he protested a policy change and was subjected to more forced labor. In 1991, he was arrested for illegal residency, and although he was released after he presented his ho khau, he was required to report every week to the authorities. UNHCR reviewed this case and concluded that the asylum seeker's military background was remote in time and that the difficulties he had encountered did not amount to persecution. UNHCR indicated that the asylum seeker's illegal residency made it difficult for him to obtain a ho khau.
UNHCR also noted that the issue of ho khau is no longer a problem as the implementation of the CPA ensures reinstatement of a ho khau to all returnees. Furthermore, the factory protest was viewed as a public disturbance and not convention-related persecution. The asylum seeker served in the South Vietnam Army from 1960 to 1969 and left because of injury. He was the district security leader in Da Nang from 1970 to 1975, a nonmilitary governmental position. The asylum seeker was in a reeducation camp from 1975 to 1976 and 1978 to 1981, and his home was confiscated. He belonged to an antigovernment religious group from 1981 until he left Vietnam in 1990 due to fear of arrest. In conducting a mandate review of this case, the UNHCR legal counselor concluded that the asylum seeker “should be recognized on account of political opinion,” but another UNHCR eligibility counselor disagreed as he found the case too doubtful to apply the benefit of the doubt principle. As a result, the legal counselor reinterviewed the asylum seeker and determined again that a favorable decision should be made. Due to the difference of opinion, UNHCR’s Assistant Chief of Mission reviewed the case and decided that “the asylum seeker could not be granted the benefit of the doubt due to irreconcilable credibility problems which were material to the claim.” This case was unusual in that a month after the asylum seeker’s interview with the review board, he was recognized as a refugee and moved to a refugee transit camp. According to his lawyer’s submission, the asylum seeker was informed 17 days later that he was not a refugee and had to return to the asylum camp. We did not see any record in the UNHCR file explaining why this situation occurred. UNHCR informed us that “it was an administrative error” and was “amended as soon as possible.” The asylum seeker was a member of a prohibited religious sect in Vietnam and claimed to have been a victim of religious persecution. 
He met a publisher of religious books in the summer of 1990 in Vietnam and was introduced to a book published by the Ching Hai group in Taiwan. The asylum seeker introduced the book to his father, who had become a Buddhist monk a few years earlier. His father also liked the book and distributed 100 copies to his followers and introduced them to the Ching Hai philosophy. The Ching Hai group has been described as religious but also critical of the Vietnamese government. The asylum seeker reported that he assisted in the printing of the book and in September 1990, he and his father and several others were arrested for “propagating anti-government material.” The asylum seeker subsequently was able to escape and flee to Hong Kong while his father remained in prison. The Hong Kong Review Board’s decision stated that “there was no information that Buddhism followers were being suppressed by the Vietnamese authorities at the moment or would be suppressed upon their return.” UNHCR, however, provided us information that “the Ching Hai religious sect is prohibited in Vietnam and can only be practiced by its followers in private. There can be repercussions if the faith is practiced in public which may involve questioning by police, confiscation of material, or threats of further problems.” UNHCR indicated though that it was not aware of any person who had been sentenced or arrested for following Ching Hai. The asylum seeker reported that he went to a high school military academy, then entered the South Vietnamese army. In 1975, he was imprisoned for not having a military identification card and was sent to a reeducation camp for 10 months, and then sent to a NEZ. In 1985, he claimed he went to Cambodia and joined an antigovernment group Ancien Enfants de Troup (AET). Subsequently, two other members of the group were arrested for distributing antigovernment leaflets. Fearing arrest, the asylum seeker fled to Hong Kong. 
A complicating issue in this case was determining when and on what basis statements by the asylum seeker were considered noncredible. According to UNHCR, the asylum seeker’s claim in this case was rejected because of inconsistencies in statements given by the asylum seeker about his activities with the AET, how he made contact with his wife, and statements provided by himself and his wife. According to information about the mandate review assessment, the asylum seeker did not mention his involvement in the antigovernment group during a preinterview counseling session but did so later during the status determination interview. When we asked whether it was appropriate to question credibility based upon what was not said in a counseling session, UNHCR responded that credibility is weighed based on all statements made by the asylum seeker. No distinction is made as to when or in which forum statements are made. UNHCR also noted that their case file records indicated the asylum seeker had stated in the prescreening interview that he was never involved in any antigovernment organization. However, the part of the case file we were able to review did not confirm this assertion. The asylum seeker’s father was a soldier in the ARVN and a driver for the U.S. military until 1974. In 1975, the father was sent to a reeducation camp but escaped after 1 year. He remained underground and informed his family that he was a member of an anticommunist group known as FULRO. He was recaptured in 1982 and imprisoned until being released in 1988. The asylum seeker was 13 years old in 1975, was not allowed to attend school, and, with other members of the family, was sent to a NEZ. The family left the NEZ after 10 days, and although they had no registration card, they worked in various farming and factory jobs. In 1985, the asylum seeker was arrested with his Kung Fu teacher, because the latter was supposedly involved in an antigovernment organization. 
He was imprisoned for almost 2 years for “intention to go against the government.” In 1989, he was arrested again because he was associated with another individual who belonged to an antigovernment group. According to his claim, the asylum seeker cut his wrist while in solitary confinement and was taken to the hospital, where he escaped the following day and then eventually fled to Hong Kong. In reviewing this case, we noted two issues: (1) language difficulties that appeared to complicate the case and (2) the manner in which the various screening interviews were conducted. The petition filed by the asylum seeker's lawyer reported several language difficulties encountered by the asylum seeker, who is ethnic Nung and did not speak Vietnamese fluently. His case was rejected in large part because of inconsistencies presented in different interviews. Even before receiving his rejection notice, however, he complained that his request to have the interviewing official read back the interview had been denied. According to the lawyer's petition, the interviewing official had not accurately recorded the asylum seeker's claim. Notes taken by the UNHCR monitor at the Hong Kong Review Board interview indicated that the way the interview was conducted may have resulted in an inaccurate presentation of the asylum seeker's claim. The UNHCR official noted that the interviewer “badgers, is hostile, imperious and almost deliberately misinterprets the [asylum seeker]. [The interviewer] had the irritating habit of repeating everything the [asylum seeker] said, but in a tone of disbelief.” UNHCR also conducted an interview for a mandate review. It was noted that “the language difficulties are still a problem as noted by the interpreter and may have led to some material components of the claim being missed at the interview.” UNHCR maintained that in the mandate review, the legal consultant clearly understood and recorded all facts.
Even though discrepancies due to communication difficulties were discounted, the asylum seeker's claim for refugee status still had major inconsistencies that raised credibility doubts. The asylum seeker said that he served in the ARVN from 1970 until he was discharged due to battle wounds in 1973. In 1975, he went to a reeducation camp for 2 months and then was sent to a NEZ with his wife and three daughters. According to the asylum seeker, the family faced severe difficulties in the NEZ, and all three of his daughters became ill and died. Although a health clinic was available, the asylum seeker reported that his family was denied access to it because of the family's unfavorable background. The asylum seeker helped his brother, who belonged to an anticommunist group, deliver some documents and was arrested. After a year in prison, he was released in 1980 upon agreeing to serve as an informer. The asylum seeker and his wife then left the NEZ and attempted to escape from Vietnam rather than inform on his friends. The escape attempt failed and he was arrested and beaten so severely that his right leg became paralyzed. After a year in solitary confinement, he was sentenced to 7 years of reeducation for the political crime of “counter-revolution,” but was released in 1987 after 5 years with the condition that he could not leave his village for 3 years without permission. He was also forced to do unpaid labor and was not able to obtain a family registration card. The asylum seeker was detained by the authorities supposedly for antigovernment activities two more times during 1988 and 1989 but was not tried. In 1990, he helped four others write an anonymous letter complaining about corrupt local officials. Fearing that the authorities had learned of his involvement, he escaped with his family to Hong Kong, while the other four people were imprisoned from 2-1/2 to 4 years. Because he evaded arrest, he is afraid that he would be imprisoned if returned to Vietnam.
The case appears to have been rejected because of inconsistencies and credibility issues raised in the screening interviews. The Hong Kong Review Board did not find that the asylum seeker was persecuted in the NEZ; rather, his experience was in line with the national policy in Vietnam to redistribute population. The officials also apparently did not believe the asylum seeker's accounts about the death of his children and about the various arrests and imprisonment he endured. The UNHCR reviewer found the events that occurred in the distant past to be believable, but the more recent events were less credible and may not have been convention-related. As a result, the reviewer recommended that the claim be rejected. Regarding the possible imprisonment of the asylum seeker if returned to Vietnam, UNHCR considers the evasion of arrest by itself not to be convention-based persecution. This asylum seeker arrived in Hong Kong in 1991 at the age of 26. The year before, other members of his family, including both parents and three siblings, also arrived in Hong Kong. The other members of the family were granted refugee status, but the asylum seeker was rejected. We asked UNHCR whether the circumstances of the other family members should have influenced the asylum seeker's case. The principal applicant in the other case was the asylum seeker's brother, who was 2 years younger. Since the brothers were close in age and relatively young, it seemed unlikely that their personal histories differed greatly, and it was not clear why a different decision was reached in each case. UNHCR indicated that under the Handbook criteria, “the situation of each person must be assessed on its own merits.” We also found a letter from the asylum seeker to UNHCR in such poor English that the meaning was uncertain, and there were no notes attached regarding any attempt to clarify the meaning.
UNHCR assured us, however, that “there were no language problems which affected the assessment of the case.” The asylum seeker is a member of a Nung ethnic tribe that was well-known for its anticommunist activities before and after 1975. He became interested in Christianity, completed a course at a missionary school, and became a pastor's assistant. When the communists took control in 1975, the pastor fled and the asylum seeker conducted services for about 6 months. On hearing that a former classmate was arrested for conducting illegal religious services, the asylum seeker went into hiding and subsequently joined FULRO, an armed anticommunist group. He participated in many battles against communist military forces and was wounded in 1977. Afterwards, he was imprisoned and treated inhumanely, according to his claim. The following year he escaped from prison with a friend, assumed a fake identity, and worked on a farm for several years. In 1987, the asylum seeker became involved in a land dispute and was questioned by the local authorities. The following year, his fellow escapee was arrested. Fearing arrest himself, he went into hiding and learned that his true identity had been exposed. As a result, he escaped to Hong Kong. In the camp, even before being interviewed, he “expressed concerns repeatedly about his ability to communicate in Vietnamese,” according to a UNHCR record. His case was screened out due to lack of credibility. We noted many contradictions in his file and three different versions of the same story. The asylum seeker's lawyer told us that it was difficult for the asylum seeker to communicate with anyone, including his own lawyer.
However, UNHCR officials reported that they had taken a number of measures to enable the asylum seeker to communicate clearly during the screening interviews, including counseling the asylum seeker on preparing his presentation and informing the Hong Kong screening officials about the asylum seeker’s language difficulties prior to his being interviewed. UNHCR officials said they would reassess the case and ask the asylum seeker about the different versions of his claim. However, the asylum seeker repatriated before the reinterview could occur. UNHCR planned to continue to monitor his reintegration into Vietnam.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the implementation and outcomes of the Comprehensive Plan of Action (CPA) for Vietnamese asylum seekers in Southeast Asia, focusing on how the refugee status determination process worked for family unit, victim of violence, and general merit cases by: (1) reviewing factual information about such cases from the perspective of international refugee criteria used under the CPA; and (2) examining how the screening process was implemented. GAO found that: (1) family unity has been an important principle throughout CPA implementation, yet advocacy groups, asylum seekers, and others have raised concerns that some families were unfairly separated during the refugee screening process; (2) the United Nations High Commissioner for Refugees (UNHCR) reviewed hundreds of screened-out cases to determine whether asylum seekers would qualify for resettlement according to established family unity criteria, but found most failed to meet the program criteria; (3) however, UNHCR identified a small number of cases in Indonesia, Malaysia, and the Philippines that met the criteria prior to UNHCR mandating the asylum seekers as refugees and forwarded 72 of these cases to U.S. 
embassies for resettlement consideration; (4) the United States initially accepted 23 of 36 cases for resettlement in Malaysia, but embassies in Indonesia and the Philippines refused to review 36 similar cases, since they were not first mandated by UNHCR; (5) victim of violence cases involved individuals who were physically assaulted on the way to, or upon arriving in, one of the first asylum countries, and according to UNHCR officials, many victims were unable to articulate their claim for refugee status, and UNHCR established special procedures to determine a durable solution in their best interest; (6) GAO's review of cases in Indonesia and Malaysia indicated that UNHCR and these governments followed established procedures for processing victim of violence cases; (7) GAO could not evaluate the quality of social service counselors' assessments of victims' ability to articulate a claim for refugee status, although the assessments described in some detail the individuals' mental condition, situation in camp, and ability to understand and present their claim for refugee status; (8) of the 74 merit cases GAO reviewed, it appears that most did not present strong refugee claims based on evidence contained in the files; (9) many case files were well-documented and presented detailed facts and logical explanations for decisions that were made, while others contained documents that pointed to differences and inconsistencies in the way claims may have been handled, such as incomplete documentation, poorly translated information, different interpretations of screening criteria, lack of legal assistance, and what appeared to be an overemphasis on nonessential points in assessing the credibility of an asylum seeker's claim; and (10) as a result, in some cases, GAO could not determine how well the case files reflected the presentation of the asylum seekers' claims.
Medicare provides health insurance for about 37 million elderly and disabled individuals. This insurance is available in two parts: Part A covers inpatient hospital care and is financed exclusively from a payroll tax. Part B coverage includes physician services, outpatient hospital services, and durable medical equipment. Part B services are financed from an earmarked payroll tax and from general revenues. The Social Security Act requires that Medicare pay only for services that are reasonable and necessary for the diagnosis and treatment of a medical condition. HCFA contracts with private insurers such as Blue Cross and Blue Shield plans, Aetna, and CIGNA insurance companies to process Medicare claims and determine whether the services are reasonable and necessary. The program was designed this way in part to protect against undue government interference in medical practice. Thus, despite Medicare’s image as a national program, each of the 29 Medicare contractors that process part B claims for physicians’ services generally establishes its own medical necessity criteria for deciding when a service is reasonable and necessary. Contractors do not review each of the millions of Medicare claims they process each year to determine if the services are medically necessary. Instead, contractors review a small percentage of claims, trying to focus on medical procedures they consider at high risk for excessive use. Contractor budgets limit the number of claims contractors can review, and over the last several years, both contractor budgets and HCFA requirements for prepayment review have been decreasing. In 1991, HCFA required contractors to review 15 percent of all claims before payment, while in 1995, contractors are only required to review 4.6 percent.
Since 1993, HCFA has required contractors to use a process called focused medical review (FMR) to help them decide which claims to review. Under the FMR process, each contractor analyzes its claims to identify procedures where local use is aberrant from the national average use. Beginning in fiscal year 1995, HCFA has required each contractor to select at least 10 aberrant procedures identified through FMR and develop medical policies for those procedures. The contractors are required to work with their local physician community to define appropriate medical necessity criteria. This arrangement allows contractors to take local medical practices into consideration when establishing criteria for reviewing claims. Once physicians have had an opportunity to comment on a medical policy, the contractor publishes the final criteria. Each contractor generally decides which medical procedures to target for review and what types of corrective actions to implement to prevent payments for unnecessary services. Contractors currently concentrate on educating physicians about local medical policies, hoping to decrease the number of claims submitted that do not meet the published medical necessity criteria. Contractors also use computerized prepayment reviews, called screens, to check claims against the medical necessity criteria in medical policies. When screens identify claims that do not meet the criteria, two alternative actions are possible: first, autoadjudicated screens may deny the claim automatically; second, all other screens may suspend the claim for review by claims examiners, who may request additional documentation from the physician before deciding to pay or deny the claim. Autoadjudicated screens usually compare the diagnosis on the claim with the acceptable diagnostic conditions specified in the corresponding medical policy. 
For example, an autoadjudicated screen for a chest X ray would pay the claim if the patient diagnosis was pneumonia but deny the claim if the only patient diagnosis was a sprained ankle. Because this type of screen is entirely automated, it can be applied to all the claims for a specific procedure at a lesser cost than reviewing claims manually. This type of screen is most effective for denying claims that do not meet some basic set of medical necessity criteria. Claims denied by these screens can be resubmitted by providers or appealed. As shown in figure 1, claims that pass these basic criteria may be further screened against more complex medical criteria to identify claims that warrant manual review. Most of the contractors we surveyed routinely pay claims for procedures suspected to be widely overused without first screening those claims against medical necessity criteria. We looked at six groups of procedures that providers frequently perform on patients who lack medical symptoms appropriate for the procedures. These procedures also rank among the 200 most costly services in terms of total Medicare payments and accounted for almost $3 billion in Medicare payments in 1994. (See table 1 below.) Four of the procedures—echocardiography, eye examinations, chest X rays, and duplex scans of extracranial arteries—are noninvasive diagnostic tests. Colonoscopy can be either diagnostic or therapeutic, and YAG laser surgery is sometimes used to correct cloudy vision following cataract surgery. In the first quarter of fiscal year 1995 (Oct. 1-Dec. 31, 1994), we surveyed 17 contractors to determine whether they were using any type of medical necessity prepayment screens to review claims for these six groups of procedures. As shown in table 2, the use of prepayment screens among the contractors was not uniform, and for each of the six procedures fewer than half the 17 contractors were using such screens. 
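The basic autoadjudicated check described above, comparing the diagnoses on a claim against the payable conditions in a medical policy, can be sketched as follows. This is an illustrative sketch only; the procedure and diagnosis labels and the policy table are hypothetical stand-ins, not any contractor's actual criteria.

```python
# Illustrative sketch of an autoadjudicated prepayment screen.
# The policy table and code labels below are hypothetical stand-ins,
# not any contractor's actual medical necessity criteria.

# Medical policy: procedure -> diagnoses payable for that procedure.
MEDICAL_POLICY = {
    "CHEST_XRAY": {"PNEUMONIA", "BRONCHITIS"},
}

def screen_claim(procedure, diagnoses, policy=MEDICAL_POLICY):
    """Return "PAY" or "DENY" with no manual review (autoadjudication).

    A claim passes if any diagnosis on it appears in the policy's payable
    set; a denied claim can still be resubmitted or appealed.
    """
    payable = policy.get(procedure)
    if payable is None:
        return "PAY"  # no policy on file for this procedure: not screened
    return "PAY" if payable & set(diagnoses) else "DENY"

# A chest X ray with a pneumonia diagnosis is paid; with only a sprained
# ankle on the claim, it is denied automatically.
print(screen_claim("CHEST_XRAY", ["PNEUMONIA"]))       # PAY
print(screen_claim("CHEST_XRAY", ["SPRAINED_ANKLE"]))  # DENY
```

Because the decision is a pure table lookup, such a screen can run against every claim for a procedure at far lower cost than suspending claims for manual review.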
For each group of procedures in our study, we found the following: Only 7 of the 17 contractors we surveyed had prepayment screens to review echocardiography for medical necessity, even though echocardiography is often performed on patients with no specific cardiovascular disorders. Ten contractors lacked such screens, even though echocardiography is the most costly diagnostic test in terms of total Medicare payments and despite an increase of over 50 percent in the use of the echocardiography procedures listed in table 1 between 1992 and 1994. Only 6 of the 17 contractors used prepayment screens to prevent payment for medically unnecessary eye examinations. These contractors have medical necessity criteria to deny claims for routine eye examinations and to allow payments only for certain conditions, such as cataracts, diabetes, and hypertension. Only 6 of the 17 contractors had prepayment screens to review chest X ray claims for medical necessity, although HCFA had alerted Medicare contractors that providers frequently bill for chest X rays that are not warranted by medical symptoms and are thus medically unnecessary. Only 6 of the 17 contractors had medical necessity prepayment screens to review colonoscopy claims. In 1991, HHS’ OIG reported that nationwide almost 8 percent of colonoscopies paid by Medicare were not indicated by diagnosis or medical documentation. Only 3 of the 17 contractors had prepayment screens for YAG laser surgery even though federal guidelines exist that indicate the diagnostic conditions for performing this surgery. Also, at a national meeting of Medicare contractors in 1994, HCFA officials discussed the need to avoid paying for unnecessary YAG laser surgery following cataract removal.
Only 8 of the 17 contractors had implemented prepayment screens for duplex scans even though HCFA had alerted Medicare contractors that providers commonly bill for noninvasive vascular tests such as duplex scans without adequately documenting the patient’s medical symptoms. A primary reason that not all contractors screen claims for nationally overused procedures is that, following HCFA’s instructions for FMR, contractors have been targeting procedures that are overused locally, based on comparisons with national average use. The shortcomings of this approach are discussed later in this report. Our survey of the 17 contractors represents a snapshot of the use of prepayment screens for these procedures in the first quarter of fiscal year 1995. Typically, contractors turn screens on and off depending on their local circumstances. For example, one contractor began using a screen for echocardiography in March 1995, and another contractor implemented screens for chest X rays and eye examinations in January 1995 because these procedures were overused locally. By contrast, one contractor discontinued using an autoadjudicated screen for eye examinations in February 1995 because the diagnostic criteria for payment in the screen were considered too narrow. Nonetheless, these fluctuations in contractors’ use of screens do not reflect a coordinated approach to screening nationally overused procedures. Seven large Medicare contractors paid millions of dollars in claims for services that may have been unnecessary. These contractors did not use diagnostic medical criteria to screen claims for some of the six groups of procedures in our study. The claims paid for these services included a range of patient diagnoses that did not meet the criteria established by other contractors.
For example, a chest X ray was paid for a patient with a diagnosis of injuries to the hand and wrist, an echocardiogram was paid for a patient with a diagnosis of chronic conjunctivitis, and a therapeutic colonoscopy examination was paid for a patient with a mental health diagnosis of hysteria. If the seven contractors had used autoadjudicated diagnostic screens for the six groups of procedures, they would have denied between $38 million and $200 million in claims for services in 1993, as shown in table 3. The range of estimated payments for claims that would have been denied reflects differences among contractors’ criteria for identifying medically unnecessary services. Although different contractors had screens for the same procedure, they used different diagnoses to determine medical necessity. For example, a colonoscopy screen we used from one contractor paid claims with a diagnosis of gastritis, while another contractor’s screen denied such claims. Because of these differences among the contractors’ screens, we applied screens from two or three different contractors for each group of procedures, except for YAG laser surgery. Thus, our test results show a range of estimated payments for claims that would have been denied, depending on the medical necessity criteria used. The tables in appendix II list the estimated payments for claims that would have been denied by each of the tested screens. The seven contractors we reviewed were among the largest in terms of the number of claims processed, accounting for about 37 percent of all Medicare part B claims, and almost 38 percent of all the claims for the six groups of procedures in our study. To estimate the paid claims that would have been denied, we applied autoadjudicated screens developed by several contractors in our survey to a sample of the 1993 claims paid by the seven contractors. 
We only applied these screens if the tested contractor did not have a medical necessity diagnostic screen of its own in place in 1993 for the specific procedure tested. We used autoadjudicated screens because decisions to pay and deny claims based on medical necessity criteria are automated and, therefore, do not require additional medical judgment. Appendix I provides additional details on our methodology. When claims are denied by prepayment screens, the billing physician can (1) resubmit the claim with additional or corrected information or (2) appeal the denial. In either case, the contractors may ultimately pay claims that they have initially denied. Contractors’ claims processing systems generally do not track the claims denied by autoadjudicated prepayment screens to determine if they are resubmitted or appealed and then paid. However, based on a limited analysis of claims denied by contractors with autoadjudicated screens, we estimate that about 25 percent of the denied claims were ultimately paid. Assuming that the 25-percent rate is typical for autoadjudicated screens, about 75 percent of the payments in table 3, or between $29 million and $150 million, were for services that would be considered unnecessary using the criteria established by various contractors. Our estimates of payments for unnecessary services involve only six groups of procedures and cannot be statistically generalized beyond the 7 contractors included in our analysis. However, all 29 contractors—not just the 7 whose claims we reviewed—operate under FMR requirements designed to correct local rather than national overutilization problems. Therefore, the other 22 contractors also may lack screens for some of these procedures and, hence, may have paid millions of dollars in claims for services that should have been denied. 
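The resubmission adjustment above reduces the gross denial estimates by the share of denied claims that are eventually paid. A minimal sketch of that arithmetic follows; the 25-percent ultimate-payment rate and the $38 million to $200 million range come from the report, while the function itself is ours.

```python
# Net out denied claims that are later resubmitted or appealed and paid.
# The 25-percent ultimate-payment rate and the $38-$200 million range come
# from the report; the function is illustrative arithmetic only.

def net_unnecessary_payments(gross_denied, ultimately_paid_rate=0.25):
    """Dollars still considered unnecessary after resubmissions and appeals."""
    return gross_denied * (1.0 - ultimately_paid_rate)

low, high = 38_000_000, 200_000_000  # 1993 estimates for the tested screens
print(net_unnecessary_payments(low))   # 28500000.0, about $29 million
print(net_unnecessary_payments(high))  # 150000000.0, i.e., $150 million
```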
For widely overused procedures such as the six we tested, autoadjudicated screens can be a low-cost, efficient way to screen millions of claims against basic medical necessity criteria. Contractor officials said that these screens are much less expensive to implement than screens that suspend claims for manual review. Consequently, as funding for program safeguards declines, autoadjudicated screens can be used to maintain or even increase the number of claims reviewed. Moreover, for procedures where the medical review decisions can be automated, autoadjudicated screens can quickly identify and deny claims where the patient diagnosis is inconsistent with the procedure performed. In contrast, when claims examiners manually review claims, the risk exists that the medical necessity criteria may be misinterpreted and applied inconsistently. However, for certain procedures or medical policies, autoadjudicated screens may not be appropriate. For example, some medical policies are not easily defined with diagnostic codes and require manual review of documentation, such as medical records, to determine if a service is medically necessary. Denying claims using autoadjudicated or other prepayment screens can increase administrative costs if providers frequently resubmit denied claims or appeal the denials. Contractor officials said that these costs can be minimized if providers are educated to bill appropriately in the first place. By combining direct provider education with screens that enforce agreed upon medical criteria, contractors can, over time, reduce the number of claims submitted for unnecessary services. HCFA does not have a national strategy for using prepayment screens to deny payments for unnecessary services among Medicare’s most highly overused procedures. HCFA does periodically alert contractors about some of these procedures at semiannual national contractor meetings and through occasional bulletins. 
However, the agency does not identify widely overused procedures in a systematic manner. Moreover, the agency does not ensure that contractors implement prepayment screens or other corrective actions for these procedures. Medicare legislation does not preclude HCFA from requiring its contractors to screen claims for nationally overused procedures. However, HCFA has chosen to avoid the appearance of interfering in local medical practice. HCFA usually does not establish medical policies or tell the contractors which procedures warrant medical policies or prepayment screens. Instead, HCFA relies primarily on the contractors’ local FMR efforts to identify and prevent Medicare payments for unnecessary services. This process, according to HCFA officials, allows contractors to take medical practice into consideration when making medical necessity determinations. Although FMR can work well for overutilization problems that are truly local, the process is not designed to address nationwide overutilization of a medical procedure. The national average use of a procedure generally serves as a benchmark for identifying local overutilization problems, but the benchmark itself may already be inflated by millions of dollars in payments for unnecessary services. For example, in several states the use of echocardiograms greatly exceeded the 1992 national average of 101 services per 1,000 beneficiaries. Some of the contractors servicing those states have designed and implemented prepayment screens for this procedure. Meanwhile, other contractors targeted different procedures and allowed unconstrained use of echocardiograms. This focus on local overuse may be one of the factors that led to a national 12-percent increase in echocardiography use by 1994—and a new benchmark of 113 echocardiograms per 1,000 beneficiaries.
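The self-inflating benchmark problem described above can be illustrated with a toy model. The contractor labels and utilization rates below are invented for illustration; only the 101 and 113 per-1,000 averages match the report's echocardiography figures.

```python
# Toy model of FMR's national-average benchmark: only contractors above the
# average are flagged, so unconstrained growth elsewhere raises the benchmark.
# Contractor labels and rates are invented; only the 101 and 113 per-1,000
# averages match the report's echocardiography figures.

def national_benchmark(rates_per_1000):
    """Average use across contractors, services per 1,000 beneficiaries."""
    return sum(rates_per_1000.values()) / len(rates_per_1000)

def flag_local_overuse(rates_per_1000):
    """FMR-style: flag contractors whose use exceeds the national average."""
    benchmark = national_benchmark(rates_per_1000)
    return sorted(c for c, r in rates_per_1000.items() if r > benchmark)

rates_1992 = {"A": 140, "B": 95, "C": 80, "D": 89}    # average: 101
print(national_benchmark(rates_1992))   # 101.0
print(flag_local_overuse(rates_1992))   # ['A'], only A is targeted

# Use by the unflagged contractors grows unconstrained by 1994:
rates_1994 = {"A": 140, "B": 115, "C": 98, "D": 99}   # average: 113
print(national_benchmark(rates_1994))   # 113.0, the new inflated norm
```

The model shows why a procedure can be overused everywhere yet flagged almost nowhere: each contractor compares itself to a norm that the overuse itself keeps raising.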
HCFA can take a more active role in controlling spending for widely overused procedures without intruding on the contractors’ responsibilities to establish their own prepayment screens. HCFA has an oversight responsibility to monitor and evaluate contractors’ screens and other efforts to prevent payments for unnecessary services. Yet HCFA does not know (1) which contractors have diagnostic screens for which medical procedures, (2) the medical necessity criteria used in these screens, or (3) the effectiveness of the screens in denying claims for unnecessary services. Furthermore, without this information HCFA cannot identify best practices and promote approaches such as autoadjudicated medical necessity screens where they can be a cost-effective alternative or complement to screens that flag claims for manual review. HCFA funded a central database on local medical policies, but this resource is not being effectively used. HCFA has encouraged the contractors to use the database to research other contractors’ medical policies before drafting their own. However, according to some contractors, the usefulness of the database is limited because it is not regularly updated. Moreover, HCFA has not taken the initiative to use the database to evaluate the contractors’ medical policies and identify those worthy of consideration by all contractors for controlling widely overused procedures. HCFA can also encourage greater use of medical necessity criteria for widely overused procedures by providing contractors with more model medical policies. About 2 years ago, HCFA established clinical workgroups composed of contractor medical directors to develop model medical policies that the contractors can adapt for local use. Specifically, contractors can work with their local medical community to review model policies, adapt them to reflect local medical practice, and implement them in prepayment screens. 
This has been an important step in promoting greater efficiency in developing local medical policies. However, since the workgroups’ inception, only one model policy has been published. According to HCFA and contractor officials, progress has been limited in part because HCFA often takes longer to review draft model policies than its goal of 45 days. HCFA officials said that they are considering provisions for greater use of autoadjudicated screens in a new, national claims processing system. However, full implementation of that system is scheduled for late in 1999. In addition, it remains unclear what types of screens will be included in the system, how the contractors will choose which screens to modify, implement, and use, and how HCFA will monitor and evaluate the effectiveness of the screens. Meanwhile, HCFA continues to allow contractors to pay millions of dollars for services that may be unnecessary. While the rapid increase in Medicare costs threatens the long-term viability of the Medicare program, many Medicare part B contractors continue to routinely pay claims for widely overused services, without first determining if the services are reasonable and necessary. Even when evidence indicates that problems with payments for specific medical procedures are widespread, HCFA has not ensured that contractors help correct national problems as well as local aberrancies. More specifically, HCFA policies do not encourage contractors to reduce a national norm already inflated by millions of dollars in payments for unnecessary services. Our tests of paid claims against criteria used by some of the contractors show that millions of dollars are being paid for services that do not meet basic medical necessity criteria. Although our tests were limited to seven contractors, our survey of 17 contractors indicates that nationally, additional millions of Medicare dollars may have been paid for claims that should have been denied.
Prepayment screens are an important tool in preventing payments for unnecessary services. Funding for program safeguards, such as medical policies and prepayment screens, has been declining, however, while the volume of Medicare claims is increasing. In this environment, autoadjudicated diagnostic screens offer a low-cost way to ensure that all claims for selected procedures pass a basic medical necessity test before payment. Greater use of autoadjudicated screens could complement, rather than replace, the contractors’ efforts to use FMR and other types of prepayment screens to address local overutilization problems. To forestall widespread overuse of specific medical procedures, HCFA can help the contractors much more than it has. HCFA has begun to capitalize on the knowledge and skills of the contractor medical directors by using contractor workgroups to develop model medical policies. More model policies can help contractors control spending for nationally overused procedures by providing them with generally accepted criteria for identifying and denying claims for unnecessary services. However, HCFA needs to support the efforts of the workgroups and review model policies on a more timely basis so that these efforts can succeed. Also, to exercise stronger leadership by promoting best practices, HCFA needs to collect and evaluate information on the medical criteria and prepayment screens now being used by the contractors. 
To help prevent Medicare payments for unnecessary services, we recommend that the Secretary of HHS direct the Administrator of HCFA to systematically analyze national Medicare claims data and use analyses conducted by HHS’ OIG and Medicare contractors to identify medical procedures that are subject to overuse nationwide; gather information on all contractors’ local medical policies and prepayment screens for widely overused procedures, evaluate their cost and effectiveness, and disseminate information on model policies and effective prepayment screens to all the contractors; and hold the contractors accountable for implementing local policies, prepayment screens (including autoadjudicated screens), or other corrective actions to control payments for procedures that are highly overused nationwide. We provided HHS an opportunity to comment on our draft report, but it did not provide comments in time to be included in the final report. However, we did discuss the contents of this report with HCFA officials from the Bureau of Program Operations, including the Director of Medical Review and the Medical Officer. In general, they agreed with our findings. We obtained written comments on our draft report from several part B contractor medical directors who serve on the Contractor Medical Director Steering Committee. We selected this committee as a focal point for obtaining contractor comments because of its role as a liaison between the contractors and HCFA and the communication network for the contractor medical directors. Their comments support our conclusions (see app. III). In summary, they suggested the development of contractor workgroups to rapidly produce model medical policies for the six groups of procedures in our study. As agreed with your office, unless you release its contents earlier, we plan no further distribution of this report for 30 days. 
At that time, we will send copies to other congressional committees and members with an interest in this matter, the Secretary of Health and Human Services, and the Administrator of the Health Care Financing Administration. We will also make copies available to others upon request. This report was prepared by William Reis, Assistant Director; Teruni Rosengren; Stephen Licari; Michelle St. Pierre; and Vanessa Taylor under the direction of Jonathan Ratner, Associate Director. Please call me on (202) 512-7119 or Mr. Reis on (617) 565-7488 if you or your staff have any questions about this report. We reviewed HCFA’s statutory authority and responsibilities for administering the Medicare program and HCFA’s regulations and guidance to contractors on the development of local medical policies and the implementation of prepayment screens. We also discussed HCFA’s oversight of these functions with officials at its Bureau of Program Operations. Before selecting the six groups of medical procedures included in our study, we reviewed previous GAO and HHS OIG reports, HCFA guidance, and other studies on overused medical services. We also reviewed HCFA’s list of 200 medical procedure codes, ranked by total Medicare-allowed charges, and obtained Medicare contractors’ views on procedures that are likely to be overused. Based on the information gathered from these sources, we selected six groups of procedures generally considered widely overused. Because little centralized information exists on Medicare contractors’ use of prepayment screens or the medical necessity criteria included in those screens, we contacted 17 of the 29 contractors that process Medicare part B claims for physician services. We also visited three of the Medicare contractors and attended two of the semiannual contractor medical director conferences. 
In the course of these contacts, we decided to limit our collection of detailed information on medical necessity criteria and prepayment screens to 17 contractors who could provide us the information we needed. To estimate the Medicare payments for unnecessary services that could be prevented by broader use of prepayment screens, we tested autoadjudicated prepayment screens on claims paid by seven contractors in six states. The seven contractors in our analysis were among the largest contractors in terms of the number of claims processed in 1993 and they did not use a medical necessity prepayment screen for some of the six groups of procedures in our study. We based our tests on data from the Medicare Physician Supplier Component of the 1993 HCFA 5 Percent Sample Beneficiary Standard Analytic File. The Physician Supplier Component contains all Medicare part B claims for a random sample of beneficiaries. Our analysis is based on all paid claims in the database for the seven contractors and the six groups of procedures in our review. For each screen and tested contractor, we estimated the services and payments that would have been denied by simulating the screen using a computer algorithm to determine the number of services in the sample that would have been denied by the screen, weighting this number to reflect the universe of services, and multiplying this weighted number by the average Medicare allowance for the procedure at the contractor. The average Medicare-allowed amount for each procedure code at each contractor in 1993 was calculated based on data from HCFA’s part B Extract Summary System. For five of the procedures, we applied two or three different autoadjudicated diagnostic screens currently used by other contractors in order to illustrate the impact of using different screens. By applying multiple screens, we were able to examine the range of services that would have been denied depending on the medical necessity criteria used.
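The estimation step just described reduces to a simple weighting formula. A sketch follows; the sample counts and dollar amounts in it are invented for illustration and are not figures from the report's tables.

```python
# Sketch of the denial-payment estimate: weight sampled denials up to the
# universe (a 5-percent beneficiary sample implies a weight of 20), then
# multiply by the contractor's average Medicare-allowed amount for the
# procedure. The inputs below are hypothetical.

def estimate_denied_payments(denied_in_sample, avg_allowed, sample_fraction=0.05):
    """Estimated universe-wide payments that the screen would have denied."""
    weighted_denials = denied_in_sample / sample_fraction
    return weighted_denials * avg_allowed

# e.g., 120 sampled denials at a $90 average allowance:
print(estimate_denied_payments(120, 90.0))  # 216000.0
```

Different contractors plug different payable-diagnosis sets into such screens, which is why repeating this calculation with two or three screens per procedure yields a range of estimates rather than a single figure.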
For example, one of the colonoscopy screens paid claims with a diagnosis of gastritis, while another did not. For YAG laser surgery, however, we only applied the one screen that we had identified at the time we began our analysis. We only applied a particular screen to a contractor’s claims if that contractor did not have a medical necessity diagnostic screen in place in 1993 for the specific procedure being tested. We obtained our tested screens from several of the 17 contractors in our initial survey. Some of the screens we used were obtained from one of the seven contractors that we subsequently tested. Because our estimates were based on a sample of claims, our estimates are subject to sampling error. We calculated 95-percent confidence intervals for each of our estimated payments for services that would have been denied by the tested screens. This means the chances are about 19 out of 20 that the actual payments for services that would have been denied at each of the tested contractors would fall within the range covered by our estimate, plus or minus the sampling error. Sampling errors for our estimates are included in appendix II. Some of the payments that would have been denied by the tested screens would eventually be paid if they were resubmitted with corrected or additional information or successfully appealed. Because contractors’ claims processing systems generally do not track claims denied by autoadjudicated screens to determine how many are ultimately paid, we developed our own estimates. Using the 1993 HCFA 5 Percent Sample Beneficiary Standard Analytic File, we analyzed echocardiography claims processed by one contractor and duplex scan claims processed by another contractor. In each case, the contractors used autoadjudicated screens for these services. For each contractor, we used computer programs to identify claims for the services that were denied for medical necessity in a 3-month period in 1993. 
We then determined whether another claim was submitted and paid for the same service, provided on the same day, for the same beneficiary, and by the same provider. Our analysis showed that 23 to 25 percent of the echocardiography and duplex scan claims denied for medical necessity were subsequently paid. Based on these results we used 25 percent as our estimate of claims denied that would ultimately be paid. The actual percentage will likely vary by type of medical procedure and the diagnostic criteria used in the screen. However, because of the costs and inefficiencies associated with denying a large percentage of services and then later reprocessing and paying those services, we believe that contractors would not be likely to continue using a prepayment screen that inappropriately denies more than 25 percent of the services. The estimated number of and payments for denied services were derived from a 5-percent beneficiary sample of 1993 claims for each contractor.
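The follow-up matching described above (a later paid claim for the same service, same day, same beneficiary, and same provider) amounts to a composite-key lookup. A minimal sketch, with hypothetical field names and records:

```python
# Sketch: match denied claims to subsequently paid claims on the
# beneficiary / provider / procedure / service-date key, as described above.
# Field names and records are illustrative, not the actual HCFA file layout.

def match_key(claim):
    return (claim["beneficiary"], claim["provider"],
            claim["procedure"], claim["service_date"])

def fraction_later_paid(denied_claims, paid_claims):
    """Fraction of denied claims for which a matching paid claim exists."""
    paid_keys = {match_key(c) for c in paid_claims}
    matched = sum(1 for c in denied_claims if match_key(c) in paid_keys)
    return matched / len(denied_claims) if denied_claims else 0.0

denied = [
    {"beneficiary": "B1", "provider": "P1", "procedure": "93307", "service_date": "1993-02-01"},
    {"beneficiary": "B2", "provider": "P2", "procedure": "93307", "service_date": "1993-02-03"},
    {"beneficiary": "B3", "provider": "P1", "procedure": "93880", "service_date": "1993-02-07"},
    {"beneficiary": "B4", "provider": "P3", "procedure": "93880", "service_date": "1993-02-09"},
]
paid = [
    {"beneficiary": "B2", "provider": "P2", "procedure": "93307", "service_date": "1993-02-03"},
]
rate = fraction_later_paid(denied, paid)  # 1 of 4 denials later paid: 0.25
```

In this toy example one of four denied claims has a matching paid claim, giving the 25-percent rate used as the estimate in the text.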
GAO provided information on Medicare payments for unnecessary medical services, focusing on the: (1) extent to which Medicare contractors employ medical necessity prepayment screens for procedures that are likely to be overused; (2) potential impact of autoadjudicated prepayment screens on Medicare spending; and (3) federal government's role in reducing overused medical procedures billed to Medicare. GAO found that: (1) Medicare spending for unnecessary medical services is widespread; (2) more than half of the 17 contractors surveyed do not use prepayment screens to check whether claimed services are necessary; (3) seven of the contractors paid between $29 million and $150 million for unnecessary medical services; (4) many Medicare claims are paid because contractors' criteria for identifying unnecessary medical services vary; and (5) the Health Care Financing Administration (HCFA) needs to take a more active role in promoting local medical policies and prepayment screens for overused medical procedures.
This section discusses significant matters that we considered in performing our audit and in forming our conclusions. These matters include (1) six material weaknesses in IRS’ internal controls, (2) one reportable condition representing a significant weakness in IRS’ internal controls, (3) one instance of noncompliance with laws and regulations and noncompliance with the requirements of FFMIA, and (4) two other significant matters that represent important issues that should be brought to the attention of IRS management and other users of IRS’ Custodial Financial Statements and other reported financial information. During our audit of IRS’ fiscal year 1997 Custodial Financial Statements, we identified six material weaknesses that adversely affected IRS’ ability to safeguard assets from material loss, assure material compliance with relevant laws and regulations, and assure that there were no material misstatements in the financial statements. These weaknesses relate to IRS’ inadequate general ledger system, supporting subsidiary ledger for unpaid assessments, supporting documentation for unpaid assessments, controls over refunds, revenue accounting and reporting, and computer security. These material weaknesses were consistent in all significant respects with the material weaknesses cited by IRS in its fiscal year 1997 FIA report. Although we were able to apply substantive audit procedures to verify that IRS’ fiscal year 1997 Custodial Financial Statements were reliable, the six material weaknesses discussed in the following sections significantly increase the risk that future financial statements and other IRS reports may be materially misstated. The IRS’ general ledger system is not able to routinely generate reliable and timely financial information for internal and external users.
The IRS’ general ledger does not capture or otherwise produce the information to be reported in the Statement of Custodial Assets and Liabilities; classify revenue receipts activity by type of tax at the detail transaction level to support IRS’ Statement of Custodial Activity and to make possible the accurate distribution of excise tax collections to the appropriate trust funds; use the standard federal accounting classification structure to produce some of the basic documents needed for the preparation of financial statements in the required formats, such as trial balances; and provide a complete audit trail for recorded transactions. As a result of these deficiencies, IRS is unable to rely on its general ledger to support its financial statements, which is a core purpose of a general ledger. These problems also prevent IRS from producing financial statements on a monthly or quarterly basis as a management tool, which is standard practice in private industry and some federal entities. The U.S. Government Standard General Ledger (SGL) establishes the general ledger account structure for federal agencies as well as the rules for agencies to follow in recording financial events. Implementation of the SGL is called for by the Core Financial System Requirements of the Joint Financial Management Improvement Program (JFMIP), and is required by the Office of Management and Budget (OMB) in its Circular A-127, Financial Management Systems. Implementation of financial management systems that comply with the SGL at the transaction level is also required by FFMIA. However, because of the problems discussed above, IRS’ general ledger does not comply with these requirements. As we previously reported, IRS’ general ledger was not designed to support financial statement preparation. 
To compensate for this deficiency, IRS utilizes specialized computer programs to extract information from its master files—its only detailed database of taxpayer information—to derive amounts to be reported in the financial statements. However, the amounts produced by this approach needed material audit adjustments to the Statement of Custodial Assets and Liabilities to produce reliable financial statements. Although we were able to verify that the adjusted balances were reliable as of and for the fiscal year ended September 30, 1997, this approach cannot substitute for a properly designed and implemented general ledger as a tool to account for and report financial transactions on a routine basis throughout the year. As we have reported in our previous financial audits, IRS does not have a detailed listing, or subsidiary ledger, which tracks and accumulates unpaid assessments on an ongoing basis. To compensate for the lack of a subsidiary ledger, IRS runs computer programs against its master files to identify and classify the universe of unpaid assessments. However, this approach required numerous audit adjustments to produce reliable balances. The lack of a detailed subsidiary ledger impairs IRS’ ability to effectively manage the unpaid assessments. For example, IRS’ current systems precluded it from ensuring that all parties liable for certain assessments get credit for payments made on those assessments. Specifically, payments made on unpaid payroll tax withholdings for a troubled company, which can be collectible from multiple individuals, are not always credited to each responsible party to reflect the reduction in their tax liability. In 53 of 83 cases we reviewed involving multiple individuals and companies, we found that payments were not accurately recorded to reflect the reduction in the tax liability of each responsible party. 
In one case we reviewed, three individuals had multimillion dollar tax liability balances, as well as liens placed against their property, even though the tax had been fully paid by the company. While we were able to determine that the amounts reported in the fiscal year 1997 financial statements pertaining to taxes receivable, a component of unpaid assessments, were reliable, this was only after significant adjustments totaling tens of billions of dollars were made. The extensive reliance IRS must place on ad hoc procedures significantly increases the risk of material misstatement of unpaid assessments and/or other reports issued by IRS in the future. A proper subsidiary ledger for unpaid assessments, as recommended by the JFMIP Core Financial Systems Requirements, is necessary to provide management with complete, up-to-date information about the unpaid assessments due from each taxpayer, so that managers will be in a position to make informed decisions about collection efforts and collectibility estimates. This requires a subsidiary ledger that makes readily available to management the amount, nature, and age of all unpaid assessments outstanding by tax liability and taxpayer, and that can be readily and routinely reconciled to corresponding general ledger balances for financial reporting purposes. Such a system should also track and make available key information necessary to assess collectibility, such as account status, payment and default history, and installment agreement terms. In our audit of IRS’ fiscal year 1996 Custodial Financial Statements, we reported that IRS could not locate sufficient supporting documentation to (1) enable us to evaluate the existence and classification of unpaid assessments or (2) support its classification of reported revenue collections and refunds paid. During our fiscal year 1997 audit, IRS was able to locate and provide sufficient supporting documentation for fiscal year 1997 revenue and refund transactions we tested. 
However, IRS continued to experience significant problems locating and providing supporting documentation for unpaid assessments, primarily due to the age of the items. Documentation for transactions we reviewed, such as tax returns or installment agreements, had often been destroyed in accordance with IRS record retention policies or could not be located. In addition, the documentation IRS provided did not always include useful information, such as appraisals, asset searches, and financial statements. For example, estate case files we reviewed generally did not include audited financial statements or an independent appraisal of the estate’s assets, information that would greatly assist in determining the potential collectibility and potential underreporting of these cases. Additionally, the lack of documentation made it difficult to assess the classification and collectibility of unpaid assessments reported in the financial statements as federal tax receivables. Through our audit procedures, we were able to verify the existence and proper classification of unpaid assessments and obtain reasonable assurance that reported balances were reliable. However, this required material audit adjustments to correct misstated unpaid assessment balances identified by our testing. IRS did not have sufficient preventive controls over refunds to assure that inappropriate payments for tax refunds are not disbursed. Such inappropriate payments have taken the form of refunds improperly issued or inflated, which IRS did not identify because of flawed verification procedures, or fraud by IRS employees. For example, we found three instances where refunds were paid for inappropriate amounts. This occurred because IRS does not compare tax returns to the attached W-2s (Wage and Tax Statements) at the time the returns are initially processed, and consequently did not detect a discrepancy with pertinent information on the tax return. 
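A return-to-W-2 comparison of the kind the report notes is missing at initial processing could, in principle, look like the following sketch. The field names and tolerance are illustrative assumptions, not IRS procedure.

```python
# Sketch of a W-2 consistency check at initial return processing.
# Field names and the dollar tolerance are illustrative assumptions;
# this is not an actual IRS verification procedure.

def w2_discrepancy(return_wages, w2_wage_amounts, tolerance=1.0):
    """Return the discrepancy between wages reported on the return and
    the total of the attached W-2s, or 0.0 if within the tolerance."""
    w2_total = sum(w2_wage_amounts)
    diff = return_wages - w2_total
    return diff if abs(diff) > tolerance else 0.0

# A return claiming $5,000 less in wages than its attached W-2s show
# would be flagged before any refund is issued.
flag = w2_discrepancy(40_000.0, [30_000.0, 15_000.0])
# flag == -5000.0 (understated wages relative to the W-2s)
```

Running such a check at initial processing would surface the discrepancy immediately, rather than up to 18 months later through the document matching program.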
As we have reported in prior audits, such inconsistencies generally go undetected until such time as IRS completes its document matching program, which can take as long as 18 months. In addition, during fiscal year 1997, IRS identified alleged employee embezzlement of refunds totaling over $269,000. IRS is also vulnerable to issuance of duplicate refunds made possible by gaps in IRS’ controls. IRS reported this condition as a material weakness in its fiscal year 1997 FIA report. The control weaknesses over refunds are magnified by significant levels of invalid Earned Income Credit (EIC) claims. IRS recently reported that during the period January 1995 through April 1995, an estimated $4.4 billion (25 percent) in EIC claims filed were invalid. This estimate does not reflect actual disbursements made for refunds involving EIC claims. However, it provides an indication of the magnitude of IRS’ and the federal government’s exposure to losses resulting from weak controls over refunds. While we were able to substantiate the amounts disbursed as refunds as reported on the fiscal year 1997 Custodial Financial Statements, IRS needs to have effective preventive controls in place to ensure that the federal government does not incur losses due to payment of inappropriate refunds. Once an inappropriate refund has been disbursed, IRS is compelled to expend both the time and expense to attempt to recover it, with dubious prospect of success. IRS is unable to currently determine the specific amount of revenue it actually collected for the Social Security, Hospital Insurance, Highway, and other relevant trust funds. As we previously reported, the primary reason for this weakness is that the accounting information needed to validate the taxpayer’s liability and record the payment to the proper trust fund is not provided at the time that taxpayers remit payments. Information is provided on the tax return, which can be received as late as 9 months after a payment is submitted. 
However, the information on the return only pertains to the amount of the tax liability, not the distribution of the amounts previously collected. As a result, IRS cannot report actual revenue collected for Social Security, Hospital Insurance, Highway, and other trust funds on a current basis nor can it accurately report revenue collected for individuals. Because of this weakness, IRS had to report Federal Insurance Contributions Act (FICA) and individual income tax collections in the same line item on its Statement of Custodial Activity for fiscal year 1997. However, requirements for the form and content of governmentwide financial statements require separate reporting of Social Security, Hospital Insurance, and individual income taxes collected. Beginning in fiscal year 1998, federal accounting standards will also require this reporting. Taxes collected by IRS on behalf of the federal government are deposited in the general revenue fund of the Department of the Treasury (Treasury), where they are subsequently distributed to the appropriate trust funds. Amounts representing Social Security and Hospital Insurance taxes are distributed to their respective trust funds based on information certified by the Social Security Administration (SSA). In contrast, for excise taxes, IRS certifies the amounts to be distributed based on taxes assessed, as reflected on the relevant tax forms. However, by law, distributions of excise taxes are to be based on taxes actually collected. The manner in which both FICA and excise taxes are distributed creates a condition in which the federal government’s general revenue fund subsidizes the Social Security, Hospital Insurance, Highway, and other trust funds. The subsidy occurs primarily because a significant number of businesses that file tax returns for Social Security, Hospital Insurance, and excise taxes ultimately go bankrupt or otherwise go out of business and never actually pay the assessed amounts. 
Additionally, with respect to Social Security and Hospital Insurance taxes, a significant number of self-employed individuals also do not pay the assessed amounts. While the subsidy is not necessarily significant with respect to excise taxes, it is significant for Social Security and Hospital Insurance taxes. At September 30, 1997, the estimated amount of unpaid taxes and interest in IRS’ unpaid assessments balance was approximately $44 billion for Social Security and Hospital Insurance, and approximately $1 billion for excise taxes. While these totals do not include amounts no longer in the unpaid assessments balance due to the expiration of the statutory collection period, they nevertheless give an indication of the cumulative amount of the subsidy. IRS places extensive reliance on computer systems to process tax returns, maintain taxpayer data, calculate interest and penalties, and generate refunds. Consequently, it is critical that IRS maintain adequate internal controls over these systems. We previously reported that IRS had serious weaknesses in the controls used to safeguard its computer systems, facilities, and taxpayer data. Our review of these controls as part of our audit of IRS’ fiscal year 1997 Custodial Financial Statements found that although many improvements have been made, overall controls continued to be ineffective. IRS’ controls over automated systems continued to exhibit serious weaknesses in (1) physical security, (2) logical security, (3) data communications management, (4) risk analysis, (5) quality assurance, (6) internal audit and security, and (7) contingency planning. Weaknesses in these areas can allow unauthorized individuals access to critical hardware and software where they may intentionally or inadvertently add, alter, or delete sensitive data or programs. IRS recognized these weaknesses in its fiscal year 1997 FIA report and has corrected a significant number of the computer security weaknesses identified in our previous reports. 
Additionally, IRS has centralized responsibility for security and privacy issues and added staff in this area. IRS is implementing plans to mitigate the remaining weaknesses by June 1999. In our fiscal year 1997 audit, we were able to verify the accuracy of the financial statement balances and disclosures originating in whole or in part from automated systems primarily through review and testing of supporting documentation. However, the absence of effective internal controls over IRS’ automated systems makes IRS vulnerable to losses, delays or interruptions in service, and compromising of the sensitive information entrusted to IRS by taxpayers. In addition to the material weaknesses discussed above, we identified one reportable condition that although not a material weakness, represents a significant deficiency in the design or operation of internal controls and could adversely affect IRS’ ability to meet the internal control objectives described in this report. This condition concerns weaknesses in IRS’ controls over its manually processed tax receipts. IRS’ controls over the receipt of cash and checks it manually receives from taxpayers are not adequate to assure that these payments will be properly credited to taxpayer accounts and deposited in the Treasury. To ensure that appropriate security over these receipts is maintained, IRS requires that lock box depositories receiving payments on its behalf use a surveillance camera to monitor staff when they open mail containing cash and checks. However, we found that payments received at the four IRS service centers where we tested controls over manual cash receipts were not subject to comparable controls. We found at these locations that (1) IRS allowed individuals to open mail unobserved, and relied on them to accurately report amounts received, and (2) payments received were not logged or otherwise recorded at the point of receipt to immediately establish accountability and thereby deter and detect diversion. 
In addition, at one service center, we observed payments being received by personnel who should not have been authorized to accept receipts. As a result of these weaknesses, IRS is vulnerable to losses of cash and checks received from taxpayers in payment of taxes due. In fact, between 1995 and 1997, IRS identified instances of actual or alleged employee embezzlement of receipts totaling about $4.6 million. These actual and alleged embezzlements underscore the need for effective internal controls over the IRS’ service center receipts process. Our tests of compliance with selected provisions of laws and regulations disclosed one instance of noncompliance that is reportable under generally accepted government auditing standards and OMB Bulletin 93-06, Audit Requirements for Federal Financial Statements. This concerns IRS’ noncompliance with a provision of the Internal Revenue Code concerning certification of excise taxes. We also noted that IRS’ financial management systems do not substantially comply with the requirements of FFMIA, which is reportable under OMB Bulletin 98-04. IRS policies and procedures for certification to Treasury of the distribution of the excise tax collections to the designated trust funds do not comply with the Internal Revenue Code. The Code requires IRS to certify the distribution of these excise tax collections to the recipient trust funds based on actual collections. However, as we have reported previously, and as discussed earlier in this report, IRS based its certifications of excise tax amounts to be distributed to specific trust funds on the assessed amount, or amount owed, as reflected on the tax returns filed by taxpayers. IRS has studied various options to enable it to make final certifications of amounts to be distributed based on actual collections and to develop the underlying information needed to support such certifications.
IRS was in the process of finalizing its proposed solution at the conclusion of our fiscal year 1996 audit; however, through the end of our fiscal year 1997 audit, IRS still had not implemented its proposed solution. For example, in December 1997, IRS certified the third quarter of fiscal year 1997 based on assessments rather than collections. As the auditor of IRS’ Custodial Financial Statements, we are reporting under FFMIA on whether IRS’ financial management systems substantially comply with the Federal Financial Management System Requirements (FFMSR), applicable federal accounting standards, and the SGL at the transaction level. As indicated by the material weaknesses we discussed earlier, IRS’ systems do not substantially comply with these requirements. For example, as noted previously, IRS does not have a general ledger that conforms with the SGL. Additionally, IRS lacks a subsidiary ledger for its unpaid assessments, and lacks an effective audit trail from its general ledger back to transaction source documents. These are all requirements under FFMSR. The other three material weaknesses we discussed above—controls over refunds, revenue accounting and reporting, and computer security—also are conditions indicating that IRS’ systems do not comply with FFMSR. In addition, the material weaknesses we noted above mean that IRS’ systems cannot produce reliable financial statements and related disclosures that conform with applicable federal accounting standards. Since IRS’ systems do not comply with FFMSR, applicable federal accounting standards, and the SGL, they also do not comply with OMB Circular A-127, Financial Management Systems. We have previously reported on many of these issues and made recommendations for corrective actions. IRS has drafted a plan of action intended to incrementally improve its financial reporting capabilities, which is scheduled to be fully implemented during fiscal year 1999. 
This plan is intended to bring IRS’ general ledger into conformance with the SGL and would be a step toward compliance with FFMSR. However, the plan falls short of fully meeting FFMSR requirements. For example, the plan will not provide for (1) full traceability of information through its systems (i.e., lack of an audit trail), (2) a subsidiary ledger to assist in distinguishing federal tax receivables from other unpaid assessments, and (3) reporting of revenue by tax type. As discussed later in this report, the latter example has implications for IRS’ ability to meet certain federal accounting standards required to be implemented in fiscal year 1998. IRS also has a longer-range plan to address the financial management system deficiencies noted in prior audits and in IRS’ own self-assessment. During future audits, we will monitor IRS’ implementation of these initiatives, and assess their effectiveness in resolving the material weaknesses discussed in this report. In addition to the material weaknesses and other reportable conditions and noncompliance with laws and regulations and FFMIA requirements discussed in the previous sections, we identified two other significant matters that we believe should be brought to the attention of IRS management and other users of IRS’ financial statements and other financial reports. These concern (1) the composition and collectibility of IRS’ unpaid assessments and (2) the importance of IRS successfully preparing its automated systems for the year 2000. As reflected in the supplemental information to IRS’ fiscal year 1997 Custodial Financial Statements, the unpaid assessments balance was about $214 billion as of September 30, 1997. This unpaid assessments balance has historically been referred to as IRS’ taxes receivable or accounts receivable. However, a significant portion of this balance is not considered a receivable. Also, a substantial portion of the amounts considered receivables is largely uncollectible. 
Under federal accounting standards, unpaid assessments require taxpayer or court agreement to be considered federal taxes receivable. Assessments not agreed to by taxpayers or the courts are considered compliance assessments and are not considered federal taxes receivable. Assessments with little or no future collection potential are called write-offs. Figure 1 depicts the components of the unpaid assessments balance as of September 30, 1997. Of the $214 billion balance of unpaid assessments, $76 billion represents write-offs. Write-offs principally consist of amounts owed by bankrupt or defunct businesses, including many failed financial institutions resolved by the Federal Deposit Insurance Corporation (FDIC) and the former Resolution Trust Corporation (RTC). As noted above, write-offs have little or no future collection potential. In addition, $48 billion of the unpaid assessments balance represents amounts that have not been agreed to by either the taxpayer or a court. Due to the lack of agreement, these compliance assessments are likely to have less potential for future collection than those unpaid assessments that are considered federal taxes receivable. The remaining $90 billion of unpaid assessments represent federal taxes receivable. About $62 billion (70 percent) of this balance is estimated to be uncollectible due primarily to the taxpayer’s economic situation, such as individual taxpayers who are unemployed or have other financial problems. However, IRS may continue collection action for 10 years after the assessment or longer under certain conditions. Thus these accounts may still ultimately have some collection potential if the taxpayer’s economic condition improves. About $28 billion, or about 30 percent, of federal taxes receivable is estimated to be collectible.
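As a quick cross-check, the composition figures cited above (dollars in billions) sum as stated; the variable names below are ours, not IRS account titles:

```python
# Cross-check of the unpaid assessments composition described above
# (dollars in billions, as of September 30, 1997).
write_offs = 76              # bankrupt/defunct businesses, FDIC/RTC cases
compliance_assessments = 48  # not agreed to by taxpayer or a court
taxes_receivable = 90        # taxpayer- or court-agreed assessments

total = write_offs + compliance_assessments + taxes_receivable
assert total == 214          # reported unpaid assessments balance

uncollectible = 62  # roughly 70 percent of taxes receivable
collectible = 28    # roughly 30 percent of taxes receivable
assert uncollectible + collectible == taxes_receivable
```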
Components of the collectible balance include installment agreements with estates and individuals, as well as relatively newer amounts due from individuals and businesses who have a history of compliance. It is also important to note that of the unpaid assessments balance, about $136 billion (over 60 percent) represents interest and penalties, as depicted in figure 2, which are largely uncollectible. Interest and penalties are such a high percentage of the balance because IRS continues to accrue them through the 10-year statutory collection date, regardless of whether an account meets the criteria for financial statement recognition or has any collection potential. For example, interest and penalties continue to accrue on write-offs, such as FDIC and RTC cases, as well as on exam assessments where the taxpayers have not agreed to the validity of the assessments. The overall growth in unpaid assessments during fiscal year 1997 was wholly attributable to the accrual of interest and penalties. It is critical that IRS successfully prepare its automated systems in order to overcome the potential problems associated with the year 2000. The Year 2000 problem is rooted in the way dates are recorded and calculated in many computer systems. For the past several decades, systems have typically used two digits to represent the year in order to conserve on electronic data storage and reduce operating costs. With this two-digit format, however, the year 2000 is indistinguishable from the year 1900. As a result, system or application programs that use dates to perform calculations, comparisons, or sorting may generate incorrect results when working with years after 1999. IRS has underway one of the largest conversion efforts in the civilian sector. IRS has established a schedule to renovate its automated systems in five segments, with all renovation efforts scheduled for completion by January 1999 in order to allow a full year of operational testing.
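The two-digit date failure described above is easy to reproduce; the following is a generic illustration, not IRS code:

```python
# Generic illustration of the two-digit-year problem described above:
# with "YY" storage, the year 2000 ("00") sorts and subtracts as if it
# were 1900, so elapsed-time arithmetic breaks for years after 1999.

def elapsed_years_two_digit(start_yy, end_yy):
    """Naive elapsed-time calculation on two-digit years."""
    return end_yy - start_yy

# An assessment made in 1993 ("93"), checked against a 10-year statutory
# collection period in 2000 ("00"):
wrong = elapsed_years_two_digit(93, 0)       # -93: nonsense result
right = elapsed_years_two_digit(1993, 2000)  # 7: correct with 4-digit years
```

A program using the two-digit form could, for example, conclude that a statutory collection period expired decades ago, or miscompute interest and penalty accruals.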
However, with less than 2 years remaining until the year 2000 arrives, the task of completing the conversion on time is formidable. If IRS is unable to make its automated systems Year 2000 compliant, IRS could be rendered unable to properly process tax returns, issue refunds, correctly calculate interest and penalties, effectively collect taxes, or prepare accurate financial statements and other financial reports. We are working with the Congress and the executive branch to monitor progress made by federal agencies and identify specific recommendations for resolving the Year 2000 problem, which we reported as a governmentwide high risk area and which the President has designated as a priority management objective. In addition to the weaknesses discussed above, we noted other, less significant matters involving IRS’ system of accounting controls and its operations which we will be reporting separately to IRS. The Custodial Financial Statements, including the accompanying notes, present fairly, in all material respects, and in conformity with a comprehensive basis of accounting other than generally accepted accounting principles, as described in note 1, IRS’ custodial assets and liabilities and custodial activity. Although the weaknesses described above precluded IRS’ internal controls from achieving the internal control objectives discussed previously, we were nevertheless able to obtain reasonable assurance that the Custodial Financial Statements were reliable through the use of substantive audit procedures. However, misstatements may nevertheless occur in other financial information reported by IRS as a result of the internal control weaknesses described above. As discussed in the notes to the fiscal year 1997 Custodial Financial Statements, IRS has attempted, to the extent practical, to implement early the provisions of Statement of Federal Financial Accounting Standards (SFFAS) No. 
7, Accounting for Revenue and Other Financing Sources and Concepts for Reconciling Budgetary and Financial Accounting. SFFAS No. 7 is not effective until fiscal year 1998. However, the requirement that this standard be fully implemented in fiscal year 1998 has significant implications for IRS and its fiscal year 1998 Custodial Financial Statements. The significant internal control and system weaknesses discussed earlier may affect IRS’ ability to implement this standard until corrective actions have fully resolved these weaknesses. For example, as discussed earlier, IRS currently does not capture information at the time of receipt of payments from the taxpayer on how such payments are to be applied to the various trust funds. Consequently, IRS is presently unable to report collections of tax revenue by specific tax type as envisioned in SFFAS No. 7 and OMB’s Format and Instructions for the Form and Content of the Financial Statements of the U.S. Government (September 2, 1997). Other provisions of SFFAS No. 7 will also be difficult for IRS to implement in the short term until the significant internal control and systems issues reported in prior audits and discussed above are resolved. We evaluated IRS management’s assertion about the effectiveness of its internal controls designed to safeguard assets against loss from unauthorized acquisition, use, or disposition; assure the execution of transactions in accordance with laws governing the use of budget authority and other laws and regulations that have a direct and material effect on the Custodial Financial Statements or are listed in OMB audit guidance and could have a material effect on the Custodial Financial Statements; and properly record, process, and summarize transactions to permit the preparation of reliable financial statements and to maintain accountability for assets.
IRS management asserted that except for the material weaknesses in internal controls presented in the agency’s fiscal year 1997 FIA report on compliance with the internal control and accounting standards, internal controls provided reasonable assurance that the above internal control objectives were satisfied during fiscal year 1997. Management made this assertion based upon criteria established under FIA and OMB Circular A-123, Management Accountability and Control. Our internal control work would not necessarily disclose material weaknesses not reported by IRS. However, we believe that IRS’ internal controls, taken as a whole, were not effective in satisfying the control objectives discussed above during fiscal year 1997 because of the severity of the material weaknesses in internal controls described in this report, which were also cited by IRS in its fiscal year 1997 FIA report. Except as noted above, our tests of compliance with selected provisions of laws and regulations disclosed no other instances of noncompliance which we consider to be reportable under generally accepted government auditing standards or OMB Bulletin 93-06. Under FFMIA and OMB Bulletin 98-04, our tests disclosed, as discussed above, that IRS’ financial management systems do not substantially comply with the requirements for the following: federal financial management systems, applicable federal accounting standards, and the U.S. Government Standard General Ledger at the transaction level. However, the objective of our audit was not to provide an opinion on overall compliance with laws, regulations, and FFMIA requirements tested. Accordingly, we do not express such an opinion. IRS’ overview and supplemental information contain various data, some of which are not directly related to the Custodial Financial Statements. We do not express an overall opinion on this information. 
However, we compared this information for consistency with the Custodial Financial Statements and, based on our limited work, found no material inconsistencies. Management is responsible for preparing the annual Custodial Financial Statements in conformity with the basis of accounting described in note 1; establishing, maintaining, and assessing internal controls to provide reasonable assurance that the broad control objectives of FIA are met; and complying with applicable laws and regulations and FFMIA requirements. We are responsible for obtaining reasonable assurance about whether (1) the Custodial Financial Statements are reliable (free of material misstatements and presented fairly, in all material respects, in conformity with the basis of accounting described in note 1), and (2) management’s assertion about the effectiveness of internal controls is fairly stated, in all material respects, based upon criteria established under the Federal Managers’ Financial Integrity Act of 1982 and OMB Circular A-123, Management Accountability and Control. We are also responsible for testing compliance with selected provisions of laws and regulations, for reporting on compliance with FFMIA requirements, and for performing limited procedures with respect to certain other information appearing in these annual Custodial Financial Statements.
In order to fulfill these responsibilities, we examined, on a test basis, evidence supporting the amounts and disclosures in the Custodial Financial Statements; assessed the accounting principles used and significant estimates made by management in the preparation of the Custodial Financial Statements; evaluated the overall presentation of the Custodial Financial Statements; obtained an understanding of internal controls related to safeguarding assets, compliance with laws and regulations, including execution of transactions in accordance with budget authority and financial reporting; tested relevant internal controls over safeguarding, compliance, and financial reporting and evaluated management’s assertion about the effectiveness of internal controls; tested compliance with selected provisions of the following laws and regulations: Internal Revenue Code (appendix I), Debt Collection Act, as amended (31 U.S.C. § 3720A), Government Management Reform Act of 1994 (31 U.S.C. §§ 3515, 3521(e)-(f)), and Federal Managers’ Financial Integrity Act of 1982 (31 U.S.C. § 3512(d)); and tested whether IRS’ financial management systems substantially comply with the requirements of the Federal Financial Management Improvement Act of 1996, including Federal Financial Management Systems Requirements, applicable federal accounting standards, and the U.S. Government Standard General Ledger at the transaction level. We did not evaluate all internal controls relevant to operating objectives as broadly defined by FIA, such as those controls relevant to preparing statistical reports and ensuring efficient operations. We limited our internal control testing to those controls necessary to achieve the objectives outlined in our opinion on management’s assertion about the effectiveness of internal controls.
As the auditor of IRS’ Custodial Financial Statements, we are reporting under FFMIA on whether the agency’s financial management systems substantially comply with the Federal Financial Management Systems Requirements, applicable federal accounting standards, and the U.S. Government Standard General Ledger at the transaction level. In making this report, we considered the implementation guidance for FFMIA issued by OMB on September 9, 1997. The IRS’ Custodial Financial Statements do not reflect the potential impact of any excess of taxes due in accordance with the Internal Revenue Code, over taxes actually assessed by IRS, often referred to as the “tax gap.” SFFAS No. 7 specifically excludes the “tax gap” from financial statement reporting requirements. Consequently, the Custodial Financial Statements do not consider the impact of the tax gap. We performed our work in accordance with generally accepted government auditing standards and OMB Bulletin 93-06. In commenting on a draft of this report, IRS stated that it generally agreed with the findings and conclusions in the report. IRS acknowledged the internal control weaknesses and noncompliance with laws and regulations we cited, and discussed initiatives underway to address many of the issues raised in the report. We will evaluate the effectiveness of IRS’ corrective actions as part of our audit of IRS’ fiscal year 1998 Custodial Financial Statements. However, we do not agree with IRS’ assertion that it needs a change in legislation to obtain information from taxpayers at the time of remittance to properly allocate excise tax payments to the various trust funds. We recognize that resolution of many of these issues could take several years. IRS agreed with our conclusion that its financial management systems do not comply with the Federal Financial Management Systems Requirements and the U.S. Government Standard General Ledger requirements of the Federal Financial Management Improvement Act of 1996. 
However, IRS believes that its current accounting and financial reporting process complies with applicable federal accounting standards. OMB’s September 9, 1997, memorandum on implementation guidance for FFMIA specifies two indicators that must be present to indicate compliance with federal accounting standards. First, the agency generally should receive an unqualified opinion on its financial statements. Second, there should be no material weaknesses in internal controls that affect the agency’s ability to prepare auditable financial statements and related disclosures. As we reported, IRS received an unqualified opinion on its financial statements. However, as discussed in this report, we identified six material weaknesses in IRS’ internal controls. As a result of these weaknesses, IRS’ financial management systems are unable to produce reliable financial statements and related disclosures without extensive ad hoc procedures and tens of billions of dollars in adjustments. Consequently, IRS’ financial management systems are not in compliance with applicable federal accounting standards requirements. IRS’ written comments are included in appendix II. Thomas Armstrong, Assistant General Counsel; Andrea Levine, Attorney.
Pursuant to a legislative requirement, GAO examined the Internal Revenue Service's (IRS) custodial financial statements for the fiscal year (FY) ending September 30, 1997. GAO noted that: (1) the IRS custodial financial statements were reliable in all material respects; (2) IRS management's assertion about the effectiveness of internal controls stated that except for the material weaknesses in internal controls presented in the agency's FY 1997 Federal Managers' Financial Integrity Act (FIA) report, internal controls were effective in satisfying the following objectives: (a) safeguarding assets from material loss; (b) assuring material compliance with laws governing the use of budget authority and with other relevant laws and regulations; and (c) assuring that there were no other material misstatements in the custodial financial statements; (3) however, GAO found that IRS' internal controls, taken as a whole, were not effective in satisfying these objectives; (4) due to the severity of the material weaknesses in IRS' financial accounting and reporting controls, all of which were reported in IRS' FY 1997 FIA report, extensive reliance on ad hoc programming and analysis was needed to develop financial statement line item balances, and the resulting amounts needed material audit adjustments to produce reliable custodial financial statements; and (5) GAO found one reportable instance of noncompliance with the selected provisions of laws and regulations it tested and determined that IRS' financial management systems do not substantially comply with the requirements of the Federal Financial Management Improvement Act of 1996.
The primary mission of the Federal Aviation Administration (FAA) is to provide a safe, secure, and efficient global aerospace system that contributes to national security and the promotion of U.S. aerospace safety. FAA’s ability to fulfill this mission depends on the adequacy and reliability of the nation’s air traffic control (ATC) systems—a vast network of computer hardware, software, and communications equipment. To accommodate forecasted growth in air traffic and to relieve the problems of aging ATC systems, FAA embarked on an ambitious ATC modernization program in 1981. FAA now estimates that it will spend about $51 billion to replace and modernize ATC systems through 2007. Our work over the years has chronicled many FAA problems in meeting ATC projects’ cost, schedule, and performance goals. As a result of these issues as well as the tremendous cost, complexity, and mission criticality of the modernization program, we designated the program as a high-risk information technology initiative in 1995, and it has remained on our high-risk list since that time. Automated information processing and display, communication, navigation, surveillance, and weather resources permit air traffic controllers to view key information—such as aircraft location, aircraft flight plans, and prevailing weather conditions—and to communicate with pilots. These resources reside at, or are associated with, several ATC facilities—ATC towers, terminal radar approach control facilities, air route traffic control centers (en route centers), flight service stations, and the ATC System Command Center. Figure 2 shows a visual summary of ATC over the continental United States and oceans. Faced with growing air traffic and aging equipment, in 1981, FAA initiated an ambitious effort to modernize its ATC system.
This effort involves the acquisition of new surveillance, data processing, navigation, and communications equipment, in addition to new facilities and support equipment. Initially, FAA estimated that its ATC modernization effort would cost $12 billion and could be completed over 10 years. Now, 2 decades and $35 billion later, FAA expects to need another $16 billion through 2007 to complete key projects, for a total cost of $51 billion. Over the past 2 decades, many of the projects that make up the modernization program have experienced substantial cost overruns, schedule delays, and significant performance shortfalls. Our work over the years has documented many of these shortfalls. As a result of these problems, as well as the tremendous cost, complexity, and mission criticality of the modernization program, we designated the program as a high-risk information technology initiative in 1995, and it has remained on our high-risk list since that time. Our work since the mid-1990s has pinpointed root causes of the modernization program’s problems, including (1) immature software acquisition capabilities, (2) lack of a complete and enforced system architecture, (3) inadequate cost estimating and cost accounting practices, (4) an ineffective investment management process, and (5) an organizational culture that impaired the acquisition process. We have made over 30 recommendations to address these issues, and FAA has made substantial progress in addressing them. Nonetheless, in our most recent high-risk report, we noted that more remains to be done—and with FAA still expecting to spend billions on new ATC systems, these actions are as critical as ever. In March 1997, we reported that FAA’s processes for acquiring software, the most costly and complex component of its ATC systems, were ad hoc, sometimes chaotic, and not repeatable across projects. We also reported that the agency lacked an effective management structure for ensuring software process improvement.
As a result, the agency was at great risk of not delivering promised software capabilities on time and within budget. We recommended that FAA establish a Chief Information Officer organizational structure, as prescribed in the Clinger-Cohen Act, and assign responsibility for software acquisition process improvement to this organization. We also recommended several actions intended to help FAA improve its software acquisition capabilities by institutionalizing mature processes. These included developing a comprehensive plan for process improvement, allocating adequate resources to ensure that improvement efforts were implemented, and requiring that projects achieve a minimum level of maturity before being approved. FAA has implemented most of our recommendations. The agency established a Chief Information Officer position that reports directly to the administrator and gave this position responsibility for process improvement. The Chief Information Officer’s process improvement office developed a strategy and led the way in developing an integrated framework for improving maturity in system acquisition, development, and engineering processes. Some of the business organizations within FAA, including the organizations responsible for ATC acquisitions and operations, adopted the framework and provided resources to process improvement efforts. FAA did not, however, implement our recommendation to require that projects achieve a minimum level of maturity before being approved. Officials reported that rather than establish arbitrary thresholds for maturity, FAA intended to evaluate process areas that were most critical or at greatest risk for each project during acquisition management reviews. Recent legislation and an executive order have led to major changes in the way that FAA manages its ATC mission. In April 2000, the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (Air-21) established the position of Chief Operating Officer for the ATC system. 
In December 2000, executive order 13180 instructed FAA to establish a performance-based organization known as the Air Traffic Organization and to have the Chief Operating Officer lead this organization under the authority of the FAA administrator. This order, amended in June 2002, called for the Air Traffic Organization to enhance the FAA’s primary mission of ensuring the safety, security, and efficiency of the National Airspace System and further improve the delivery of air traffic services to the American public by reorganizing air traffic services and related offices into a performance-based, results-oriented organization. The order noted that as a performance-based organization, the Air Traffic Organization would be able to take better advantage of the unique procurement and personnel authorities currently used by FAA, as well as of the additional management reforms enacted by Congress under Air-21. In addition, the Air Traffic Organization is responsible for developing methods to accelerate ATC modernization, improving aviation safety related to ATC, and establishing strong incentives to agency managers for achieving results. In leading the new Air Traffic Organization, the Chief Operating Officer’s responsibilities include establishing and maintaining organizational and individual goals, a 5-year strategic plan including ATC system mission and objectives, and a framework agreement with the Administrator to establish the new organization’s relationships with other FAA organizations. In August 2003, the first Chief Operating Officer joined the agency and initiated a reorganization combining the separate ATC-related organizations and offices into the Air Traffic Organization. An essential aspect of FAA’s ATC modernization program is the quality of the software and systems involved, which is heavily influenced by the quality and maturity of the processes used to acquire, develop, manage, and maintain them. 
Carnegie Mellon University’s Software Engineering Institute (SEI), recognized for its expertise in software and system processes, has developed the Capability Maturity Model Integration (CMMI) and a CMMI appraisal methodology to evaluate, improve, and manage system and software development and engineering processes. The CMMI model and appraisal methodology provide a logical framework for measuring and improving key processes needed for achieving high-quality software and systems. The model can help an organization set process improvement objectives and priorities and improve processes; the model can also provide guidance for ensuring stable, capable, and mature processes. According to SEI, organizations that implement such process improvements can achieve better project cost and schedule performance and higher quality products. In brief, the CMMI model identifies 25 process areas—clusters of related practices that, when performed collectively, satisfy a set of goals that are considered important for making significant improvements in that area. Table 1 describes these process areas. The CMMI model provides two alternative ways to view these process areas. One way, called continuous representation, focuses on improving capabilities in individual process areas. The second way, called staged representation, groups process areas together and focuses on achieving increased maturity levels by improving the group of process areas. The CMMI appraisal methodology calls for assessing process areas by determining whether the key practices are implemented and whether the overarching goals are satisfied. Under continuous representation, successful implementation of these practices and satisfaction of these goals result in the achievement of successive capability levels in a selected process area. 
CMMI capability levels range from 0 to 5, with level 0 meaning that the process is either not performed or partially performed; level 1 meaning that the basic process is performed; level 2 meaning that the process is managed; level 3 meaning that the process is defined throughout the organization; level 4 meaning that the process is quantitatively managed; and level 5 meaning that the process is optimized. Figure 3 provides details on CMMI capability levels. The Chairman, House Committee on Government Reform, and the Chairman of that Committee’s Subcommittee on Technology, Information Policy, Intergovernmental Relations and the Census requested that we evaluate FAA’s software and system development processes used to manage its ATC modernization. Our objectives were (1) to evaluate FAA’s capabilities for developing and acquiring software and systems on its ATC modernization program and (2) to assess the actions FAA has under way to improve these capabilities. To evaluate FAA’s capabilities for developing and acquiring software and systems, we applied the CMMI model (continuous representation) and its related appraisal methodology to four FAA projects. Our appraisers were all SEI-trained software and information systems specialists. In addition, we employed SEI-trained consultants as advisors on our first evaluation to ensure proper application of the model and appraisal methodology. In consultation with FAA officials, we selected four FAA projects with high impact, visibility, and cost, which represented different air traffic domains and reflected different stages of life cycle development. The projects included the Voice Switching and Control System (VSCS), the Integrated Terminal Weather System (ITWS), the En Route Automation Modernization (ERAM) project, and the Airport Surface Detection Equipment–Model X (ASDE-X). The four projects are described in table 2.
In conjunction with FAA’s process improvement organization, we identified relevant CMMI process areas for each appraisal. In addition, because system deployment is an important aspect of FAA systems management that is not included in CMMI, we used the deployment, transition, and disposal process area from FAA’s integrated Capability Maturity Model, version 2. For consistency, we merged FAA’s criteria with SEI’s framework and added the standard goals and practices needed to achieve capability level 2. In selected cases, we did not review a certain process area because it was not relevant to the current stage of a project’s life cycle. For example, we did not evaluate supplier agreement management or deployment on VSCS because the system is currently in operation, and these process areas are no longer applicable to this system. Table 3 displays the CMMI process areas that we reviewed for each project. For each process area reviewed, we evaluated project-specific documentation and interviewed project officials to determine whether key practices were implemented and goals were achieved. In accordance with CMMI guidance, we characterized practices as fully implemented, largely implemented, partially implemented, and not implemented, and characterized goals as satisfied or unsatisfied. After combining the practices and goals, the team determined if successive capability levels were achieved. According to the CMMI appraisal method, practices must be largely or fully implemented in order for a goal to be satisfied. Further, all goals must be satisfied in order to achieve a capability level. In order to achieve advanced capability levels, all preceding capability levels must be achieved. For example, a prerequisite for level 2 is the achievement of level 1. As agreed with FAA process improvement officials, we evaluated the projects through capability level 2. 
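The roll-up rules described above—practices must be largely or fully implemented for a goal to be satisfied, all goals must be satisfied to achieve a capability level, and each level requires all preceding levels—can be sketched as follows. The function names and data shapes are our own illustration, not FAA's or SEI's appraisal tooling:

```python
# Hypothetical sketch of the CMMI appraisal roll-up described above.
# Practice characterizations follow the four categories used in the appraisal.
FULLY, LARGELY, PARTIALLY, NOT = "FI", "LI", "PI", "NI"

def goal_satisfied(practice_ratings):
    """A goal is satisfied only if every practice is fully or largely implemented."""
    return all(r in (FULLY, LARGELY) for r in practice_ratings)

def capability_level(goals_by_level):
    """goals_by_level maps a level number to a list of per-goal practice ratings.
    A level is achieved only if all of its goals are satisfied and every
    preceding level has been achieved (e.g., level 2 requires level 1)."""
    achieved = 0
    for level in sorted(goals_by_level):
        if all(goal_satisfied(goal) for goal in goals_by_level[level]):
            achieved = level
        else:
            break  # a failed level blocks all higher levels
    return achieved

# One partially implemented practice at level 2 caps the project at level 1:
ratings = {1: [[FULLY, LARGELY]], 2: [[FULLY, PARTIALLY]]}
print(capability_level(ratings))  # 1
```

This is why a project can fall one practice short of a level, as several of the appraised projects did at level 1: a single practice that is only partially implemented leaves a goal unsatisfied, and an unsatisfied goal blocks the level.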
Consistent with the CMMI appraisal methodology, we validated our findings by sharing preliminary observations with the project team so that they were able to provide additional documentation or information as warranted. To assess the actions FAA has under way to improve its system and software acquisition and development processes, we evaluated process improvement strategies and plans. We also evaluated the progress the agency has made in expanding its process improvement initiative, both through the maturity of the model and the acceptance of the model by project teams. We also interviewed officials from the offices of the Chief Information Officer and the Chief Operating Officer to determine the effect current changes in the ATC organization could have on the process improvement initiatives. The Department of Transportation and FAA provided oral comments on a draft of this report. These comments are presented in chapter 17. We performed our work from September 2003 through July 2004 in accordance with generally accepted government auditing standards. The purpose of project planning is to establish and maintain plans that define the project activities. This process area involves developing and maintaining a plan, interacting with stakeholders, and obtaining commitment to the plan. As figure 4 shows, three of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. The fourth project would have achieved level 1 if it had performed one more practice (see the overview in table 4 for details). None of the four projects satisfied all criteria for the “managing” capability level (level 2). While all four projects had differing weaknesses that contributed to this result, common weaknesses across most of the projects occurred in the areas of monitoring and controlling the project planning process and in ensuring quality assurance of the process. 
As a result of these weaknesses, FAA is exposed to increased risks that projects will not meet cost, schedule, or performance goals and that projects will not meet mission needs. Looked at another way, of the 96 practices we evaluated in this process area, FAA projects had 88 practices that were fully or largely implemented and 8 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 5 through 12. Specifically, tables 5 and 6 provide results for VSCS; tables 7 and 8 provide results for ERAM; tables 9 and 10 provide results for ITWS; and tables 11 and 12 provide results for ASDE-X. The purpose of project monitoring and control is to provide an understanding of the project’s progress so that appropriate corrective actions can be taken when the project’s performance deviates significantly from the plan. Key activities include monitoring activities, communicating status, taking corrective action, and determining progress. As shown in figure 5, three of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. The fourth project would have achieved level 1 if it had performed one more practice (see the overview in table 13 for details). None of the four projects satisfied all criteria for the “managing” capability level (level 2). While the projects had differing weaknesses that contributed to this result, a common weakness across most of the projects occurred in the area of ensuring quality assurance of the process. As a result of this weakness, FAA is exposed to increased risks that projects will not meet cost, schedule, or performance goals and that projects will not meet mission needs. Looked at another way, of the 80 practices we evaluated in this process area, FAA projects had 74 practices that were fully or largely implemented and 6 practices that were partially or not implemented. 
Additional details on each project’s appraisal results at successive capability levels are provided in tables 14 through 21. Specifically, tables 14 and 15 provide results for VSCS; tables 16 and 17 provide results for ERAM; tables 18 and 19 provide results for ITWS; and tables 20 and 21 provide results for ASDE-X. The purpose of risk management is to identify potential problems before they occur, so that risk-handling activities may be planned and invoked as needed across the life of the product or project to mitigate adverse impacts on achieving objectives. Effective risk management includes early and aggressive identification of risks through the involvement of relevant stakeholders. Early and aggressive detection of risk is important, because it is typically easier, less costly, and less disruptive to make changes and correct work efforts during the earlier phases of the project. As shown in figure 6, three of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. The fourth project would have achieved level 1 if it had performed one more practice (see the overview in table 22 for details). Two of the four FAA projects also satisfied all criteria for the “managed” capability level (level 2) in this process area. While the other projects had differing weaknesses that contributed to this result, common weaknesses across some of the projects occurred in the area of monitoring and controlling the risk management process and in ensuring quality assurance of the process. As a result of these weaknesses, FAA faces increased likelihood that project risks will not be identified and addressed in a timely manner—thereby increasing the likelihood that projects will not meet cost, schedule, or performance goals. Looked at another way, of the 68 practices we evaluated in this key process area, FAA projects had 59 practices that were fully or largely implemented and 9 practices that were partially or not implemented. 
Additional details on each project’s appraisal results at successive capability levels are provided in tables 23 through 30. Specifically, tables 23 and 24 provide results for VSCS; tables 25 and 26 provide results for ERAM; tables 27 and 28 provide results for ITWS; and tables 29 and 30 provide results for ASDE-X. The purpose of requirements development is to produce and analyze customer, product, and product-component needs. This process area addresses the needs of relevant stakeholders, including those pertinent to various product life-cycle phases. It also addresses constraints caused by the selection of design solutions. The development of requirements includes elicitation, analysis, validation, and communication of customer and stakeholder needs and expectations. As shown in figure 7, all four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. None of the four projects satisfied all criteria for the “managed” capability level (level 2). While all four projects had differing weaknesses that contributed to this result, common weaknesses across multiple projects occurred in the areas of training people and ensuring quality assurance of the requirements development process, as shown in the overview in table 31. As a result of these weaknesses, FAA is exposed to increased risks that projects will not fulfill mission and user needs. Looked at another way, of the 84 practices we evaluated in this key process area, FAA projects had 77 practices that were fully or largely implemented and 7 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 32 through 39. Specifically, tables 32 and 33 provide results for VSCS; tables 34 and 35 provide results for ERAM; tables 36 and 37 provide results for ITWS; and tables 38 and 39 provide results for ASDE-X.
The purpose of requirements management is to manage the requirements of the project’s products and product components and to identify inconsistencies between those requirements and the project’s plans and work products. This process area includes managing all technical and nontechnical requirements and any changes to these requirements as they evolve. As shown in figure 8, all four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area, but none satisfied all criteria for achieving a “managed” capability level (level 2). While the projects had differing weaknesses that contributed to this result, a common weakness across most of the projects occurred in the area of ensuring quality assurance of the requirements management process, as shown in the overview in table 40. As a result of these weaknesses, FAA is exposed to increased risks that projects will not fulfill mission and user needs. Looked at another way, of the 60 practices we evaluated in this key process area, FAA projects had 54 practices that were fully or largely implemented and 6 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 41 through 48. Specifically, tables 41 and 42 provide results for VSCS; tables 43 and 44 provide results for ERAM; tables 45 and 46 provide results for ITWS; and tables 47 and 48 provide results for ASDE-X. The purpose of the technical solution process area is to design, develop, and implement products, product components, and product-related life-cycle processes to meet requirements. This process involves evaluating and selecting solutions that potentially satisfy an appropriate set of allocated requirements, developing detailed designs, and implementing the design. As shown in figure 9, three of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area.
The fourth project would have achieved level 1 if it had performed two more practices (see the overview in table 49 for details). None of the four projects satisfied all criteria for the “managed” capability level (level 2). While all four projects had differing weaknesses that contributed to this result, a common weakness across most of the projects occurred in the area of ensuring quality assurance of the technical solution process. As a result of this weakness, FAA is exposed to increased risks that projects will not meet mission needs. Looked at another way, of the 72 practices we evaluated in this key process area, FAA projects had 62 practices that were fully or largely implemented and 10 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 50 through 57. Specifically, tables 50 and 51 provide results for VSCS; tables 52 and 53 provide results for ERAM; tables 54 and 55 provide results for ITWS; and tables 56 and 57 provide results for ASDE-X. The purpose of the product integration process is to assemble the product components, ensure that the integrated product functions properly, and deliver the product. A critical aspect of this process is managing the internal and external interfaces of the products and product components, in one stage or in incremental stages. For this process area, we did not perform an appraisal for the ERAM project, because it was at a stage in which product integration was not applicable. As shown in figure 10, the three remaining projects satisfied all criteria for the “performing” capability level (level 1) in this process area. None of the projects satisfied all criteria for the “managed” capability level (level 2).
While the projects had differing weaknesses that contributed to this result, common weaknesses across most of the projects occurred in the areas of monitoring and controlling the product integration process and ensuring quality assurance of the process, as shown in the overview in table 58. As a result of these weaknesses, FAA is exposed to increased risk that product components will not be compatible, resulting in projects that will not meet cost, schedule, or performance goals. Looked at another way, of the 54 practices we evaluated in this process area, FAA projects had 49 practices that were fully or largely implemented and 5 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 59 through 64. Specifically, tables 59 and 60 provide results for VSCS; tables 61 and 62 provide results for ITWS; and tables 63 and 64 provide results for ASDE-X. The purpose of verification is to ensure that selected work products meet their specified requirements. This process area involves preparing for and performing tests and identifying corrective actions. Verification of work products substantially increases the likelihood that the product will meet the customer, product, and product-component requirements. As shown in figure 11, only one of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. As shown in the overview in table 65, key weaknesses in preparing and conducting peer reviews prevented the other three projects from achieving level 1. None of the four projects satisfied all criteria for the “managed” capability level (level 2). While all four projects had differing weaknesses that contributed to this result, common weaknesses across most of the projects occurred in the areas of monitoring and controlling the verification process and in ensuring quality assurance of the process.
As a result of these weaknesses, FAA is exposed to increased risk that the product will not meet the user and mission requirements, increasing the likelihood that projects will not meet cost, schedule, or performance goals. Looked at another way, of the 68 practices we evaluated in this process area, FAA projects had 51 practices that were fully or largely implemented and 17 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 66 through 73. Specifically, tables 66 and 67 provide results for VSCS; tables 68 and 69 provide results for ERAM; tables 70 and 71 provide results for ITWS; and tables 72 and 73 provide results for ASDE-X. The purpose of validation is to demonstrate that a product or product component fulfills its intended use when placed in its intended environment. Validation activities are vital to ensuring that the products are suitable for use in their intended operating environment. As shown in figure 12, all four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. None of the four projects satisfied all criteria for the “managed” capability level (level 2). While all four projects had differing weaknesses that contributed to this result, common weaknesses across most of the projects occurred in the areas of monitoring and controlling the validation process and in ensuring quality assurance of the process, as shown in the overview in table 74. As a result of these weaknesses, FAA is exposed to increased risk that the product will not fulfill its intended use, thereby increasing the likelihood that the projects will not meet cost, schedule, or performance goals. Looked at another way, of the 56 practices we evaluated in this process area, FAA projects had 47 practices that were fully or largely implemented and 9 practices that were partially or not implemented.
Additional details on each project’s appraisal results at successive capability levels are provided in tables 75 through 82. Specifically, tables 75 and 76 provide results for VSCS; tables 77 and 78 provide results for ERAM; tables 79 and 80 provide results for ITWS; and tables 81 and 82 provide results for ASDE-X. The purpose of configuration management is to establish and maintain the integrity of work products. This process area includes both the functional processes used to establish and track work product changes and the technical systems used to manage these changes. Through configuration management, accurate status and data are provided to developers, end users, and customers. As shown in figure 13, three of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. The fourth project would have achieved level 1 if it had performed two more practices (see the overview in table 83 for details). Only one of the four projects satisfied all criteria for the “managed” capability level (level 2). While all four projects had differing weaknesses that contributed to this result, common weaknesses across some of the projects occurred in the areas of monitoring and controlling the process and in ensuring quality assurance of the configuration management process, as shown in the overview in table 83. As a result of these weaknesses, FAA is exposed to increased risk that the project teams will not effectively manage their work products, resulting in projects that do not meet cost, schedule, or performance goals. Looked at another way, of the 68 practices we evaluated in this process area, FAA projects had 60 practices that were fully or largely implemented and 8 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 84 through 91.
Specifically, tables 84 and 85 provide results for VSCS; tables 86 and 87 provide results for ERAM; tables 88 and 89 provide results for ITWS; and tables 90 and 91 provide results for ASDE-X. The purpose of process and product quality assurance is to provide staff and management with objective insights into processes and associated work products. This process area includes the objective evaluation of project processes and products against approved descriptions and standards. Through process and product quality assurance, the project is able to identify and document noncompliance issues and provide appropriate feedback to project members. As shown in figure 14, only one of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. Weaknesses in the objective evaluation of designated performed processes, work products, and services against the applicable process descriptions, standards, and procedures prevented the other three projects from achieving level 1. None of the four projects satisfied all criteria for the “managed” capability level (level 2). Table 92 provides an overview of our appraisal results. As shown in the table, while the four projects had differing weaknesses that contributed to this result, common weaknesses across multiple projects occurred in the areas of establishing a plan, providing resources, training people, providing configuration management, identifying stakeholders, monitoring and controlling the process, ensuring quality assurance, and reviewing the status of the quality assurance process with higher-level managers. As a result of these weaknesses, FAA is exposed to increased risk that the projects will not effectively implement key management processes, resulting in projects that will not meet cost, schedule, or performance goals, and that will not meet mission needs.
Looked at another way, of the 56 practices we evaluated in this process area, FAA projects had 33 practices that were fully or largely implemented and 23 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 93 through 100. Specifically, tables 93 and 94 provide results for VSCS; tables 95 and 96 provide results for ERAM; tables 97 and 98 provide results for ITWS; and tables 99 and 100 provide results for ASDE-X. The purpose of measurement and analysis is to develop and sustain a measurement capability that is used to support management information needs. This process area includes the specification of measures, data collection and storage, analysis techniques, and the reporting of these values. This process allows users to objectively plan and estimate project activities and identify and resolve potential issues. As shown in figure 15, none of the four FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. Weaknesses in managing and storing measurement data, measurement specifications, and analysis results kept the projects from achieving level 1. Further, none of the four projects satisfied all criteria for the “managed” capability level (level 2). As shown in the overview in table 101, while the four projects had differing weaknesses that contributed to this result, common weaknesses across multiple projects occurred in the areas of establishing an organizational policy, establishing a plan, providing resources, assigning responsibility, training people, providing configuration management, identifying stakeholders, monitoring and controlling the process, ensuring quality assurance, and reviewing the status of the measurement and analysis process with higher-level management.
As a result of these weaknesses, FAA is exposed to increased risk that the projects will not have adequate estimates of work metrics or a sufficient view into actual performance. This increases the likelihood that projects will not meet cost, schedule, or performance goals, and that projects will not meet mission needs. Looked at another way, of the 72 practices we evaluated in this process area, FAA projects had 30 practices that were fully or largely implemented and 42 practices that were partially or not implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 102 through 109. Specifically, tables 102 and 103 provide results for VSCS; tables 104 and 105 provide results for ERAM; tables 106 and 107 provide results for ITWS; and tables 108 and 109 provide results for ASDE-X. The purpose of supplier agreement management is to manage the acquisition of products. This process area involves determining the type of acquisition that will be used for the products acquired; selecting suppliers; establishing, maintaining, and executing agreements; accepting delivery of acquired products; and transitioning acquired products to the project, among other items. For this process area, we did not perform an appraisal for the VSCS or ITWS projects, because these projects were at stages in which supplier agreement management was not applicable. As shown in figure 16, both of the remaining FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. One of the two projects satisfied all criteria for the “managed” capability level (level 2). In not consistently managing this process, FAA is exposed to increased risk that projects will not be performed in accordance with contractual requirements, resulting in projects that will not meet cost, schedule, or performance goals, and systems that will not meet mission needs.
Looked at another way, of the 34 practices we evaluated in this process area, FAA projects had 33 practices that were fully or largely implemented and 1 practice that was partially implemented. Table 110 provides an overview of the appraisal results. Additional details on each project’s appraisal results at successive capability levels are provided in tables 111 through 114. Specifically, tables 111 and 112 provide results for ERAM, and tables 113 and 114 provide results for ASDE-X. The purpose of the deployment, transition, and disposal process area is to place a product or service into an operational environment, transfer it to the customer and to the support organization, and deactivate and dispose of the replaced product or dispense with the service. This process area includes the design and coordination of plans and procedures for placement of a product or service into an operational or support environment and bringing it into operational use. It ensures that an effective support capability is in place to manage, maintain, and modify the supplied product or service. It further ensures the successful transfer of the product or service to the customer/stakeholder and the deactivation and disposition of the replaced capability. For this process area, we did not perform an appraisal for the VSCS or ERAM projects, because these projects were at stages in which deployment was not applicable. As shown in figure 17, both of the remaining FAA projects satisfied all criteria for the “performing” capability level (level 1) in this process area. Neither satisfied all criteria for the “managed” capability level (level 2). As shown in the overview in table 115, while the projects had differing weaknesses that contributed to this result, a common weakness across projects occurred in the area of monitoring and controlling the deployment process.
As a result of this weakness, FAA is exposed to increased risk that the projects will not be delivered on time, resulting in projects that will not meet cost, schedule, or performance goals. Looked at another way, of the 32 practices we evaluated in this process area, FAA projects had 28 practices that were fully or largely implemented and 4 practices that were partially implemented. Additional details on each project’s appraisal results at successive capability levels are provided in tables 116 through 119. Specifically, tables 116 and 117 provide results for ITWS, and tables 118 and 119 provide results for ASDE-X. Since our 1997 report, the Federal Aviation Administration’s (FAA) process improvement initiative has grown tremendously in rigor and scope. In our earlier appraisal, we found that FAA’s performance of key processes was ad hoc and sometimes chaotic, whereas current results show that FAA projects are performing most key practices. However, these process improvement activities are not required throughout the air traffic organizations, and the recurring weaknesses we identified in our project-specific evaluations are due in part to the choices these projects were given in deciding whether and how to adopt process improvement initiatives. Further, because of a recent reorganization, the new Air Traffic Organization’s commitment to this process improvement initiative is not certain. As a result, FAA is not consistent in its adoption and management of process improvement efforts, and individual projects’ costs, schedules, and performance remain at risk. Without agencywide adoption of process improvement initiatives, the agency cannot increase the maturity of its organizational capabilities. Over the past several years, FAA has made considerable progress in improving its processes for acquiring and developing software and systems.
Acting on our prior recommendations, in 1999, FAA established a centralized process improvement office that reports directly to the Chief Information Officer. This office led the government in an effort to integrate various standards and models into a single maturity model, called the integrated Capability Maturity Model (iCMM). In fact, FAA’s iCMM served as a demonstration for the Software Engineering Institute’s effort to integrate various models into its own Capability Maturity Model Integration (CMMI). The Chief Information Officer’s process improvement office also developed and sponsored iCMM-related training, and by late 2003, it had trained over 7,000 participants. The training offered ranges from overviews on how to use the model to more focused courses in such specific process areas as quality assurance, configuration management, and project management. The office also guides FAA organizations in using the model and leads appraisal teams in evaluating the process maturity of the projects and organizations that adopted the model. In addition to the Chief Information Officer–sponsored process improvement efforts, several of FAA’s business areas, including the business areas with responsibility for air traffic control (ATC) system acquisitions and operations, endorsed and set goals for process improvement activities using the iCMM. As a result, there has been a continuing growth over the years in the number of individual projects and umbrella organizations that adopted process improvement and the iCMM model. Specifically, the number of projects and organizations (which account for multiple projects) undergoing iCMM appraisals grew from 1 project in 1997, to 28 projects and 3 organizations by 2000, to 39 projects and 11 organizations by 2003. These projects and organizations have demonstrated improvements in process maturity. 
Under the iCMM model, in addition to achieving capability levels in individual process areas, entities can achieve successive maturity levels by demonstrating capabilities in a core set of process areas. FAA process improvement officials reported that by 2000, 10 projects and one organization had achieved iCMM maturity level 2. To date, 14 projects and three organizations have achieved iCMM maturity level 2, and one project and two organizations have achieved iCMM maturity level 3. Additionally, 13 projects and four organizations achieved capability levels 2 or 3 in one or more process areas. Moreover, in internal surveys, the programs and organizations pursuing process improvement have consistently reported enhanced productivity, higher quality, increased ability to predict schedules and resources, higher morale, and better communication and teamwork. These findings are reiterated by the Software Engineering Institute in its recent study of the benefits of using the CMMI model for process improvement. According to that study, organizations that implement such process improvements can achieve better project cost and schedule performance and higher quality products. Specifically, of the 12 cases that the Software Engineering Institute assessed, there were nine examples of cost-related benefits, including reductions in the cost to find and fix a defect as well as overall cost savings; eight cases of schedule-related benefits, including decreased time needed to complete tasks and increased predictability in meeting schedules; five cases of measurable improvements in quality, mostly related to reducing defects over time; three cases of improvements in customer satisfaction; and three cases showing positive return on investment from their CMMI-based process improvements.
Leading organizations have found that in order to achieve advanced system management capabilities and to gain the benefits of more mature processes, an organization needs to institutionalize process improvement. Specifically, to be effective, an organization needs senior-level endorsement of its process improvement initiatives and consistency in the adoption and management of process improvement efforts. In recent years, FAA’s ATC-related organizations have encouraged process improvement through the iCMM model. Specifically, FAA’s acquisition policy calls for continuous process improvement and endorses the use of the iCMM model. Also, the former air traffic organizations set annual goals for improving maturity using the iCMM model in selected projects and process areas. For example, in 1997, the former ATC acquisition organization set a goal of having 11 selected projects achieve iCMM maturity level 2 by 1999 and maturity level 3 by 2001. While the projects did not meet the 1999 goal, several projects achieved level 2 in 2000, and most made improvements in selected process areas. However, FAA did not institutionalize the use of the iCMM model throughout the organization and, as a result, individual projects’ use and application of the model have been voluntary. Individual project teams could determine whether or not they would implement the model and which process areas to work on. In addition, project teams could decide when, if ever, to seek an appraisal of their progress in implementing the model. Because of this voluntary approach, to date fewer than half of the projects listed in FAA’s system architecture have sought appraisals in at least one process area. Specifically, of the 48 systems listed in FAA’s system architecture, only 18 have sought appraisals. Some of the mission critical systems that have not sought appraisals include an advanced radar system and an air traffic information processing system.
Another result of this voluntary approach is that individual projects are making uneven progress in core areas. For example, the four projects that we appraised ranged from capability levels 0 to 2 in the risk management process area: in other words, projects varied from performing only part of the basic process, to performing the basic process, to actively managing the process. As another example, all four of the projects we appraised captured some metrics on their performance. However, these metrics varied greatly from project to project in depth, scope, and usefulness. Individual weaknesses in key processes could lead to systems that do not meet the users’ needs, exceed estimated costs, or take longer than expected to complete. While FAA encouraged process improvement in the past, the agency’s current commitment to process improvement in its new Air Traffic Organization is not certain. FAA recently moved its air traffic–related organizations into a single, performance-based organization, the Air Traffic Organization, under the direction of a Chief Operating Officer. The Chief Operating Officer is currently reevaluating all policies and processes, and plans to issue new acquisition guidance in coming months. As a result, the Air Traffic Organization does not currently have a policy that requires organizations and project teams to implement process improvement initiatives such as the iCMM. It also does not have a detailed plan—including goals, metrics, and milestones—for implementing these initiatives throughout the organization, nor does it have a mechanism for enforcing compliance with any requirements—such as taking a project’s capability levels into consideration before approving new investments. Further, because the Air Traffic Organization’s commitment to the iCMM is not yet certain, FAA’s centralized process improvement organization is unable to define a strategy for improving and overseeing process improvement efforts in the Air Traffic Organization.
Unless the Chief Operating Officer demonstrates a strong commitment to process improvement and establishes a consistent, institutionalized approach to implementing, enforcing, and evaluating this process improvement, FAA risks taking a major step backwards in its capabilities for acquiring ATC systems and software. That is, FAA may not be able to ensure that critical projects will continue to make progress in improving systems acquisition and development capabilities, and the agency is not likely to proceed to the more advanced capability levels, which focus on organizationwide management of processes. Further, FAA may miss out on the benefits that process improvement models offer, such as better managed projects and improved product quality. Should this occur, FAA will continue to be vulnerable to project management problems, including cost overruns, schedule delays, and performance shortfalls. The Federal Aviation Administration (FAA) has made considerable progress in implementing processes for managing software acquisitions. Key projects are performing most of the practices needed to reach a basic level of capability in process areas including risk management, project planning, project monitoring and control, and configuration management. However, recurring weaknesses in the areas of verification, quality assurance, and measurement and analysis prevented the projects from achieving a basic level of performance in these areas and from effectively managing these and other process areas. These weaknesses could lead to systems that do not meet the users’ needs, exceed estimated costs, or take longer than expected to complete. Further, because of the recurring weaknesses in measurement and analysis, senior executives may not receive the project status information they need to make sound decisions on major project investments. FAA’s process improvement initiative has matured in recent years, but more can be done to institutionalize improvement efforts.
The Chief Information Officer’s centralized process improvement organization has developed an integrated Capability Maturity Model (iCMM) and demonstrated improvements in those using the model, but to date the agency has not ensured that projects and organizational units consistently adopt such process improvements. Specifically, the agency lacks a detailed plan—including goals, metrics, and milestones—for implementing these initiatives throughout the new Air Traffic Organization, and a mechanism for enforcing compliance with any requirements—such as taking a project’s capability level into consideration before approving new investments. With the recent move of FAA’s air traffic control–related organizations into a performance-based organization, the agency has an opportunity to reiterate the value of process improvement and to achieve the benefits of more mature processes. In the coming months, it will be critical for this new organization to demonstrate its commitment to process improvement through its policies, plans, goals, oversight, and enforcement mechanisms. Without such endorsement, the progress that FAA has made in recent years could dissipate. Given the importance of software-intensive systems to FAA’s air traffic control modernization program, we recommend that the Secretary of Transportation direct the FAA Administrator to ensure that the following five actions take place. The four projects that we appraised should take action to fully implement the practices that we identified as not implemented or partially implemented. The new Air Traffic Organization should establish a policy requiring organizations and project teams to implement iCMM or equivalent process improvement initiatives, as well as a plan for implementing such initiatives throughout the organization. This plan should specify a core set of process areas for all projects, clear criteria for when appraisals are warranted, and measurable goals and time frames.
The Chief Information Officer’s process improvement office, in consultation with the Air Traffic Organization, should develop a strategy for overseeing all air traffic projects’ progress to successive levels of maturity; this strategy should specify measurable goals and time frames. To enforce process improvement initiatives, FAA investment decision makers should take a project’s capability level in core process areas into consideration before approving new investments in the project. In its oral comments on a draft of this report, Department of Transportation and FAA officials generally concurred with our recommendations, and they indicated that FAA is pleased with the significant progress that it has achieved in improving the processes used to acquire software and systems. Further, these officials noted that FAA has already started implementing changes to address issues identified in the report. They said that progress is evident in both the improved scores, compared with our prior study, and also in the way FAA functions on a day-to-day basis. For example, these officials explained that FAA is now working better as a team because the organization is using cross-organizational teams that effectively share knowledge and best practices for systems acquisition and management. FAA officials also noted that the constructive exchange of information with us was very helpful to them in achieving progress, and they emphasized their desire to maintain a dialog with us to facilitate continued progress. Agency officials also provided technical corrections, which we have incorporated into this report as appropriate.
Since 1981, the Federal Aviation Administration (FAA) has been working to modernize its aging air traffic control (ATC) system. Individual projects have suffered cost increases, schedule delays, and performance shortfalls of large proportions, leading GAO to designate the program a high-risk information technology initiative in 1995. Because the program remains a high-risk initiative, GAO was requested to assess FAA's progress in several information technology management areas. This report, one in a series responding to that request, has two objectives: (1) to evaluate FAA's capabilities for developing and acquiring software and systems on its ATC modernization program and (2) to assess the actions FAA has under way to improve these capabilities. FAA has made progress in improving its capabilities for acquiring software-intensive systems, but some areas still need improvement. GAO had previously reported in 1997 that FAA's processes for acquiring software were ad hoc and sometimes chaotic. Focusing on four mission-critical air traffic projects, GAO's current review assessed system and software management practices in numerous key areas such as project planning, risk management, and requirements development. GAO found that these projects were generally performing most of the desired practices: of the 900 individual practices evaluated, 83 percent were largely or fully implemented. The projects were generally strong in several areas such as project planning, requirements management, and identifying technical solutions. However, there were recurring weaknesses in the areas of measurement and analysis, quality assurance, and verification. These weaknesses hinder FAA from consistently and effectively managing its mission-critical systems and increase the risk of cost overruns, schedule delays, and performance shortfalls. To improve its software and system management capabilities, FAA has undertaken a rigorous process improvement initiative. 
In response to earlier GAO recommendations, in 1999, FAA established a centralized process improvement office, which has worked to help FAA organizations and projects to improve processes through the use of a standard model, the integrated Capability Maturity Model. This broad model, which integrates multiple maturity models, is used to assess the maturity of FAA's software and systems capabilities. The projects that have adopted the model have demonstrated growth in the maturity of their processes, and adoption has increased over time. However, the agency does not require the use of this process improvement method. To date, less than half of FAA's major ATC projects have used this method, and the recurring weaknesses we identified in our project-specific evaluations are due in part to the choices these projects were given in deciding whether and how to adopt this process improvement initiative. Further, as a result of reorganizing its ATC organizations into a performance-based organization, FAA is reconsidering prior policies, and it is not yet clear that process improvement will continue to be a priority. Without a strong senior-level commitment to process improvement and a consistent, institutionalized approach to implementing and evaluating it, FAA cannot ensure that key projects will continue to improve systems acquisition and development capabilities. As a result, FAA will continue to risk the project management problems--including cost overruns, schedule delays, and performance shortfalls--that have plagued past acquisitions.
Actions can also include compliance and monitoring, such as reviewing disclosures by exporters of possible export control violations, prelicense checks, and postshipment verifications. See GAO, Export Controls: Post-Shipment Verification Provides Limited Assurance That Dual-use Items Are Being Properly Used, GAO-04-357 (Washington, D.C.: Jan. 12, 2004); and Defense Trade: Arms Export Control System in the Post 9/11 Environment, GAO-05-234 (Washington, D.C.: Feb. 16, 2005). Several agencies have authorities to investigate and take punitive action against potential violators of U.S. export control laws. These authorities provide the Federal Bureau of Investigation (FBI) and Immigration and Customs Enforcement (ICE) with overlapping jurisdiction to investigate potential violations involving defense items, and FBI, ICE, and Commerce’s Office of Export Enforcement (OEE) with overlapping jurisdiction to investigate potential violations involving dual-use items. Inspections of items scheduled for export are routinely conducted at U.S. air, sea, and land ports as part of U.S. Customs and Border Protection (CBP) officers’ responsibilities for enforcing U.S. import and export control laws and regulations at our nation’s ports of entry. CBP’s enforcement activities include inspection of outbound cargo through a risk-based approach using CBP’s automated targeting systems to assess the risk of each shipment, review and validation of documentation presented for licensable items, detention of questionable shipments, and seizure of shipments and issuance of monetary penalties for items that are found to be in violation of U.S. export control laws. According to CBP officials, almost 3 million shipments per month are exported from the United States. Investigations of potential violations of export control laws for dual-use items are conducted by agents from OEE, ICE, and FBI. 
Investigations of potential export control violations involving defense items are conducted by ICE and FBI agents. OEE and ICE are authorized to investigate potential violations of dual-use items. ICE is also authorized to investigate potential violations of defense items. The FBI has authority to investigate any criminal violation of law not exclusively assigned to another agency, and is mandated to investigate and oversee export control violations with a counterintelligence concern. The investigative agencies have various tools for investigating potential violations (see table 2) and establishing cases for potential criminal or administrative punitive actions. Punitive actions, which are either criminal or administrative, are taken against violators of export control laws and regulations, and may involve U.S. or foreign individuals and companies. Criminal violations are those cases where the evidence shows that the exporter willfully violated export control laws. U.S. Attorneys’ Offices prosecute export control enforcement criminal cases in consultation with Justice’s National Security Division. These cases can result in imprisonment, fines, forfeitures, and other penalties. Punitive actions for administrative violations can include fines, suspension of an export license, or denial or debarment from exporting, and are imposed primarily by State or Commerce, depending on whether the violation involves the export of a defense or a dual-use item. For example, Commerce can impose the administrative sanction of placing parties acting contrary to the national security or foreign policy interests of the United States on a list that prevents their receipt of items subject to Commerce controls. The Treasury’s Office of Foreign Assets Control (OFAC) administers and enforces economic sanctions programs primarily against countries and groups of individuals, such as terrorists and narcotics traffickers. 
The sanctions can be either comprehensive or selective, using the blocking of assets and trade restrictions to accomplish foreign policy and national security goals. In some cases, both criminal and administrative penalties can be levied against an export control violator. In fiscal year 2010, Justice data showed that 56 individuals or companies were convicted of criminal violations of export control laws, and Commerce reported more than $25.4 million in administrative fines and penalties for that fiscal year. In 2011, over a third of the major U.S. export control enforcement and embargo-related criminal prosecutions involved the illegal transfer of U.S. military, nuclear, or technical data to Iran and China. Agencies use some form of a risk-based approach when allocating resources to export control enforcement, as their missions are broader than export controls. As agencies can use these resources for other activities based on need, tracking resources used solely on export control enforcement activities is difficult. Only OEE allocates all of its resources exclusively to export control enforcement, as that is its primary mission, and State and the Treasury have relatively few export control enforcement staff to track. Agencies’ risk-based resource allocation approach incorporates a variety of information, including workload and threat assessment data, but has not generally included data on resources used for export control enforcement activities because agencies did not implement systems to fully track this information until recently. Given the overlapping jurisdiction of several enforcement agencies, in some cities agencies have voluntarily created local task forces that bring together enforcement resources to work collectively on cases—informally leveraging resources. 
Agencies determine their missions based on statutes, policy, and directives, and articulate their fundamental mission in their strategic plans. Based on our review of these documents as well as discussions with senior agency officials, agencies with primary export control enforcement responsibility have multiple missions that extend beyond export controls, as shown in table 3, except for OEE. As such, these agencies are faced with balancing multiple priorities when allocating staff resources. OEE’s sole mission is export control enforcement, and as such, it is the only agency that has been able to fully track the resources used on these activities. To formulate its budget and allocate its investigators, OEE conducts threat assessments with a priority related to weapons of mass destruction, terrorism, and unauthorized military use; and analyzes export control enforcement case workload, including the prior year’s investigative statistics of arrests, indictments, and convictions. OEE also recently completed a field office expansion study to decide which cities would be the best locations for additional OEE field offices. In this study, OEE considered the volume of licensed and unlicensed exports and the type of high-tech items exported from different areas of the United States, and concluded that Atlanta, GA; Cincinnati, OH; Phoenix, AZ; and Portland, OR, were optimal locations, but has not received budget approval for expansion. CBP reemphasized outbound operations in the creation of its Outbound Enforcement Division in March 2009 to help prevent terrorist groups, rogue nations, and other criminal organizations from obtaining defense and dual-use commodities; enforce sanctions and trade embargoes; and increase exporter compliance. 
CBP determines the number of staff to allocate to outbound inspections through a risk-based approach based on prior workload and a quarterly threat matrix—which includes the volume of outbound cargo and passengers, port threat assessments, and the numbers and types of seizures and arrests at the ports for items such as firearms and currency. As of fiscal year 2010, CBP had allocated approximately 660 officers for outbound enforcement activities, but these officers can be used for activities other than export control at any time, when needed. For example, the Port of Baltimore has officers assigned to perform outbound activities at both the airport and seaport, some of whom focus on the enforcement of controlled shipments in the seaport environment. According to the Port Director, any of these officers can be redirected at any time and often are assigned to the airport during busy airline arrival times to perform inbound inspection duties, based on priorities. Further, CBP does not track the hours that its officers across the country spend on export control enforcement activities, but is in the process of implementing a system to do so. CBP officials stated that determining the right mix of officers is complex and that changes to its tracking system should allow for better planning and accounting for resources used for outbound activities in the future. ICE’s Homeland Security Investigations, Counter-Proliferation Investigations Unit focuses on preventing sensitive U.S. technologies and weapons from reaching the hands of adversaries and conducts export control investigations. To determine how many investigators it should allocate to this unit, ICE uses information including operational threat assessments and case data from the previous year, by field office, on total numbers of arrests, indictments, convictions, seizures, and investigative hours expended on export control investigations. 
For example, it assigns a tier level for each of its 70 field offices, based on threat assessments—ranging from 1 for the highest threat, resulting in a larger number of agents assigned to these offices; to 5 for the lowest threat, with a lower number of agents assigned. To further prioritize resources, in 2010, ICE established Counter Proliferation Investigations Centers in selected cities throughout the United States, with staff focused solely on combating illegal exports and illicit procurement networks seeking to acquire vital U.S. technology. ICE concluded that it needed to form these centers to combat the specialized nature of complex export control cases and determined that its previous method of distributing resources needed refinement, noting that some ICE field office managers had difficulty in balancing numerous competing programmatic priorities and initiatives. According to ICE officials, they plan to mitigate these concerns by having staff and facilities focused solely on export control enforcement cases, which will allow ICE to track and use this information to better determine future resource needs. The FBI, with both an investigative and intelligence mission, does not allocate resources solely for export control enforcement and officials told us they view these activities as a tool to gain intelligence that may lead to more robust cases. Nevertheless, cases involving export controls are primarily led by agents within the Counterintelligence Division. To determine the number of agents to allocate to this division, the FBI uses a risk management process and threat assessments. Several years ago, the FBI established at least one Counterintelligence squad in each of its 56 field offices. In July 2011, the FBI established a Counterproliferation Center, merging its Counterintelligence Division and its Weapons of Mass Destruction Directorate to better focus their efforts and resources. 
The FBI is in the process of implementing new codes within its resource tracking system to obtain better information on agents’ distribution of work, which will include time spent on investigations of defense and dual-use items. U.S. Attorneys’ Offices have discretion to determine the resources that they will allocate to export control enforcement cases, based on national priorities and the individual priorities of the 94 districts. These priorities include law enforcement concerns for their district and leads from investigative agencies. In response to the risk associated with national security, which includes export control enforcement cases, staffing for national security activities has increased and several districts have created national security sections within their office. In 2008, the Executive Office for U.S. Attorneys provided codes for charging time and labeling cases to obtain better information on the U.S. Attorneys’ Office distribution of work and those resources used for export control enforcement. However, some Assistant U.S. Attorneys told us that the time-keeping system is complicated as there are multiple codes and sub-categories in the tracking system and determining the correct codes is often subjective, making it difficult to track time spent on export control enforcement cases. Senior agency officials acknowledged this concern and are working with the U.S. Attorneys’ Offices to provide better guidance to improve the accuracy of attorney time charges. Other offices, such as State’s Office of the Legal Adviser for Political-Military Affairs and Commerce’s Office of the Chief Counsel for the Bureau of Industry and Security, assist the enforcement agencies by providing legal support. For example, Commerce’s Office of the Chief Counsel pursues administrative enforcement actions against individuals and entities, but also reviews and advises on OEE recommendations for other administrative actions, such as temporary denials of licenses. 
In addition, DDTC and OFAC pursue administrative enforcement actions against violators. For example, OFAC administers and enforces U.S. economic and trade sanctions against designated foreign countries. While not all of the staff in these offices are allocated to export control enforcement, these offices have relatively few staff to track. In addition to a domestic presence, most export control enforcement agencies also allocate resources overseas, but only Commerce allocates resources exclusively to export control enforcement. For example, Commerce maintains Export Control Officers in six locations abroad: Beijing and Hong Kong, China; Abu Dhabi, UAE; New Delhi, India; Moscow, Russia; and Singapore, to support its dual-use export control enforcement activities. Given that these officers have regional responsibilities, they cover additional locations. For example, the Export Control Officer assigned to Singapore also covers Malaysia and Indonesia. While other agencies have field locations in many overseas locations, these resources are to support the agencies’ broader missions and can be used for other duties based on the overseas mission priorities. For example, ICE has 70 offices in 47 foreign countries with more than 380 government and contract personnel that support all ICE enforcement activities, including export control. They can also be called upon to support various other DHS mission priorities. Specifically, the ICE agents we met with at the U.S. Embassy in Abu Dhabi also conduct activities in support of the full DHS mission, and a great portion of their time is spent on visa security and a lesser amount on export control enforcement activities. The export control enforcement investigative agencies often have offices located in the same cities or geographic areas. 
In many of these cities, agencies’ officials said that they informally leverage each other’s tools, authorities, and resources to coordinate investigations and share intelligence through local task forces, allowing them to use resources more efficiently and avoid duplicating efforts or interfering with each other’s cases. In 2007, Justice’s National Export Enforcement Initiative encouraged local field offices with a significant export control threat to create task forces or other alternatives to coordinate enforcement efforts in their area. Since then, almost 20 U.S. Attorneys’ Offices have created task forces on their own initiative or in conjunction with another enforcement agency, primarily in cities where these agencies are co-located, to facilitate the investigation and prosecution of export control cases. Figure 1 shows the location of investigative agencies’ major field offices, as well as the location of export control enforcement task forces. Most of the task force members we met with in Baltimore, Los Angeles, and San Francisco stated that they see benefits beyond the coordination of cases, including investigating cases together and sharing resources. Baltimore’s Counterproliferation Task Force: ICE and the U.S. Attorneys’ Office created this Task Force in 2010, and it has representatives from each of the enforcement agencies located in the area, as well as the defense and intelligence communities. Task force officials stated that they develop and investigate export control cases together and, to enhance interagency collaboration, ICE has supplied work space, allowing agents from other agencies to work side-by-side to pursue leads and conduct investigations. Officials emphasized that the task force enables smaller agencies with fewer resources to leverage the work and expertise of the others to further their investigations and seek prosecutions. 
Task force structures can sometimes achieve results that individual agencies cannot reach on their own, as exemplified by the Baltimore Counterproliferation Task Force. Among its successes was the case of a Maryland man sentenced to 8 months in prison followed by 3 years of supervised release for illegally exporting export-controlled night vision equipment. Los Angeles’ Export and Anti-proliferation Global Law Enforcement (EAGLE) Task Force: The U.S. Attorney established this Task Force in 2008 as a result of Justice’s counter-proliferation initiatives. Its purpose is to coordinate and develop expertise in export control investigations. Currently, there are over 80 members from 17 Los Angeles-based federal agencies. According to a task force official, the EAGLE task force has resulted in increased priority on export control investigations and improved interagency cooperation since it was established. For instance, the enforcement agencies are now more effectively sharing information in their respective databases. A task force official noted that enhanced access to these databases allows agencies to reduce duplication of license determination requests and to easily retrieve information on a particular person’s or commodity’s history using the search options. Additionally, through the task force structure, ICE and OEE agents have worked together to conduct additional outreach to industry affiliates. San Francisco’s Strategic Technology Task Force: According to officials, this task force was formed by the FBI in 2004, with a primary focus on conducting joint export control outreach activities to academia and industry with the other investigative agencies (ICE and OEE). This task force also includes participation by the military service intelligence units and other law enforcement agencies. FBI task force leaders stated that this task force has helped to coordinate outreach activities as well as to generate investigative leads. 
According to an agent from the FBI’s San Jose field office, that office has a performance goal to conduct 90 percent of their export control-related investigations jointly with investigative agencies at ICE and Commerce. Although successful cases of joint collaboration among agencies can yield positive enforcement outcomes, as reported by the offices in the three cities we visited, the extent to which these alliances are effective is primarily dependent on personal dynamics of a given region, agency, and law enforcement culture. In addition, these local agency task forces for export control enforcement vary in structure, are voluntary, and do not exist nationwide. For example, while multiple investigative agencies have local offices in Chicago and Dallas with export control enforcement agents, agencies do not have a local task force in these cities to regularly coordinate on export control cases. While agency officials shared examples of agencies informally leveraging each other’s resources, officials told us that they do not factor in such resources when planning their own agency allocations for a variety of reasons, including each agency’s separate budgets and missions, which do not generally consider those of other agencies. Enforcement agencies face several challenges in investigating illicit transshipments, both domestically and overseas—including license determination delays; limited access in some overseas locations; and a lack of effectiveness measures that reflect the complexity and qualitative benefits of export control cases. Recognizing broader challenges in export control enforcement, the President announced the creation of a national export enforcement coordination center, which may help agencies address some of the challenges described below, but detailed plans to do so have yet to be developed. 
The current export control enforcement system poses several challenges that potentially reduce the effectiveness of activities and limit the identification and investigation of illicit transshipments. Export control enforcement agencies seek to keep defense and dual-use items from being illegally exported through intermediary countries or locations to an unauthorized final destination, such as Iran, but agencies face challenges that can impact their ability to investigate export control violations, both domestically and overseas. First, license determinations—which confirm whether an item is controlled and requires a license, and thereby help confirm whether an export control violation has occurred—can sometimes be delayed, potentially hindering investigations and prosecutions. Second, investigators have limited access to secure communications and cleared staff in several domestic field offices, which can limit their ability to share timely and important information. Third, agencies have limited access to ports and facilities overseas. Fourth, agencies lack consistent data to quantify and identify trends and patterns in illicit transshipments of U.S. export-controlled items. Lastly, investigative agencies lack measures of effectiveness that fully reflect the complexity and qualitative benefits of export control cases. License Determination Delays. To confirm whether a defense or dual-use item is controlled and requires a license, inspectors, investigators, and prosecutors request license determinations from the licensing agencies of State and Commerce. These license determinations are integral to enforcement agencies’ ability to seize items, pursue investigations, or seek prosecutions. 
DHS’s Exodus Command Center operates the Exodus Accountability Referral System—an ICE database that initiates, tracks, and manages enforcement agency requests for license determinations from the licensing agencies. The system identifies three different levels of license determinations: initial (to seize an item or begin an investigation), pre-trial (to obtain a search warrant, among other things), and trial (to be used during trial proceedings). The Exodus Command Center has established internal timeliness goals for receiving responses to requests for initial determinations within 3 days; pre-trial certifications within 45 days; and trial certifications within 30 days. However, as shown in table 5, these goals are often not met, which can create barriers for enforcement agencies in seizing shipments before they depart the United States, obtaining search warrants, and making timely arrests. Given the wide-ranging mission of most of the agencies involved in export control enforcement, it is essential that agencies track resources expended on export control inspections, investigations, and prosecutions to assess how these resources are contributing to fulfilling their missions and are focused on the highest priorities in export control enforcement. While agencies, such as DHS and Justice, have recognized the need to better track their resources, a more comprehensive approach, including enhanced measures of effectiveness, could help these and other enforcement agencies assess workload and efficiency in making resource allocations and in determining whether changes are warranted. The creation of the Export Enforcement Coordination Center presents such an opportunity for the entire export control enforcement community. The center has the potential to become more than a co-location of enforcement agencies; it can be a conduit to more effectively manage export control resources. 
As the center’s operation progresses, it has the opportunity to address ongoing challenges in export control enforcement, including reducing potential overlap in investigations, and help agencies to work as efficiently as possible, maximize available intelligence and agency investigative data, and measure the effectiveness of U.S. export control enforcement activities. Challenges presented by delays in license determinations can affect the inspection, investigation, and prosecution of export control cases but may be outside of the mission of the center since they primarily involve the licensing agencies. Having goals for processing license determinations can help establish transparency and accountability in the process. Given that the licensing agencies and the Exodus Command Center have not agreed to timeliness goals for responding to such requests, these agencies may benefit from collaborating to help improve the effectiveness of the process. To better inform management and resource allocation decisions, effectively manage limited export control enforcement resources, and improve the license determination process, we are making the following four recommendations: We recommend that the Secretary of Homeland Security and the Attorney General, as they implement efforts to track resources expended on export control enforcement activities, use such data to make resource allocation decisions. We recommend that the Secretaries of Commerce and Homeland Security as they develop and implement qualitative measures of effectiveness, ensure that these assess progress towards their overall goal of preventing or deterring illegal exports. 
We recommend that the Secretary of Homeland Security, in consultation with the departmental representatives of the Export Enforcement Coordination Center, including Commerce, Justice, State, and the Treasury leverage export control enforcement resources across agencies by building on existing agency efforts to track resources expended, as well as existing agency coordination at the local level; establish procedures to facilitate data sharing between the enforcement agencies and intelligence community to measure illicit transshipment activity; and develop qualitative and quantitative measures of effectiveness for the entire enforcement community to baseline and trend this data. We recommend that the Secretaries of Commerce and State, in consultation with the Secretary of Homeland Security, the Attorney General, and other agencies as appropriate, establish agreed upon timeliness goals for responding to license determination requests considering agency resources, the level of determination, the complexity of the request, and other associated factors. We provided a draft copy of this report to Commerce, DHS, DOD, Justice, State, and Treasury for their review and comment. Commerce, DHS, Justice, and State concurred with the report’s recommendations and, along with DOD, provided technical comments which we incorporated as appropriate. Treasury did not provide any comments on the report. As multiple agencies have responsibilities for export control enforcement, several of our recommendations call for these agencies to work together to effectively manage limited export control enforcement resources and to improve the license determination process. In their comments, Commerce and State agreed to work in consultation with DHS and Justice to establish timeliness goals for license determinations. 
In its comments, DHS stated its intent to work with the other agencies to improve the license determination process as well as take steps to deploy its resources in the most effective and efficient manner and provided target dates for completing these actions. In particular, DHS noted that ongoing tracking efforts by CBP and ICE will be used to improve their knowledge of resources expended on export control enforcement activities and that they will periodically review this information to determine the overall direction of the export control program. Additionally, DHS stated its intent to establish a working group with other agencies to develop performance measures related to export control enforcement to help estimate the effectiveness of all associated law enforcement activity. Written comments from Commerce, DHS, and State are reprinted in appendixes II, III, and IV, respectively. We are sending copies of this report to interested congressional committees, as well as the Secretaries of Commerce, Defense, Homeland Security, State, and Treasury as well as the Attorney General. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To determine how agencies allocate staff resources for export control enforcement activities, we interviewed cognizant officials and examined relevant documents such as agencies’ budgets, strategic plans, memorandums, and other documentation on resources. We interviewed officials about their resources at the headquarters of Commerce, DHS, Justice, State, and the Treasury.
We also discussed with DOD officials their role in providing investigative support to agencies responsible for export control enforcement. We developed and used a set of structured questions to interview each agency’s resource planners to determine how they allocate resources, what information and factors they consider in resource allocation decisions, what their enforcement priorities are, whether they track resources expended on enforcement, if they had conducted an analysis of their resource needs, and if they consider or leverage other agencies’ resources. We obtained applicable criteria including the Office of Management and Budget Circular A-11 and departmental guidance on resource allocation and tracking. We also reviewed previous GAO and inspector general reports regarding the Government Performance and Results Act (GPRA), as amended, and resource management for enforcement programs. To determine current resource levels, we obtained geographic locations of all domestic staff conducting export control enforcement, actual expenditures on export control enforcement activities, and information on staffing levels from each agency for fiscal years 2006 through 2010. We did not independently verify the accuracy of agency information on expenditures and staffing levels obtained, but we corroborated this information with cognizant agency officials. We considered agencies’ overall resources for the broad enforcement authorities and the resources allocated to export control enforcement specifically. Finally, we analyzed agencies’ budget requests, expenditures, and staff hours to determine agencies’ current resource commitment and how agencies have allocated resources to export control enforcement activities.
To determine challenges that agencies face in investigating illicit transshipments and the potential impact of export control reform initiatives on enforcement activities, we interviewed cognizant officials, examined and analyzed relevant export control documents and statutes, and conducted site visits both domestically and overseas. We interviewed officials about their enforcement priorities at the headquarters of Commerce, DHS, Justice, and State. We also discussed with DOD officials their role in providing license determination support to agencies responsible for export control enforcement. We developed and used a set of structured questions to interview enforcement agency officials in selected domestic and overseas locations and observed export enforcement operations at those locations that had air, land, and seaports. We selected sites to visit based on various factors, including geographical areas where all enforcement agencies were represented with a large percentage of investigative caseload; areas with a mix of defense and high-tech companies represented; ports with a high volume of trade of U.S. commodities; areas with a large presence of aerospace, electronics, and software industries; and headquarters officials’ recommendations on key areas of export control enforcement activities both domestically and abroad. On the basis of these factors, we visited Irvine, Long Beach, Los Angeles, Oakland, San Francisco, and San Jose, CA; Washington, D.C.; and Baltimore, MD domestically. Internationally, we interviewed United States Embassy and Consulate officials and host government authorities in Hong Kong, Singapore, and in Abu Dhabi and Dubai in the United Arab Emirates (UAE). We received briefings on the export control systems from the Hong Kong Government’s Trade and Industry Department and Customs and Excise Tax Department and from Singapore’s Ministry of Foreign Affairs and Immigration and Customs Authority, and we toured ports at these locations.
We also received a briefing from the Hong Kong Customs Airport Command on air cargo and air-to-air transshipment of strategic commodities and visited the DHL Hub at the Hong Kong International Airport. In the UAE, we visited the Government of Sharjah, Department of Seaports & Customs, Hamriyah Free Zone Authority and met with the Director and Security and Safety Manager to discuss the Hamriyah Free Zone. We reviewed the findings and recommendations of past GAO reports and documentation from enforcement agencies, and we interviewed U.S. government officials from these agencies as well as their field offices. We also met with several agency representatives of the Export Control Reform Task Force and reviewed recent White House press releases on the export reform initiatives. Further, we examined Federal Register notices on changing regulations related to the export control reform initiative. We conducted this performance audit from February 2011 through March 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Belva Martin, (202) 512-4841 or [email protected]. In addition to the contact names above, John Neumann, Assistant Director; Lisa Gardner; Desiree Cunningham; Jungjin Park; Marie Ahearn; Roxanna Sun; Robert Swierczek; and Hai Tran made key contributions to this report.
The U.S. government controls the export of sensitive defense and dual-use items (having both military and commercial use). The five agencies primarily responsible for export control enforcement—the Departments of Commerce, Homeland Security (DHS), Justice, State and the Treasury—conduct inspections and investigations, and can levy punitive actions against violators. A challenging aspect of export control enforcement is the detection of illicit transshipments—the transfer of items from place of origin through an intermediary country to an unauthorized destination, such as Iran. In 2010, the President announced reforms to the U.S. export control system to address weaknesses found by GAO and others. GAO was asked to address how the export control enforcement agencies allocate resources, as well as the challenges they face and the potential impact of export control reform on enforcement activities. GAO reviewed documents and met with enforcement agency officials as well as with U.S. and foreign government and company officials in Hong Kong, Singapore, and the United Arab Emirates, which have a high volume of trade and have been identified as potential hubs for illicit transshipments. Agencies use a risk-based approach, including workload and threat assessment data, to allocate resources, but most do not fully track those used for export control enforcement activities. As their missions are broader than export controls, agencies can use staff resources for other activities based on need, making tracking resources used solely for export control enforcement difficult. Only Commerce’s Office of Export Enforcement allocates its resources exclusively to export control enforcement as that is its primary mission. Other agencies, such as State and the Treasury, have relatively few export control enforcement staff to track. 
While several agencies acknowledge the need to better track export enforcement resources and have taken steps to do so, they do not know the full extent of their use of these resources and do not use this information in resource allocation decisions. In some cities, agencies are informally leveraging export enforcement resources through voluntarily created local task forces that bring together enforcement resources to work collectively on export control cases. Enforcement agencies face several challenges in investigating illicit transshipments, both domestically and overseas, which potentially reduce the effectiveness of enforcement activities and limit the identification and investigation of illicit transshipments. These include:

License Determination Delays. License determinations—which confirm whether an item is controlled and requires a license, and thereby help confirm whether an export control violation has occurred—are often not timely, potentially hindering investigations and prosecutions.

Limited Secure Communications and Cleared Staff. Investigators have limited access to secure communications and staff with high-level security clearances in several domestic field offices, limiting investigators’ ability to share timely and important information.

Lack of Trend Data on Illicit Transshipments. While there is a good exchange of intelligence between enforcement agencies and the intelligence community—to seize shipments and take other actions against export control violators—officials noted that no formal process or means existed for these groups to collectively quantify and identify statistical trends and patterns relating to information on illicit transshipments.

Lack of Effectiveness Measures Unique to the Complexity of Export Controls. Investigative agencies lack measures of effectiveness that fully reflect the complexity and qualitative benefits of export control cases.
Some of these challenges may be addressed by ongoing export control reform initiatives, but reform presents both opportunities and challenges. Revising the control list could simplify the license determination process, but could also result in the need for increased enforcement activity overseas to validate the recipient of the items as fewer items may require U.S. government approval in advance of shipment. As most staff located overseas have other agency and mission-related priorities, their availability may be limited. The newly created national Export Enforcement Coordination Center is intended to help agencies coordinate their export control enforcement efforts as well as share intelligence and law enforcement information related to these efforts. However, it is unclear whether the center will address all of the challenges GAO found, as detailed plans for its operations are under development. GAO recommends that Commerce, DHS, Justice, and State take steps individually and with other agencies through the national Export Enforcement Coordination Center to better manage export control enforcement resources and improve the license determination process. Agencies agreed with GAO’s recommendations.
Under A-76, commercial activities may be converted to or from contractor performance either by direct conversion or by cost comparison. Under direct conversion, specific conditions allow commercial activities to be moved from government or contract performance without a cost comparison study (for example, for activities involving 10 or fewer civilians). Generally, however, commercial functions are to be converted to or from contract performance by cost comparison, whereby the estimated cost of government performance of a commercial activity is compared to the cost of contractor performance in accordance with the principles and procedures set forth in Circular A-76 and the supplemental handbook. As part of this process, the government identifies the work to be performed (described in the performance work statement), prepares an in-house cost estimate based on its most efficient organization, and compares it with the winning offer from the private sector. According to A-76 guidance, an activity currently performed in house is converted to performance by the private sector if the private offer is either 10 percent lower than the direct personnel costs of the in-house cost estimate or $10 million less (over the performance period) than the in- house cost estimate. OMB established this minimum cost differential to ensure that the government would not convert performance for marginal savings. The handbook also provides an administrative appeals process. An eligible appellant must submit an appeal to the agency in writing within 20 days of the date that all supporting documentation is made publicly available. Appeals are supposed to be adjudicated within 30 days after they are received. Under current law, private sector offerors who believe that the agency has not complied with applicable procedures have additional avenues of appeal.
Specifically, they may file a bid protest with the General Accounting Office or file an action in a court of competent jurisdiction. Circular A-76 requires agencies to maintain annual inventories of commercial activities performed in house. A similar requirement was included in the 1998 Federal Activities Inventory Reform (FAIR) Act, which directs agencies to develop annual inventories of their positions that are not inherently governmental. The fiscal year 2000 inventory identified approximately 850,000 full-time equivalent commercial-type positions, of which approximately 450,000 were in DOD. OMB has recently indicated that it intends to expand its emphasis on A-76 governmentwide. In a March 9, 2001, memorandum to the heads and acting heads of departments and agencies, the OMB Deputy Director directed agencies to take action in fiscal year 2002 to directly convert or complete public/private competitions of not less than 5 percent of the full-time equivalent positions listed in their FAIR Act inventories. In 1999, DOD began to augment its A-76 program with what it terms strategic sourcing. Strategic sourcing may encompass consolidation, restructuring or reengineering activities, privatization, joint ventures with the private sector, or the termination of obsolete services. Strategic sourcing can involve functions or activities, regardless of whether they are considered inherently governmental, military essential, or commercial. I should add that these actions are recognized in the introduction to the A-76 handbook as being part of a larger body of options, in addition to A-76, that agencies must consider as they contemplate reinventing government operations. 
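As a rough illustration, the minimum cost differential described earlier—that an in-house activity converts to private sector performance only if the private offer undercuts the in-house estimate by at least 10 percent of direct personnel costs or by $10 million over the performance period—can be sketched as follows. All function and variable names are invented for this sketch; the actual A-76 cost comparison involves many additional adjustments.

```python
def convert_to_contract(in_house_cost, direct_personnel_cost, private_offer):
    """Sketch of the A-76 minimum cost differential decision rule.

    Convert to contract performance only if the private offer beats the
    in-house cost estimate by at least 10 percent of direct personnel
    costs or by $10 million over the performance period. Hypothetical
    helper, not an official OMB formula.
    """
    savings = in_house_cost - private_offer
    return savings >= 0.10 * direct_personnel_cost or savings >= 10_000_000


# A $46M private offer against a $50M in-house estimate with $30M in
# direct personnel costs clears the 10 percent threshold ($3M), so the
# activity would convert; a $48M offer ($2M differential) would not.
```

Either threshold suffices on its own, which is why the two conditions are joined with `or`: a very large activity can convert on the $10 million differential even when that differential is below 10 percent of personnel costs.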
Strategic sourcing initially does not involve A-76 competitions between the public and the private sectors, and the Office of the Secretary of Defense and service officials have stressed that strategic sourcing may provide smarter decisions because it determines whether an activity should be performed before deciding who should perform it. However, these officials also emphasized that strategic sourcing is not intended to take the place of A-76 studies and that positions examined under the broader umbrella of strategic sourcing may be subsequently considered for study under A-76. DOD has been the leader among federal agencies in emphasizing A-76 studies. DOD’s use of A-76 waned from the late 1980s to the mid-1990s, then grew substantially in 1995 before falling again from 1999 to the present. DOD is currently emphasizing a combination of A-76 and strategic sourcing. Available information indicates that A-76 studies in civilian agencies have been minimal, compared with those carried out in DOD. Unfortunately, no central database exists to provide information on the actual number of studies undertaken. From the late 1970s through the mid-1990s, DOD activities studied approximately 90,000 positions under A-76. However, program controversy and administrative and legislative constraints caused a drop in program emphasis from the late 1980s through 1995. In August 1995, the Deputy Secretary of Defense gave renewed emphasis to the A-76 program when he directed the services to make outsourcing of support activities a priority in an effort to reduce operating costs and free up funds to meet other priority needs. The effort was subsequently incorporated as a major initiative under the then-Secretary’s Defense Reform Initiative, and the program became known as competitive sourcing—in recognition of the fact that either the public or the private sector could win competitions.
The number of positions planned for study and the time frames for accomplishing those studies have changed over time in response to difficulties in identifying activities to be studied. In 1997, DOD’s plans called for about 171,000 positions to be studied by the end of fiscal year 2003. In February 1999, we reported that DOD had increased this number to 229,000 but then found it reduced the number of positions to be studied in the initial years of the program. In August 2000, DOD decreased the total number of positions to be studied under A-76 to about 203,000, added about 42,000 Navy positions for consideration under strategic sourcing, and extended the program to fiscal year 2005. The introduction of strategic sourcing came about as the Navy—which was having difficulty identifying sufficient numbers of positions for study—sought and obtained approval to use this broader approach to help meet its A-76 study goals. In March 2001, DOD officials announced that they had again reduced the number of positions to be studied under A-76 to about 160,000 but increased the number of strategic sourcing positions to 120,000. DOD’s latest targets include strategic sourcing study goals for each of the military services. Tables 1 and 2 show the number of positions Defense components planned to study under A-76 and strategic sourcing as of March 2001. DOD’s data shown above show fewer positions planned to be studied under both A-76 and strategic sourcing in the out-years compared to those projected before 2001. To what extent these numbers will change on the basis of recent program direction from OMB for an expanded A-76 program emphasis is yet to be determined. As these numbers changed, so did savings targets. In 1999, for example, DOD projected that its A-76 program would produce $6 billion in cumulative savings from fiscal year 1997 to 2003 and $2.3 billion in net savings each year thereafter. 
In 2000, DOD projected savings of about $9.2 billion in 1997-2005, with recurring annual net savings of almost $2.8 billion thereafter. Additional savings were to come from strategic sourcing, which was expected to produce nearly $2.5 billion in cumulative savings by 2005 and recurring annual savings of $0.7 billion thereafter. Together, A-76 and strategic sourcing are expected to produce estimated cumulative savings of almost $11.7 billion, with about $3.5 billion in recurring annual net savings. More recent savings estimates have not yet been made available. Most importantly, these projected savings have become more than ambitious goals: when it developed its fiscal year 2000 budget, DOD reprogrammed about $11.2 billion of these anticipated savings into its modernization accounts, spread over the future years’ planning period. Our work has consistently shown that while savings are being achieved by DOD’s A-76 program, it is difficult to determine precisely the magnitude of net savings. Furthermore, savings may be limited in the short term because up-front investment costs associated with conducting and implementing the studies must be absorbed before long-term savings begin to accrue. Several of our reports in recent years have highlighted these issues. We reported in March 2001 that A-76 competitions had reduced estimated costs of Defense activities primarily by reducing the number of positions needed to perform those activities under study. This is true regardless of whether the government’s in-house organization or the private sector wins the competition. Both government and private sector officials with experience in such studies have stated that, in order to be successful in an A-76 competition, they must seek to reduce the number of positions required to perform the function being studied. Related actions may include restructuring and reclassifying positions and using multiskill and multirole employees to complete required tasks.
In December 2000, we reported on compliance with a congressional requirement that DOD report specific information on all instances since 1995 in which DOD missions or functions were reviewed under OMB Circular A-76. For the 286 studies for which it had complete information, the Department’s July 2000 report to the Congress largely complied with the reporting requirement. We noted that DOD had reported cost reductions of about 39 percent, yielding an estimated $290 million savings in fiscal year 1999. We also agreed that individual A-76 studies were producing savings but stressed that savings are difficult to quantify precisely for a number of reasons:

Because of an initial lack of DOD guidance on calculating costs, baseline costs were sometimes calculated on the basis of average salaries and authorized personnel levels rather than on actual numbers.

DOD’s savings estimates did not take into consideration the costs of conducting the studies and implementing the results, which of course must be offset before net savings begin to accrue.

There were significant limitations in the database DOD used to calculate savings.

Savings become more difficult to assess over time as workload requirements change, affecting program costs and the baseline from which savings were initially calculated.

Our August 2000 report assessed the extent to which there were cost savings from nine A-76 studies conducted by DOD activities. The data showed that DOD realized savings from seven of the cases, but less than the $290 million that Defense components had initially projected. Each of the cases presented unique circumstances that limited our ability to precisely calculate savings—some suggested lower savings, others higher savings than initially identified. In two cases, DOD components had included cost reductions unrelated to the A-76 studies as part of their projected savings.
Additionally, baseline cost estimates used to project savings were usually calculated using an average cost of salary and benefits for the number of authorized positions, rather than the actual costs of the positions. The latter calculation would have been more precise. In four of the nine cases, actual personnel levels were less than authorized. While most baseline cost estimates were based largely on personnel costs, up to 15 percent of the costs associated with the government’s most efficient organizations’ plans or the contractors’ offers were not personnel costs. Because these types of costs were not included in the baseline, a comparison of the baseline with the government’s most efficient organization or contractor costs may have resulted in understating cost savings. On the other hand, savings estimates did not reflect study and implementation costs, which reduced savings in the short term. DOD has begun efforts to revise its information systems to better track the estimated and actual costs of activities studied but not to revise previous savings estimates. DOD is also emphasizing the development of standardized baseline cost data to determine initial savings estimates. In practice, however, many of the cost elements that are used in A-76 studies will continue to be estimated because DOD lacks a cost accounting system to measure actual costs. Further, reported savings from A-76 studies will continue to have some element of uncertainty and imprecision and will be difficult to track in the out-years because workload requirements change, affecting program costs and the baseline from which savings are calculated. Given that the Department has reduced operating budgets on the basis of projected savings from A-76 studies, it is important that it have as much and as accurate information as possible on savings, including information on adjustments for up-front investment costs and other changes that may occur over time. 
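The baseline problems described above can be made concrete with a small, purely hypothetical sketch: a baseline built from authorized positions at an average cost overstates gross savings relative to one built from actual staffing, and one-time study and implementation costs must be offset before net savings accrue. All figures and names below are invented for illustration; this is not a DOD or GAO formula.

```python
def net_savings(baseline_cost, new_annual_cost, one_time_costs=0, years=1):
    """Gross savings over a period, less one-time study/transition costs.

    Hypothetical helper illustrating why savings estimates that omit
    study and implementation costs overstate near-term savings.
    """
    return (baseline_cost - new_annual_cost) * years - one_time_costs


# Baseline from authorized positions at an average cost vs. actual staffing
# (all numbers invented):
authorized_positions, actual_positions, avg_cost = 100, 85, 60_000
inflated_baseline = authorized_positions * avg_cost  # 6,000,000
actual_baseline = actual_positions * avg_cost        # 5,100,000
new_cost = 4_500_000

# The inflated baseline suggests $1.5M in first-year savings; a baseline
# built on actual staffing, with $200,000 in study costs offset, yields
# only $400,000.
overstated = net_savings(inflated_baseline, new_cost)
realistic = net_savings(actual_baseline, new_cost, one_time_costs=200_000)
```

The gap between the two results mirrors the report's point: the choice of baseline and the treatment of up-front costs can swing a reported savings figure severalfold, which is why standardized baseline data and adjustments for investment costs matter.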
In monitoring DOD’s progress in implementing the A-76 program, we have reported on a number of issues that should be considered when expanding emphasis on the A-76 process, either in DOD or at other government agencies. These issues include (1) the time required to complete studies, (2) the costs and other resources needed to conduct and implement studies, (3) the difficulties involved in selecting functions to compete, and (4) the timing of budget reductions in anticipation of projected savings. This last issue is fundamental and is directly affected by the first three. Individual A-76 studies have taken longer than initially projected. In launching its A-76 program, some DOD components made overly optimistic assumptions about the amount of time needed to complete the competitions. For example, the Army projected that it would take 13-21 months to complete studies, depending on their size. The Navy initially projected completing its studies in 12 months. The numbers were subsequently adjusted upward, and the most recent available data indicate that studies take about 24 months for single-function and 27 months for multifunction studies. Once DOD components found that the studies were taking longer than initially projected, they realized that a greater investment of resources would be needed than originally planned to conduct the studies. In August 2000, we reported that DOD had increased its study cost estimates considerably since the previous year and had given greater recognition to the costs of implementing the results of A-76 studies. But we expressed concern that the Department was, in some instances, still likely underestimating those costs. The 2001 President’s budget showed a wide range of projected study costs, from about $1,300 per position studied in the Army to about $3,700 in the Navy. The Army, the Navy, and the Air Force provide their subcomponents $2,000 per position studied.
Yet various officials believe these figures underestimate the costs of performing the studies. Officials at one Army major command estimated that their study costs would be at least $7,000 per position. One Navy command estimated its costs at between $8,500 and $9,500 per position. Our own assessment of a sample of completed A-76 studies within the Army, the Navy, the Air Force, and Defense agencies showed that study costs ranged from an average of $364 to $9,000 per position. In addition to study costs, significant costs can be incurred in implementing the results of the competitions. Transition costs include the separation costs for civilian Defense employees who lose their jobs as a result of competitions won by the private sector or when in-house organizations require a smaller civilian workforce. Such separation costs include the costs of voluntary early retirement, voluntary separation incentives, and involuntary separations through reduction-in-force procedures. The President’s Budget for Fiscal Year 2001 included for the first time all Defense components’ estimated costs of implementing A-76 competitions and showed a total of about $1 billion in transition costs resulting from A-76 studies for fiscal years 1997-2005. Selecting and grouping functions and positions to compete can be difficult. Because most services faced growing difficulties in or resistance to finding enough study candidates to meet their A-76 study goals, DOD approved strategic sourcing as a way to complement its A-76 program. The Navy, for instance, had planned to announce 15,000 positions for study under A-76 in fiscal year 1998 but announced only 8,980 (about 60 percent). The following year it planned to announce 20,000 positions but announced 10,807 (about 54 percent).
Although DOD’s FAIR Act inventory in 2000 identified commercial functions involving about 450,000 civilian positions, including about 260,000 associated with functions considered potentially eligible for competition, DOD does not expect to study all these functions. It remains to be seen to what extent the Department will significantly increase the number of functions it studies under A-76 in the near future. Department officials told us that the process identified few new functions and associated positions that could be studied under A-76 and that the increases in positions identified did not automatically translate into potentially large numbers of additional studies. The number of positions that will actually be studied for possible competition may be limited by a number of factors, including the following:

Some activities are widely dispersed geographically. Having positions associated with commercial activities that are scattered over many locations may prevent some of them from being grouped for competition.

Some work categorized as commercial may not be separated from inherently governmental or exempted work. In some cases, commercial activities classified as subject to competition are in activities that also contain work that is inherently governmental or exempt from competition, and the commercial workload may not always be separable from the workload performed by the exempted positions.

Resources to conduct A-76 studies are limited. Officials of several military service commands have told us that they already have aggressive competition programs under way and that they lack sufficient resources and staff to conduct more competition studies in the near future.

Even before it developed its FAIR Act inventory, DOD had already established goals for positions that the services and the Defense agencies should study and the savings to be achieved.
For the most part, the services and Defense agencies delegated to their components responsibility for determining which functions to study. DOD then fell behind in its initial timetable for initiating and completing A-76 studies. Service officials told us that they had already identified as many competition opportunities as they could to meet savings goals under the A-76 program, and they believed that their capacity to conduct studies beyond those already underway or planned over the next few years was limited. Difficulties encountered in identifying A-76 study candidates, and in launching and completing the studies in the time frames initially projected, along with greater than expected costs associated with completing the studies, have led to concerns among various service officials about their ability to meet previously established savings targets. Some Defense officials have also voiced uncertainties over cost estimates and savings associated with strategic sourcing and the lack of a rigorous basis for projecting savings from this effort. Data included in the President’s fiscal year 2001 budget submission indicated that the Navy estimated that study costs and savings generated by strategic sourcing efforts would be virtually the same as those generated by A-76 studies for each position studied. Office of the Secretary of Defense officials have noted there is a wide variation in the types of initiatives that make up strategic sourcing and, consequently, that there can be wide variation in the resultant savings. These uncertainties led us to previously recommend that DOD periodically determine whether savings are being realized in line with the reductions in operating accounts that are based on projected savings. Increasing emphasis on A-76 has served to underscore concerns expressed by both government employees and industry about the process. 
Federal managers and others have been concerned about organizational turbulence that typically follows the announcement of A-76 studies. Government workers have been concerned about the impact of competition on their jobs, their opportunity for input into the competitive process, and the lack of parity with industry offerors to appeal A-76 decisions. Industry representatives have complained about the fairness of the process and the lack of a “level playing field” between the government and the private sector in accounting for costs. It appears that everyone involved is concerned about the time required to complete the studies. Amid these concerns over the A-76 process, the Congress enacted section 832 of the National Defense Authorization Act for Fiscal Year 2001. The legislation required the Comptroller General to convene a panel of experts to study the policies and procedures governing the transfer of commercial activities for the federal government from government personnel to a federal contractor. The Panel, which Comptroller General David Walker has elected to chair, includes senior officials from DOD, private industry, federal labor organizations, and OMB. Among other issues, the Panel will be reviewing the A-76 process and implementation of the FAIR Act. The Panel had its first meeting on May 8, 2001, and its first public hearing on June 11. At the hearing, over 40 individuals representing a wide spectrum of perspectives presented their views. The Panel currently plans to hold two additional hearings, on August 8 in Indianapolis, Indiana, and on August 15 in San Antonio, Texas. The hearing in San Antonio will specifically address OMB Circular A-76, focusing on what works and what does not in the use of that process. The hearing in Indianapolis will explore various alternatives to the use of A-76 in making sourcing decisions at the federal, state, and local levels. 
The Panel is required to report its findings and recommendations to the Congress by May 1, 2002.

This concludes my statement. I would be pleased to answer any questions you or other members of the Subcommittee may have at this time.

For further contacts regarding this statement, please contact Barry W. Holman at (202) 512-8412 or Marilyn Wasleski at (202) 512-8436. Individuals making key contributions to this statement include Debra McKinney, Stefano Petrucci, Thaddeus Rytel, Nancy Lively, Bill Woods, John Brosnan, and Stephanie May.

DOD Competitive Sourcing: Effects of A-76 Studies on Federal Employees’ Employment, Pay, and Benefits Vary (GAO-01-388, Mar. 16, 2001).
DOD Competitive Sourcing: Results of A-76 Studies Over the Past 5 Years (GAO-01-20, Dec. 7, 2000).
DOD Competitive Sourcing: More Consistency Needed in Identifying Commercial Activities (GAO/NSIAD-00-198, Aug. 11, 2000).
DOD Competitive Sourcing: Savings Are Occurring, but Actions Are Needed to Improve Accuracy of Savings Estimates (GAO/NSIAD-00-107, Aug. 8, 2000).
DOD Competitive Sourcing: Some Progress, but Continuing Challenges Remain in Meeting Program Goals (GAO/NSIAD-00-106, Aug. 8, 2000).
Competitive Contracting: The Understandability of FAIR Act Inventories Was Limited (GAO/GGD-00-68, Apr. 14, 2000).
DOD Competitive Sourcing: Potential Impact on Emergency Response Operations at Chemical Storage Facilities Is Minimal (GAO/NSIAD-00-88, Mar. 28, 2000).
DOD Competitive Sourcing: Plan Needed to Mitigate Risks in Army Logistics Modernization Program (GAO/NSIAD-00-19, Oct. 4, 1999).
DOD Competitive Sourcing: Air Force Reserve Command A-76 Competitions (GAO/NSIAD-99-235R, Sept. 13, 1999).
DOD Competitive Sourcing: Lessons Learned System Could Enhance A-76 Study Process (GAO/NSIAD-99-152, July 21, 1999).
Defense Reform Initiative: Organization, Status, and Challenges (GAO/NSIAD-99-87, Apr. 21, 1999).
Quadrennial Defense Review: Status of Efforts to Implement Personnel Reductions in the Army Materiel Command (GAO/NSIAD-99-123, Mar. 31, 1999).
Defense Reform Initiative: Progress, Opportunities, and Challenges (GAO/T-NSIAD-99-95, Mar. 2, 1999).
Force Structure: A-76 Not Applicable to Air Force 38th Engineering Installation Wing Plan (GAO/NSIAD-99-73, Feb. 26, 1999).
Future Years Defense Program: How Savings From Reform Initiatives Affect DOD’s 1999-2003 Program (GAO/NSIAD-99-66, Feb. 25, 1999).
DOD Competitive Sourcing: Results of Recent Competitions (GAO/NSIAD-99-44, Feb. 23, 1999).
DOD Competitive Sourcing: Questions About Goals, Pace, and Risks of Key Reform Initiative (GAO/NSIAD-99-46, Feb. 22, 1999).
OMB Circular A-76: Oversight and Implementation Issues (GAO/T-GGD-98-146, June 4, 1998).
Quadrennial Defense Review: Some Personnel Cuts and Associated Savings May Not Be Achieved (GAO/NSIAD-98-100, Apr. 30, 1998).
Competitive Contracting: Information Related to the Redrafts of the Freedom From Government Competition Act (GAO/GGD/NSIAD-98-167R, Apr. 27, 1998).
Defense Outsourcing: Impact on Navy Sea-Shore Rotations (GAO/NSIAD-98-107, Apr. 21, 1998).
Defense Infrastructure: Challenges Facing DOD in Implementing Defense Reform Initiatives (GAO/T-NSIAD-98-115, Mar. 18, 1998).
Defense Management: Challenges Facing DOD in Implementing Defense Reform Initiatives (GAO/T-NSIAD/AIMD-98-122, Mar. 13, 1998).
Base Operations: DOD’s Use of Single Contracts for Multiple Support Services (GAO/NSIAD-98-82, Feb. 27, 1998).
Defense Outsourcing: Better Data Needed to Support Overhead Rates for A-76 Studies (GAO/NSIAD-98-62, Feb. 27, 1998).
Outsourcing DOD Logistics: Savings Achievable But Defense Science Board’s Projections Are Overstated (GAO/NSIAD-98-48, Dec. 8, 1997).
Financial Management: Outsourcing of Finance and Accounting Functions (GAO/AIMD/NSIAD-98-43, Oct. 17, 1997).
This testimony discusses the Department of Defense's (DOD) use of the Office of Management and Budget's Circular A-76, which establishes federal policy for the performance of recurring commercial activities. DOD has been a leader among federal agencies in the use of the A-76 process and at one point planned to use the process to study more than 200,000 positions over several years. However, the number of positions planned for study has changed over time, and the Department recently augmented its A-76 program with what it terms strategic sourcing. DOD has saved money through the A-76 process primarily by reducing the number of in-house positions. Yet GAO has repeatedly found that it is extremely difficult to measure the precise amount of savings because available data have been limited and inconsistent. The lessons learned from DOD's A-76 program include the following: (1) studies have generally taken longer than initially expected, (2) studies have generally required higher costs and resources than initially projected, (3) finding and selecting functions to compete can be difficult, and (4) making premature budget cuts on the assumption of projected savings can be risky. Both government groups and the private sector have expressed concerns about the fairness, adequacy, costs, and timeliness of the A-76 process.
CPP was the primary initiative under TARP for stabilizing the financial markets and banking system. Treasury created the program in October 2008 to stabilize the financial system by providing capital on a voluntary basis to qualifying regulated financial institutions through the purchase of senior preferred shares and subordinated debt. On October 14, 2008, Treasury allocated $250 billion of the $700 billion in overall TARP funds for CPP but adjusted its allocation to $218 billion in March 2009 to reflect lower estimated funding needs based on actual participation and the expectation that institutions would repay their investments. The program was closed to new investments on December 31, 2009, and, in total, Treasury invested $205 billion in 707 financial institutions over the life of the program. Through June 30, 2010, 83 institutions had repaid about $147 billion in CPP investments, including 76 institutions that repaid their investments in full. Under CPP, qualified financial institutions were eligible to receive an investment of between 1 and 3 percent of their risk-weighted assets, up to a maximum of $25 billion. In exchange for the investment, Treasury generally received shares of senior preferred stock that were due to pay dividends at a rate of 5 percent annually for the first 5 years and 9 percent annually thereafter. In addition to the dividend payments, EESA required the inclusion of warrants to purchase shares of common stock or preferred stock, or a senior debt instrument to give taxpayers additional protection against losses and an additional potential return on the investments. Institutions are allowed to repay CPP investments with the approval of their primary federal bank regulators and afterward to repurchase warrants at fair market value.
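The investment-sizing and dividend terms described above can be illustrated with a minimal arithmetic sketch. This is not drawn from Treasury's systems; the function names are illustrative, and integer dollar amounts are used for exactness.

```python
def cpp_investment_range(risk_weighted_assets):
    """Eligible CPP investment per the terms above: between 1 and 3
    percent of risk-weighted assets, capped at $25 billion."""
    cap = 25_000_000_000
    low = risk_weighted_assets // 100            # 1 percent
    high = min(3 * risk_weighted_assets // 100, cap)  # 3 percent, capped
    return low, high

def annual_dividend(investment, year):
    """Senior preferred dividend: 5 percent annually for the first
    5 years, 9 percent annually thereafter."""
    rate_pct = 5 if year <= 5 else 9
    return investment * rate_pct // 100
```

For example, a bank with $500 million in risk-weighted assets would have been eligible for an investment between $5 million and $15 million; a $10 million investment would owe $500,000 per year in dividends for the first 5 years and $900,000 annually thereafter.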
While this was Treasury’s program, the federal bank regulators played a key role in the CPP application and approval process. The federal banking agencies that were responsible for receiving and reviewing CPP applications and recommending approval or denial were the Federal Reserve, which supervises and regulates banks authorized to do business under state charters and that are members of the Federal Reserve System, as well as bank and financial holding companies; FDIC, which provides primary federal oversight of any state-chartered banks insured by FDIC that are not members of the Federal Reserve System; OCC, which is responsible for chartering, regulating, and supervising commercial banks with national charters; and OTS, which charters federal savings associations (thrifts) and regulates and supervises federal and state thrifts and savings and loan holding companies. Treasury, in consultation with the federal banking regulators, developed a standardized framework for processing applications and disbursing CPP funds. Treasury encouraged financial institutions that were considering applying to CPP to consult with their primary federal bank regulators. The bank regulators also had an extensive role in reviewing the applications of financial institutions applying for CPP and making recommendations to Treasury. Eligibility for CPP funds was based on the regulator’s assessment of the applicant’s strength and viability, as measured by factors such as examination ratings, financial performance ratios, and other mitigating factors, without taking into account the potential impact of TARP funds. Institutions deemed to be the strongest, such as those with the highest examination ratings, received presumptive approval from the banking regulators, and their applications were forwarded to Treasury. 
Institutions with lower examination ratings or other concerns that required further review were referred to the interagency CPP Council, which was composed of representatives from the four banking regulators, with Treasury officials as observers. The CPP Council evaluated and voted on the applicants, and applications from institutions that received “approval” recommendations from a majority of the regulatory representatives were forwarded to Treasury. Treasury provided guidance to regulators and the CPP Council to use in assessing applicants that permitted consideration of factors such as signed merger agreements or confirmed investments of private capital, among other things, to offset low examination ratings or other weak attributes. Finally, institutions that the banking regulators determined to be the weakest and ineligible for a CPP investment, such as those with the lowest examination ratings, were to receive a presumptive denial recommendation. Figure 1 provides an overview of the process for assessing and approving CPP applications. The banking regulator or the CPP Council sent approval recommendations to Treasury’s Investment Committee, which comprised three to five senior Treasury officials, including OFS’s chief investment officer (who served as the committee chair) and the assistant secretaries for financial markets, economic policy, financial institutions, and financial stability at Treasury. After receiving recommended applications from regulators or the CPP Council, OFS reviewed documentation supporting the regulators’ recommendations but often collected additional information from regulators and the council before submitting applications to the Investment Committee. The Investment Committee could also request additional analysis or information in order to clear any concerns before deciding on an applicant’s eligibility. 
After completing its review, the Investment Committee made recommendations to the Assistant Secretary for Financial Stability for final approval. Once the Investment Committee recommended preliminary approval, Treasury and the approved institution initiated the closing process to complete the legal aspects of the investment and disburse the CPP funds. At the time of the program’s announced establishment, nine major financial institutions were initially included in CPP. While these institutions did not follow the application process that was ultimately developed, Treasury included these institutions because federal banking regulators and Treasury considered them to be essential to the operation of the financial system, which at the time had effectively ceased to function. At the time, these nine institutions held about 55 percent of U.S. banking assets and provided a variety of services, including retail and wholesale banking, investment banking, and custodial and processing services. According to Treasury officials, the nine financial institutions agreed to participate in CPP in part to signal the importance of the program to the stability of the financial system. Initially, Treasury approved $125 billion in capital purchases for these institutions and completed the transactions with eight of them on October 28, 2008, for a total of $115 billion. The remaining $10 billion was disbursed after the merger of Bank of America Corporation and Merrill Lynch & Co., Inc., was completed in January 2009. The institutions that received CPP capital investments varied in terms of ownership type, location, and size. The 707 institutions that received CPP investments were split almost evenly between publicly held and privately held institutions, with slightly more private firms. They included state- chartered and national banks and U.S. bank holding companies located in 48 states, the District of Columbia, and Puerto Rico (see fig. 2). 
Most states had fewer than 20 CPP firms, but 13 states had 20 or more. California had the most, with 72, followed by Illinois (45), Missouri (32), North Carolina (31), and Pennsylvania (31). Montana and Vermont were the only 2 states that did not have institutions that participated in CPP. The total amount of CPP funds disbursed to institutions also varied by state. The amount of CPP funds invested in institutions in most states was less than $500 million, but institutions in 17 states received more than $1 billion each. Institutions in states that serve as financial services centers such as New York and North Carolina received the most CPP funds. The median amount of CPP funds invested in institutions by state was $464 million. The size of CPP institutions also varied widely. The risk-weighted assets of firms we reviewed that were funded through April 30, 2009, ranged from $10 million to $1.4 trillion. However, most of the institutions were relatively small. For example, about half of the firms that we reviewed had risk-weighted assets of less than $500 million, and almost 70 percent had less than $1 billion. Only 30 percent were medium to large institutions (more than $1 billion in risk-weighted assets). Because the investment amount was tied to the firm’s risk-weighted assets, the amount that firms received ranged widely, from about $300,000 to $25 billion. The average investment amount for all of the 707 CPP participants was $290 million, although half of the institutions received less than $11 million. The 25 largest institutions received almost 90 percent of the total amount of CPP investments, and 9 of these firms received almost 70 percent of the funds. The characteristics Treasury and regulators used to evaluate applicants indicated that approved institutions had bank or thrift examination ratings that generally were satisfactory, or within CPP guidelines. 
Treasury and regulators used various measures of institutional strength and financial condition to evaluate applicants. These included supervisory examination ratings and financial performance ratios assessing an applicant’s capital adequacy and asset quality. While some examination results were more than a year old, regulatory officials told us that they had taken steps to mitigate the effect of these older ratings, such as collecting updated information. Almost all of the 567 institutions we reviewed had overall examination ratings for their largest bank or thrift that were satisfactory or better (see fig. 3). The CAMELS ratings range from 1 to 5, with 1 indicating a firm that is sound in every respect, 2 denoting an institution that is fundamentally sound, and 3 or above indicating some degree of supervisory concern. Of the CPP firms that we reviewed, 82 percent had an overall rating of 2 from their most recent examination before applying to CPP, and an additional 11 percent had the strongest rating. Seven percent had an overall rating of 3 and no firms had a weaker rating. We also found relatively small differences in overall examination ratings for institutions by size or ownership type. For example, institutions that were above and below the median risk-weighted assets of $472 million both had average overall ratings of about 2. Also, public and private firms both had average overall examination ratings of about 2. Bank or thrift examination ratings for individual components—such as asset quality and liquidity—exhibited similar trends. In particular, each of the individual components had an average rating of around 2. Institutions tended to have weaker ratings for the earnings component, which had an average of 2.2, than for the other components, which averaged between 1.8 and 1.9. 
Public and private institutions exhibited similar results for the average component ratings, although private institutions tended to have stronger ratings on all components except for earnings and sensitivity to market risk. Differences in average ratings by bank size also were small. For example, smaller institutions had stronger average ratings for the capital and asset quality components, but larger institutions had stronger average ratings for earnings and sensitivity to market risk. Holding companies receiving CPP investments typically also had satisfactory or better examination ratings. The Federal Reserve uses its own rating system when evaluating bank holding companies. Almost 80 percent of holding companies receiving CPP funds had an overall rating of 2 (among those with a rating), and an additional 14 percent had an overall rating of 1. The individual component ratings for holding companies (for example, for risk management, financial condition, and impact) also were comparable with overall ratings, with most institutions for which we could find a rating classified as satisfactory or better. Specifically, over 90 percent of the ratings for each of the components were 1 or 2, with most rated 2. Many examination ratings were more than a year old, a fact that could limit the degree to which the ratings accurately reflect the institutions’ financial condition, especially at a time when the economy was deteriorating rapidly. Specifically, about 25 percent of examination ratings were older than 1 year prior to the date of application, and 5 percent were more than 16 months old. On average, examination ratings were about 9 months older than the application date. Regulators used examination ratings as a key measure of an applicant’s financial condition and viability, and the age of these ratings could affect how accurately they reflect the institutions’ current state. 
For example, assets, liabilities, and operating performance generally are affected by the economic environment and depend on many factors, such as institutional risk profiles. Stressed market conditions such as those existing in the broad economy and financial markets during and before CPP implementation could be expected to have negative impacts on many of the applicants, making the age of examination ratings a critical factor in evaluating the institutions’ viability. Further, some case decision files for CPP firms were missing examination dates. Specifically, 104 applicants’ case decision files out of the 567 we reviewed lacked a date for the most recent examination results. Treasury and regulatory officials told us that they took various actions to collect information on applicants’ current condition and to mitigate any limitations of older examination results. Efforts to collect additional information on the financial condition of applicants included waiting for results of scheduled examinations or relying on preliminary CAMELS exam results, reviewing quarterly financial results such as recent information on asset quality, and sometimes conducting brief visits to assess applicants’ condition. Officials from one regulator explained that communication with the agency’s regional examiners and bank management on changes to the firm’s condition was the most important means of allaying concerns about older examination results. However, officials from another regulator stated that they did use older examination ratings, depending on the institution’s business model, lending environment, banking history, and current loan activity. For example, the officials said they would use older ratings if the institution was a small community bank with a history of conservative underwriting standards and was not lending in a volatile real estate market. 
As with the examination ratings, almost all of the institutions we reviewed had a rating for compliance with the Community Reinvestment Act (CRA) of satisfactory or better. Over 80 percent of firms received a satisfactory rating and almost 20 percent had an outstanding rating. Only two institutions had an unsatisfactory rating. Average CRA ratings also were similar across institution types and sizes. Performance ratios for the CPP firms we reviewed varied but typically were well within CPP guidelines. In assessing CPP applicants, Treasury and regulators focused on a variety of ratios based on regulatory capital levels, and institutions generally were well above the minimum required levels for these ratios. Regulators generally used performance ratio information from regulatory filings for the second or third quarters of 2008. Two of these ratios are based on a key type of regulatory capital known as Tier 1, which includes the core capital elements that are considered the most reliable and stable, primarily common stock and certain types of preferred stock. Specifically, for the Tier 1 risk-based capital ratio, banks or thrifts and holding companies had average ratios that were more than double the regulatory minimum of 4 percent with only one firm below that minimum level. Further, only two institutions were below 6.5 percent (see fig. 4). Although almost all firms had Tier 1 risk-based capital ratios that exceeded the minimum level, the ratios ranged widely, from 3 percent to 43 percent. Similarly, banks or thrifts and holding companies had average Tier 1 leverage ratios that were more than double the required 4 percent, and only 3 firms were below 4 percent. The ratios also ranged widely, from 2 percent to 41 percent. Finally, for the total risk-based capital ratio, banks or thrifts and holding companies had average ratios of 12 percent, well above the 8 percent minimum, and only two firms were below 8 percent. These ratios ranged from 4 percent to 44 percent. 
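The capital-ratio minimums cited above (4 percent for the Tier 1 risk-based and Tier 1 leverage ratios, 8 percent for the total risk-based ratio) amount to a simple threshold check. The following sketch is illustrative only; the dictionary keys and function name are assumptions, not terms from the report or the regulators' systems.

```python
# Minimum regulatory capital ratios cited above, in percent.
CAPITAL_MINIMUMS = {
    "tier1_risk_based": 4.0,
    "tier1_leverage": 4.0,
    "total_risk_based": 8.0,
}

def capital_shortfalls(ratios):
    """Return any of an institution's capital ratios (percent) that
    fall below the corresponding regulatory minimum."""
    return {name: ratios[name]
            for name, floor in CAPITAL_MINIMUMS.items()
            if ratios.get(name, 0.0) < floor}
```

Under this check, the typical approved firm described above (ratios more than double the minimums) would produce no shortfalls, while the rare applicant with a 3 percent Tier 1 risk-based ratio would be flagged.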
Asset-based performance ratios for most CPP institutions also generally remained within Treasury’s guidelines, although more firms did not meet the criteria for these ratios than did not meet the criteria for capital ratios. Treasury and the regulators established maximum guideline amounts for the three performance ratios relating to assets that they used to evaluate applicants. These ratios measure the concentration of troubled or risky assets as a share of capital and reserves—classified assets, nonperforming loans (including non-income-generating real estate, which is typically acquired through foreclosure), and construction and development loans. For each of these performance ratios, both the banks or thrifts and holding companies had average ratios that were less than half of the maximum guideline, well within the specified limits. For example, banks/thrifts and holding companies had average ratios of 25 and 32 percent, respectively, for classified assets, which had a maximum guideline of 100 percent. The substantial majority of banks or thrifts and holding companies also were well below the maximum guidelines for the asset ratios. For example, almost 90 percent of banks/thrifts and over 80 percent of holding companies had classified assets ratios below 50 percent. However, while only 3 firms missed the guidelines for any of the capital ratios, 38 banks/thrifts and holding companies missed the nonperforming loan ratio, 8 missed the construction and development loan ratio, and 1 missed the classified assets ratio. A small group of CPP participants exhibited weaker attributes relative to other approved institutions (see table 1). For most of these cases, Treasury or regulators described factors that mitigated the weaknesses and supported the applicant’s viability. 
Specifically, we identified 66 CPP institutions—12 percent of the firms we reviewed—that either (1) did not meet the performance ratio guidelines used to evaluate applicants, (2) had an unsatisfactory overall bank or thrift examination rating, or (3) had a formal enforcement action involving safety and soundness concerns. We use these attributes to identify these 66 firms as marginal institutions, although the presence of these attributes does not necessarily indicate that a firm was not viable or that it was ineligible for CPP participation. However, they generally may indicate firms that either had weaker attributes than other approved firms or required closer evaluation by Treasury and regulators. Nineteen of the institutions met multiple criteria, including those that missed more than one performance ratio for the largest bank/thrift or holding company. The most common criteria for the firms identified as marginal were an unsatisfactory overall examination rating or an unsatisfactory nonperforming loan ratio. A far smaller number of firms exceeded the construction and development loan ratio or had experienced a formal enforcement action related to safety and soundness concerns. One bank and two holding companies missed the capital or classified assets ratios. In their evaluations of CPP applicants, Treasury and regulators documented their reasons for approving institutions with marginal characteristics. They typically identified three types of mitigating factors that supported institutions’ overall viability: (1) the quality of management and business practices; (2) the sufficiency of capital and liquidity; and (3) performance trends, including asset quality. The most frequently cited attributes related to management quality and capital sufficiency. High-quality management and business practices. In evaluating marginal applicants, regulators frequently considered the experience and competency of the applicants’ senior management team.
Officials from one bank regulator said that they might be less skeptical of an applicant’s prospects if they believed it had high-quality management. For example, they used their knowledge of institutions and the quality of their management to mitigate economic concerns for banks in the geographic areas most severely affected by the housing market decline. Commonly identified strengths included the willingness and ability of management to respond quickly to problems and concerns that regulators identified such as poor asset quality or insufficient capital levels. The evaluations of several marginal applicants described management actions to aggressively address asset quality problems as an indication of an institution’s ability to resolve its weaknesses. Regulators also had a positive view of firms whose boards of directors implemented management changes such as replacing key executives or hiring more experienced staff in areas such as credit administration. Finally, regulators evaluated the quality of risk management and lending practices in determining management strength. Capital and liquidity. Regulators often reviewed the applicant’s capital and liquidity when evaluating whether an institution’s weaknesses might affect its viability. In particular, regulators and Treasury considered the sufficiency of capital to absorb losses from bad assets and the ability to raise private capital. As instructed by Treasury guidance, regulators evaluated an institution’s capital levels prior to the addition of any CPP investment. Although an institution might have high levels of nonperforming loans or other problem assets, regulators’ concerns about viability might be eased if it also had a substantial amount of capital available to offset related losses. Likewise, capital from private sources could shore up an institution’s capital buffers and provide a signal to the market that it could access similar sources if necessary. 
When evaluating the sufficiency of a marginal applicant’s capital, regulators also assessed the amount of capital relative to the firm’s risk profile, the quality of the capital, and the firm’s dependence on volatile funding sources. Institutions with a riskier business model that included, for instance, extending high-risk loans or investing in high-risk assets generally would require higher amounts of capital as reserves against losses. Conversely, an institution with a less risky strategy or asset base might need somewhat less capital to be considered viable. Regulators reviewed the quality of a firm’s capital because some forms of capital, such as common shareholder’s equity, can absorb losses more easily than other types, such as subordinated debt or preferred shares, which may have restrictions or limits on their ability to take losses. Finally, regulators considered the nature of a firm’s funding sources. They viewed firms that financed their lending and other operations with stable funding sources, such as core deposit accounts or long-term debt, as less risky than firms that obtained financing through brokered deposits or wholesale funding, which could be more costly or might need to be replaced more frequently. Performance trends. Regulators also examined recent trends in performance when evaluating marginal applicants. For example, regulators considered strong or improving trends in asset quality, earnings, and capital levels, among others, as potentially favorable indicators of viability. These trends included reductions in nonperforming and classified assets, consistent positive earnings, reductions in commercial real estate concentrations, and higher net interest margins and return on assets. In some cases, regulators identified improvements in banks’ performance through preliminary examination ratings. Officials from one bank regulator stated that the agency refrained from making recommendations until it had recent and complete examination data. 
For example, if an examination was scheduled for an applicant that had raised regulatory concerns or questions, the agency would wait for the updated results before completing its review and making a recommendation to Treasury. Regulators and Treasury raised specific questions about the viability of a small number of institutions that ultimately were approved and received their CPP investments between December 19, 2008, and March 27, 2009. Most of the questions about viability involved poor asset quality, such as nonperforming loans or bad investments, and lending that was highly concentrated in specific product types, such as commercial real estate (see table 2). For these institutions, various mitigating factors were used to provide support for the firm’s ultimate approval. For example, regulators and Treasury identified the addition of private capital, strong capital ratios, diversification of lending portfolios, and updated examination results as mitigating factors in approving the institutions. One of these institutions had weaker characteristics than the others, and regulators and Treasury appeared to have more significant concerns about its viability. Ultimately, regulators and the CPP Council recommended approval of this institution based, in part, on criteria in Section 103 of EESA, which requires Treasury to consider providing assistance to financial institutions having certain attributes such as serving low- and moderate-income populations and having assets less than $1 billion. Through July 2010, 4 CPP institutions had failed, but an increasing number of CPP firms have missed their scheduled dividend or interest payments, requested to have their investments restructured by Treasury, or appeared on FDIC’s list of problem banks. First, the number of institutions missing the dividend or interest payments due on their CPP investments has increased steadily, rising from 8 in February 2009 to 123 in August 2010, or 20 percent of existing CPP participants. 
Between February 2009 and August 2010, 144 institutions did not pay at least one dividend or interest payment by the end of the reporting period in which they were due, for a total of 413 missed payments. As of August 31, 2010, 79 institutions had missed three or more payments and 24 had missed five or more. Through August 31, 2010, the total amount of missed dividend and interest payments was $235 million, although some institutions made their payments after the scheduled payment date. Institutions are required to pay dividends only if they declare dividends, although unpaid cumulative dividends accrue and the institution must pay the accrued dividends before making dividend payments to other types of shareholders in the future, such as holders of common stock. Federal and state bank regulators also may prevent their supervised institutions from paying dividends to preserve their capital and promote their safety and soundness. According to the standard terms of CPP, after participants have missed six dividend payments—consecutive or not—Treasury can exercise its right to appoint two members to the board of directors for that institution. In May 2010, the first CPP institution missed six dividend payments, but as of August 2010, Treasury had not exercised its right to appoint members to its board of directors. An additional seven institutions missed their sixth dividend payment in August 2010. Treasury officials told us that they are developing a process for establishing a pool of potential directors that Treasury could appoint on the boards of institutions that missed at least six dividend payments. They added that these potential directors will not be Treasury employees and would be appointed to represent the interests of all shareholders, not just Treasury. 
Treasury officials expect that any appointments will focus on banks with CPP investments of $25 million or greater, but Treasury has not ruled out making appointments for institutions with smaller CPP investments. We will continue to monitor and report on Treasury’s progress in making these appointments in future reports. Although none of the 4 institutions that had failed as of July 31, 2010, were identified as marginal cases, 39 percent of the 66 approved institutions with marginal characteristics have missed at least one CPP dividend payment, compared with 20 percent of CPP participants overall. Through August 2010, 26 of the 144 institutions that had missed at least one dividend payment were institutions identified as marginal. Of these 26 marginal approvals, 20 have missed at least two payments, and 14 have missed at least four. Several of the marginal approvals also have received formal enforcement actions since participating in CPP. As of April 2010, regulators had filed formal actions against nine of the marginal approvals, including four cease-and-desist orders and four written agreements. Seven of these institutions also missed at least one dividend payment. However, none of the approvals identified as marginal had filed for bankruptcy or been placed in FDIC receivership as of July 31, 2010. Second, since June 2009, at least 16 institutions have formally requested that Treasury restructure their CPP investments, and most of the institutions have made their requests in recent months. Specifically, as of July 2010, 9 of the 11 requests received during the year had come in since April. Treasury officials said that institutions have pursued a restructuring primarily to improve the quality of their capital and attract additional capital from other investors. Treasury has completed six of the requested restructurings and entered into agreements with two additional institutions that made requests.
According to officials, Treasury considers multiple factors in determining whether to restructure a CPP investment. These factors include the effect of the proposed capital restructuring on the institution’s Tier 1 and common equity capital and the overall economic impact on the U.S. government’s investment. The terms of the restructuring agreements most frequently involve Treasury exchanging its CPP preferred shares for either mandatory convertible preferred shares—which automatically convert to common shares if certain conditions, such as the completion of a capital-raising plan, are met—or trust preferred securities—which are issued by a separate legal entity established by the CPP institution. Finally, the number of CPP institutions on FDIC’s list of problem banks has increased. As of December 31, 2009, there were 47 CPP firms on the problem list. This number had grown to 71 firms by March 31, 2010, and to 78 as of June 30, 2010. The FDIC tracks banks that it designates as problem institutions based on their composite examination ratings. Institutions designated as problem banks have financial, operational, or managerial weaknesses that threaten their continued viability and include firms with either a 4 or 5 composite rating. Reviews of regulators’ approval recommendations helped ensure consistent evaluations and mitigate the risk posed by Treasury’s limited guidance for assessing applicants’ viability. Reviews of regulators’ recommendations to fund institutions are an important part of CPP’s internal control activities aimed at providing reasonable assurance that the program is performing as intended and accomplishing its goals. The process that Treasury and regulators implemented established centralized control mechanisms to help ensure consistency in the evaluations of approved applicants.
For example, regulators established their own processes for evaluating applicants, but they generally had similar structures including initial contact and review by regional offices followed by additional centralized review at the headquarters office for approved institutions. FDIC, OTS, and the Federal Reserve conducted initial evaluations and prepared the case decision memos at regional offices (or Reserve Banks in the case of the Federal Reserve), while the regulators’ headquarters (or Board of Governors) performed secondary reviews and verification. At OCC, district offices did the initial analysis of applicants and provided a recommendation to headquarters, which prepared the case decision memo using input from the district. All of the regulators also used review panels or officials at headquarters to review the analyses and recommendations before submission to the CPP Council or Treasury. Applicants recommended for approval by regulators also received further evaluation at the CPP Council or Treasury. Regulators sent to the CPP Council applications that they had approved but that had certain characteristics identified by Treasury as warranting further review by the council. These characteristics included indications of relative weakness, such as unsatisfactory examination ratings and performance ratios. At the council, representatives from all four federal bank regulators discussed the viability of applicants and voted on recommending them to Treasury for approval. As Treasury officials explained, the CPP Council was the deliberative forum for addressing concerns about marginal applicants whose eligibility for CPP was unclear. The council’s charter describes its purpose as acting as an advisory body to Treasury for ensuring that CPP guidelines are applied effectively and consistently across bank regulators and applicants. 
By requiring the regulators to reach consensus when recommending applicants whose approval was not straightforward, the CPP Council helped ensure that the final outcome for applicants was informed by multiple bank regulators and generally promoted consistency in decision making. After regulators or the CPP Council submitted a recommendation to Treasury, the applicant received a final round of review by Treasury’s CPP analysts and the Investment Committee. CPP analysts conducted their own reviews of applicants and the case files forwarded from the regulators, including the case decision memos. They collected additional information for their reviews from regulators’ data systems and publicly available sources and also gathered information from regulators to clarify the analysis in the case files. According to Treasury officials, the CPP analysts were experienced bank examiners serving on detail from each of the bank regulators except OCC. Treasury officials explained that CPP analysts did not make decisions about preliminary approvals or preliminary disapprovals. Only the Investment Committee made those decisions. In the final review stage, the Investment Committee evaluated all of the applicants forwarded by regulators or the CPP Council. On the basis of its review of the regulators’ recommendations and analysis and additional information collected by Treasury CPP analysts, the Investment Committee recommended preliminary approval or denial of applicants, subject to the final decision of the Assistant Secretary for Financial Stability. By reviewing and issuing a preliminary decision on all forwarded applicants, the Investment Committee represented another important control, much like the CPP Council. Unlike the CPP Council, however, the Investment Committee deliberated on all applicants referred by regulators rather than just those meeting certain marginal criteria.
The reviews by the CPP Council, analysts at OFS, and the Investment Committee were important steps to limit the risk of inconsistent evaluations by different regulators. This risk stemmed from the limited guidance that Treasury provided to regulators concerning the application review process. Specifically, the formal written guidance that Treasury initially provided to regulators consisted of broad high-level guidance, which was supplemented with other informal guidance to address specific concerns. The written guidance provided by Treasury established the institution’s strength and overall viability as the baseline criteria for the eligibility recommendation. Regulators said that while the guidance was useful in providing a broad framework or starting point for their reviews, they could not determine an applicant’s viability using Treasury’s written guidance alone. Officials from several regulators said that they also relied on regulatory experience and judgment when evaluating CPP applicants and making recommendations to Treasury. Treasury officials told us that they believed they were not in a position to provide more specific guidance to regulators on how to evaluate the viability of the institutions they oversaw. Treasury officials further explained that with many different kinds of institutions and unique considerations, regulators needed to make viability decisions on an individual basis. A 2009 audit by the Federal Reserve’s Inspector General (Fed IG) assessing the Federal Reserve’s process and controls for reviewing CPP applications similarly found that Treasury provided limited guidance in the early stages of the program regarding how to determine applicants’ viability. As a result, the Federal Reserve and other regulators developed their own procedures for analyzing CPP applications. 
The report also found that formal, detailed, and documented procedures would have provided the Federal Reserve with additional assurance that CPP applications would be analyzed consistently and completely. However, the multiple layers of review involving the regulators, the CPP Council, and Treasury staff helped compensate for the risk of inconsistent evaluation of applicants that received recommendations for CPP investments. The Fed IG recommended that the Federal Reserve incorporate lessons learned from the CPP application review process into its process for reviewing repurchase requests. The Federal Reserve generally agreed with the report’s findings and recommendations. As Treasury fully implemented its CPP process, it and the regulators compiled documentation of the analysis supporting their decisions to approve program applicants. For example, regulators consistently used a case decision memo to provide Treasury with standard documentation of their review and recommendations of CPP applicants. This document contained basic descriptive and evaluative information on all applicants forwarded by regulators, including identification numbers, examination and compliance ratings, recent and post-investment performance ratios, and a summary of the primary regulator’s evaluation and recommendation. Although the case decision memo contained standard types of information, the amount and detail of the information that regulators included in the form evolved over time. According to regulators and Treasury, they engaged in an iterative process whereby regulators included additional information after receiving feedback from Treasury on what they should describe about their assessment of an applicant’s viability. For example, regulators said that Treasury often wanted more detailed explanations for more difficult viability decisions.
According to bank regulatory officials, other changes included additional discussion of specific factors relevant to the viability determination, such as information on identified weaknesses and enforcement actions, analysis of external factors such as economic and geographic influences, and consideration of nonbank parts of holding companies. Treasury officials explained that as CPP staff learned about the types of information the Investment Committee wanted to see, they would communicate it to the regulators for inclusion in case decision memos. Our review of CPP case files indicated that some case decision memos were incomplete and missing important information, but typically only for applicants approved early in the program. For instance, several case decision memos contained only one or two general statements supporting viability, largely for the initial CPP firms. Eventually, the case decision memos included several paragraphs, and some contained multiple pages, with detailed descriptions of the applicant’s condition and viability assessment. Most of the cases in which the regulator did not explain its support for an applicant’s viability occurred in the first month of the program. Some case decision memos lacked other important information, although these memos also tended to be from early in the program. For example, multiple case decision memos were missing either an overall examination rating, all of the component examination ratings, or a performance ratio related to capital levels. Most or all of those were approved prior to December 2008. Further, 104 of 567 case files we reviewed lacked examination rating dates, and almost all of these firms were approved before the end of December 2008. Missing Community Reinvestment Act (CRA) examination dates, which occurred in 214 cases, exhibited a similar pattern.
For applications that regulators sent to Treasury with an approval recommendation, Treasury staff used a “team analysis” form to document their review before submitting the applications to the Investment Committee for its consideration. According to Treasury officials, the team analysis evolved over time as CPP staff became more experienced and different examiners made their own modifications to the form. For example, as the CPP team grew in size, additional fields were added to document multiple levels of review by other examiners. As with the case decision memos, the consistency of information in the team analysis improved with time. For instance, team analysis documents did not include calculations of allowable investment amounts for almost 60 files that we reviewed that Treasury had approved by the end of December 2008. Finally, a small number of case files did not contain an award letter, but all of those approvals had also occurred before the end of December 2008. Treasury and regulators compiled meeting minutes for the CPP Council and Investment Committee, although they did not fully document some early Investment Committee meetings. The minutes described discussions of policy and guidance related to TARP and CPP and also the review and approval decisions for individual applicants. However, records do not exist for four meetings of the Investment Committee that occurred between October 23, 2008, and November 12, 2008. According to Treasury, no minutes exist for those meetings. We did not find any missing meeting minutes for the CPP Council, although at the early meetings, regulators did not collect the initials of voting members to document their recommendations to approve or disapprove applicants they reviewed. Within several weeks however, regulators began using the CPP Council review decision sheets to document council members’ votes in addition to the meeting minutes. 
Although the multiple layers of review for approved institutions enhanced the consistency of the decision process, applicants that withdrew from consideration in response to a request from their regulator received no review by Treasury or other regulators. To avoid a formal denial, regulators recommended that applicants withdraw when they were unable to recommend approval or believed that Treasury was unlikely to approve the institution. Some regulators said that they also encouraged institutions not to formally submit applications if approval appeared unlikely. Applicants could insist that the regulator forward their application to the CPP Council and ultimately to the Investment Committee for further consideration even if the regulator had recommended withdrawal. However, Treasury officials said that they did not approve any applicants that received a disapproval recommendation from their regulator or the CPP Council. Regulators also could recommend that applicants withdraw after the CPP Council or Investment Committee decided not to recommend approval of their application. One regulator stated that all the applicants it suggested withdraw did so rather than receive a formal denial. Treasury officials also said that institutions receiving a withdrawal recommendation generally withdrew and that no formal denials were issued. Almost half of all applicants withdrew from CPP consideration before regulators forwarded their applications to the CPP Council or Treasury. Regulators had recommended withdrawal in about half of these cases where information was available. Over the life of the program, regulators received almost 3,000 CPP applications, about half of which they sent to the CPP Council or directly to Treasury (see table 3). The remaining applicants withdrew either voluntarily or after receiving a recommendation to withdraw from their regulator. 
Three of the regulators—OCC, OTS, and the Federal Reserve—indicated that about half of their combined withdrawals were the result of their recommendations. FDIC, which was the primary regulator for most of the applicants, did not collect information on the reasons for applicants’ withdrawals. According to Treasury officials, those applicants that chose to withdraw voluntarily did so for various reasons, including uncertainty over future program requirements and increased confidence in the financial condition of banks. In addition to institutions that withdrew after applying for CPP, Treasury officials and officials from a regulator indicated that some firms decided not to formally apply after discussing their potential application with their regulator. However, regulators did not collect information on the number of firms deciding not to apply after having these discussions. Although applications recommended for approval received multiple reviews and were coordinated among regulators and Treasury, each regulator made its own decision on withdrawal recommendations. Most regulators conducted initial reviews of applicants at their regional offices, and staff at these offices had independent authority to recommend withdrawal for certain cases. Regulatory officials said that regional staff (including examiners and more senior officials) made initial assessments of applicants’ viability using Treasury guidelines and would recommend withdrawal for weak firms with the lowest examination ratings that were unlikely to be approved. Applicants that received withdrawal recommendations might have had weak characteristics relative to those of other firms and might have received a denial from Treasury. But by following regulators’ suggestions to withdraw before referral to the CPP Council or Treasury, or not to apply at all, these applicants ensured that they would not receive the centralized reviews that could have mitigated any inconsistencies in their initial evaluations.
Further, while regulators had panels or senior officials at their headquarters offices providing central review of approved applicants, most of the regulators allowed their regional offices to recommend withdrawal for weaker applicants or encourage such applicants not to apply, thereby limiting the benefit of that control mechanism. Allowing regional offices to recommend withdrawal without any centralized review may increase the risk of inconsistency within as well as across regulators. In its report on the processing of CPP applications, the FDIC Office of Inspector General found that one of FDIC’s regional offices suggested that three institutions that were well capitalized and technically met Treasury guidelines withdraw from consideration. Regional FDIC management cited poor bank management as the primary concern in recommending that the institutions withdraw. The report concluded that the use of discretion by regional offices in recommending that applicants withdraw increased the risk of inconsistency. The report made two recommendations to enhance controls over the process for evaluating applications: (1) forwarding applications recommended for approval that do not meet one or more of Treasury’s criteria to the CPP Council for additional review and (2) requiring headquarters review of institutions recommended for withdrawal when the institutions technically meet Treasury’s criteria. In commenting on the report, FDIC concurred with the recommendations. Treasury did not collect information on applicants that had received withdrawal recommendations from their regulators or on the reasons for these decisions. According to Treasury officials, Treasury did not receive, request, or review information on applicants that regulators recommended to withdraw and thus could not monitor the types of institutions that regulators were restricting from the program or the reasons for their decisions.
The officials said that Treasury did not collect or review information on withdrawal recommendations in part to minimize the potential for external parties to influence the decision-making process. However, such considerations did not prevent Treasury from reviewing information on applicants that regulators recommended for approval, and concerns about external influence could also be addressed directly through additional control procedures rather than by limiting the ability to collect information on withdrawal recommendations. The lack of additional review outside of the individual regulator or oversight of withdrawal requests by Treasury presents the risk that applicants may not have been evaluated in a consistent fashion across regulators. As the agency responsible for implementing CPP, Treasury would benefit as much from understanding the reasons that regulators recommended applicants withdraw from the program as from understanding the reasons regulators recommended approval. Collecting and reviewing information on withdrawal requests would have served as an important control mechanism and allowed Treasury to determine whether leaving certain applicants out of CPP was consistent with program goals. It also would have allowed Treasury to determine whether similar applicants were evaluated consistently across different regulators in terms of their decisions to recommend withdrawal. Treasury has indicated that it may use the CPP model for new programs to stimulate the economy and improve conditions in financial markets, and unless corrective actions are taken, such programs may share the same increased risk of similar participants not being treated consistently. Specifically, in February 2010, Treasury announced terms for a new TARP program—the Community Development Capital Initiative (CDCI)—to invest lower-cost capital in Community Development Financial Institutions that lend to small businesses.
According to Treasury and regulatory agency officials, Treasury modeled its implementation of the CDCI program after the process it used for CPP, with federal bank regulators—in this case including the National Credit Union Administration (NCUA)—conducting the initial reviews and making recommendations. The CDCI program also uses a council of regulators to review marginal approvals, and an Investment Committee at Treasury reviews all applicants recommended by regulators for approval. As in the case of CPP, control mechanisms exist for reviewing approved applicants, but no equivalent reviews are done for applicants that receive withdrawal recommendations. Thus, the CDCI structure could raise similar concerns about a lack of control mechanisms to mitigate the risk of inconsistency in evaluations by different regulators. The deadline for financial institutions to apply to participate in the CDCI was April 30, 2010, and all disbursements or exchanges of CPP securities for CDCI securities must be completed by September 30, 2010. The Small Business Jobs Act of 2010, enacted on September 27, 2010, established a new Treasury program—the Small Business Lending Fund (SBLF)—to invest up to $30 billion in small institutions to increase small business lending. Treasury may choose to model the new program’s implementation on the CPP process, as it did with the CDCI. Treasury is required to consult with the bank regulators to determine whether an institution may receive a capital investment, and Treasury officials have indicated that they would likely rely on regulators to determine applicants’ eligibility. Unless Treasury also takes steps to coordinate and monitor withdrawal requests by regulators, the disparity that existed in CPP between the control mechanisms for approved applicants and those receiving withdrawal recommendations may persist in this new program, potentially resulting in similar applicants being treated differently. 
Treasury relies on decisions from federal bank regulators concerning whether to allow CPP firms to repay their investments, but as with withdrawal recommendations, it does not monitor or collect information on regulators’ decisions. The CPP institution submits a repayment request to its primary federal regulator and Treasury (see fig. 5). Bank regulatory officials explained that their agencies use existing supervisory procedures generally applicable to capital reductions as a basis for reviewing CPP repurchase requests and that they approach the decision from the perspective of achieving regulatory rather than CPP goals. Following their review, regulators provide a brief e-mail notification to Treasury indicating whether they object or do not object to allowing an institution to repay its CPP investment. Treasury, in turn, communicates the regulators’ decisions to the CPP firms. As of August 2010, 109 institutions had formally requested that they be allowed to repay their CPP investments, and regulators had approved over 80 percent of the requests (see table 4). According to Treasury officials, there have been no instances where Treasury has raised concerns about a regulator’s decision. Officials at the Federal Reserve—which is responsible for reviewing most CPP repayment requests because requests for bank holding companies go to the holding company regulator— explained that they had not denied any requests but had asked institutions to wait or to raise additional capital. In these cases, institutions typically had experienced significant deterioration since the CPP investment, raising concerns about the adequacy of their capital levels. Under the original terms of CPP, Treasury prohibited institutions from repaying their funds within 3 years unless the firm had completed a qualified equity offering to replace a minimum amount of the capital. 
However, the American Recovery and Reinvestment Act of 2009 (ARRA) included provisions modifying the terms of CPP repayments. These provisions require that Treasury allow any institution to repay its CPP investment subject only to consultation with the appropriate federal bank regulator, without considering whether the institution has replaced such funds from any other source or applying any waiting period. Treasury officials indicated that, as a result of these restrictions, they did not provide guidance or criteria to regulators. The officials explained that even before the ARRA provisions limited Treasury’s role, the standard CPP contract terms allowed institutions to repay the funds at their discretion—subject to regulatory approval—as long as they completed a qualified equity offering or the 3-year time frame had passed. The officials said that the contract terms themselves helped ensure that CPP goals were achieved. While the decision to allow repayment ultimately lies with the bank regulators, Treasury is not statutorily prohibited from reviewing their decision-making process and collecting information or providing feedback about the regulators’ decisions. The two regulators responsible for most repayment requests prepare a case decision memo to document their analysis that is similar to the memo they used to document their evaluations of CPP applicants, but Treasury and agency officials said that Treasury does not request or review the memo or other analyses supporting regulators’ decisions. One regulator indicated that it would provide Treasury with a brief explanation of the basis for its decisions to deny repayment requests and a brief discussion of the supervisory concerns raised by the proposed repayment. But Treasury officials stated that they did not review any information on the basis for regulators’ decisions to approve or deny repayment requests.
Without collecting or monitoring such information, Treasury has no basis for considering whether decisions about similar institutions are being made consistently and thus whether CPP firms are being treated equitably. Furthermore, absent information on why regulators made repayment decisions, Treasury cannot provide feedback to regulators on the consistency of regulators’ decision making for similar institutions as part of its consultation role. Regulators have independently developed similar guidelines for evaluating repurchase requests and also established processes for coordinating decisions that involved multiple regulators, and Treasury officials stated that they did not provide input to these guidelines or processes. Regulators said that, in general, they considered the same types of factors when evaluating repayment requests that they considered when reviewing CPP applications. According to the officials, regulators follow existing regulatory requirements for capital reductions—including the repayment of CPP funds—that apply to all of their supervised institutions. In addition to following existing supervisory procedures, officials from the different banking agencies indicated that they also considered a broad set of similar factors, including the following: the institution’s continued viability without CPP funds; the adequacy of the institution’s capital and ability to maintain appropriate capital levels over the subsequent 1 to 2 years, even assuming worsening economic conditions; the level and composition of capital and liquidity; earnings and asset quality; and any major changes in financial condition or viability that had occurred since the institution received CPP funds. Although regulators said that they considered similar factors in their evaluations, without reviewing any information or analysis supporting regulators’ recommendations, Treasury cannot be sure that regulators are using these guidelines consistently for all repayment requests. 
In addition to setting out guidelines for standard repayment requests, the Federal Reserve established a supplemental process to evaluate repayment requests by the 19 largest bank holding companies that participated in the Supervisory Capital Assessment Program (SCAP). As we reported in our June 2009 review of Treasury’s implementation of TARP, the Federal Reserve required any SCAP institution seeking to repay CPP capital to demonstrate that it could access the long-term debt markets without reliance on debt guarantees by FDIC and public equity markets in addition to other factors. As of September 16, 2010, four bank holding companies that participated in SCAP had not repurchased their CPP investment and one had not repaid funds from TARP’s Automotive Industry Financing Program. Bank regulators said that they also shared their repayment process documents with each other to enhance the consistency of their evaluations and recommendations. For example, the Federal Reserve designed a repayment case decision memo that documents the review of repayment requests and the factors considered in making the decision and shared it with other regulators to promote consistency in their reviews. Officials from OTS explained that they used the Federal Reserve’s repurchase case decision memo as the framework for their document while adding certain elements specific to thrifts such as confirmation that FDIC concurrence was received for thrift holding companies with state bank subsidiaries regulated by FDIC. Bank regulatory officials also stated that bank regulators discussed the repayment process during their weekly conference calls on CPP-related topics. OCC also prepares a memo to document its review of repurchase requests that differs from the form used by the Federal Reserve and OTS; however, it contains similar elements such as an explanation of the analysis and the basis for the decision. 
Finally, FDIC officials said that they followed existing procedures for capital retirement applications from FDIC-supervised institutions that included safety and soundness considerations. Bank regulators also established processes for coordinating repayment decisions for CPP firms with a holding company and subsidiary bank supervised by different regulators. For example, Federal Reserve officials said that if a holding company it supervised that had a subsidiary bank under another regulator requested to repay CPP funds, the agency would consult with the subsidiary’s regulator before making a final decision. The officials stated that if the regulator of the subsidiary bank objected to the Federal Reserve’s preliminary decision, the regulators would try to reach a consensus. However, as regulator of the holding company that received the CPP investment, the Federal Reserve has the ultimate responsibility for making the decision as it is considered the primary federal regulator in such cases. According to Federal Reserve officials, when OTS is the primary regulator of a subsidiary thrift, it provides a repayment case decision memo to the Federal Reserve for it to consider as it evaluates the repayment request. OCC also provides the Federal Reserve with its analysis of any subsidiary bank for which it is the primary regulator, and FDIC identifies certain individuals who provide their recommendation and are available to discuss the decision. OTS performs a similar coordination role for CPP repayment requests that involve thrift holding companies with nonthrift financial subsidiaries. However, if Treasury does not collect information on or monitor the processes regulators use to make their repayment decisions, Treasury cannot provide any feedback to regulators on the extent to which they are coordinating their decisions. Approved CPP applicants generally had similar examination ratings and other strength characteristics that exceeded guidelines. 
However, a smaller group of firms had weaker characteristics and were approved after consideration of mitigating factors by regulators and Treasury. The ability to approve institutions after consideration of mitigating factors illustrates the importance of including controls in the review and selection process to provide reasonable assurance of the achievement of program goals and consistent decision making. While Treasury established such controls for applicants that regulators recommended for approval, Treasury’s process was inconsistent in the control mechanisms that existed for applicants that regulators recommended to withdraw from program consideration. These institutions did not benefit from the multiple levels of review that Treasury and regulators applied to approved applicants. For example, regulators could decide independently which applicants they would recommend to withdraw and may have considered mitigating factors differently. Treasury did not collect information on these firms or the reasons for regulators’ decisions. Without mechanisms such as those that exist for approved applicants to control for the risk of inconsistent evaluations across different regulators, Treasury cannot have reasonable assurance that all similar applicants were treated consistently or that some potentially eligible firms did not end up withdrawing after following the advice of their regulator. Treasury officials explained their desire to conduct adequate due diligence on all applicants recommended for approval, but as Treasury is the agency responsible for implementing CPP, understanding the reasons that regulators recommended applicants withdraw would have been equally beneficial for Treasury. Collecting and reviewing information on withdrawal requests would allow Treasury to determine whether applicants that were left out of CPP were evaluated consistently across different regulators and conformed to Treasury’s goals for the program. 
Although Treasury is no longer making investments in financial institutions through CPP, it may continue to use the process as a model for similar programs as it has for the CDCI program. One such program is the SBLF, which Congress authorized in September 2010. SBLF contains elements similar to those of CPP and requires Treasury to administer the program with bank regulators. Unless Treasury makes changes to the CPP model to include monitoring and reviews of withdrawal recommendations, these new programs may share the same increased risk of similar participants not being treated consistently that existed in CPP. As with the approval process, agencies are expected to establish control mechanisms to provide reasonable assurance that program goals are being achieved. Treasury has not established mechanisms to monitor, review, or coordinate regulators’ decisions on repayment requests because, in its view, it lacks the authority to do so and is limited to carrying out regulators’ decisions regarding the institution making the request. However, Treasury is not precluded from providing feedback to help ensure that regulators are treating similar institutions consistently when considering their repayment requests. Although regulators said that they consider similar factors when evaluating CPP firms’ repayment requests, without collecting information on how and why regulators made their decisions, Treasury cannot verify the degree to which regulators’ decisions on requests to exit CPP actually were based on such factors. If Treasury administers programs containing elements similar to those of CPP, such as the SBLF, we recommend that Treasury apply lessons learned from the implementation of CPP and enhance procedural controls for addressing the risk of inconsistency in regulators’ decisions on withdrawals. 
Specifically, we recommend that the Secretary of the Treasury direct the program office responsible for implementing SBLF to establish a process for collecting information from bank regulators on all applicants that withdraw from consideration in response to a regulator’s recommendation, including the reasons behind the recommendation. We also recommend that the program office evaluate the information to identify trends or patterns that may indicate whether similar applicants were treated inconsistently across different regulators and take action, if necessary, to help ensure a more consistent treatment. As part of its consultation with regulators on their decisions to allow institutions to repay their CPP investments to Treasury, and to improve monitoring of these decisions, we recommend that the Secretary of the Treasury direct OFS to periodically collect and review certain information from the bank regulators on the analysis and conclusions supporting their decisions on CPP repayment requests and provide feedback for the regulators’ consideration on the extent to which regulators are evaluating similar institutions consistently. We provided a full draft of this report to Treasury for its review and comment. We received written comments from the Assistant Secretary for Financial Stability. These comments are summarized below and reprinted in appendix III. In addition, we received technical comments on this draft from the Federal Reserve, FDIC, OCC, and Treasury, which we incorporated as appropriate. In its written comments, Treasury agreed to consider our recommendation to review information on applicants that regulators recommend to withdraw from program consideration if Treasury implements a similar program in the future. Treasury stated that the system used to evaluate CPP applicants balanced the objectives of ensuring consistent treatment for all applicants while also utilizing the independent judgment of federal banking regulators. 
Treasury suggested that ensuring regulators hold regular discussions about their standards could be an additional action to help ensure consistency in regulators’ reviews. As we note in the report, Treasury implemented multiple layers of review for approved institutions to enhance the consistency of the decision process. However, applicants that withdrew from consideration in response to a request from their regulator received no review by Treasury or other regulators. Although CPP is no longer making any new investments, the passage of the SBLF, which, according to Treasury officials, would also rely on regulators to determine applicants’ eligibility, presents an opportunity for Treasury to address this area of concern. We continue to believe that unless Treasury takes steps to monitor and provide feedback on regulators’ withdrawal requests, applicants that receive withdrawal recommendations under this new program may not be treated consistently and equitably. Treasury stated that our second recommendation—to review information on regulators’ decisions on repayment requests and provide feedback to regulators—also raises questions about how to balance the goals of consistency and respect for the independence of regulators. However, Treasury acknowledged the potential value of our recommendation and agreed to consider ways to address it in a manner consistent with these considerations. Specifically, Treasury noted that while it is prohibited from imposing standards for repayment as a result of statutory changes to its authority under EESA, it did help facilitate meetings among regulators to discuss when CPP participants would be allowed to repay their investments. Finally, Treasury explained that it does not receive confidential supervisory information about CPP participants on a regular basis, which could limit any information collection envisioned by our recommendation. 
However, as we noted in the report, the two regulators with responsibility for most CPP repayment requests document their analysis in a manner similar to what regulators provided to Treasury when recommending CPP applicants, but Treasury does not review this information. We are sending copies of this report to the Congressional Oversight Panel, Financial Stability Oversight Board, Special Inspector General for TARP, interested congressional committees and members, Treasury, the federal banking regulators, and others. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at [email protected] or (202) 512-8678. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The objectives of our report were to (1) describe the characteristics of financial institutions that received funding under the Capital Purchase Program (CPP), and (2) assess how the Department of the Treasury (Treasury), with the assistance of federal bank regulators, implemented CPP. To describe the characteristics of financial institutions that received CPP funding, we reviewed and analyzed information from Treasury case files on all of the 567 institutions that received CPP investments through April 30, 2009. We gathered information from the case files using a data collection survey that recorded our responses in a database. Multiple analysts reviewed the collected information, and we performed data quality control checks to verify its accuracy. We used the database to analyze the characteristics of CPP applicants including their supervisory examination ratings, financial performance ratios, and regulators’ assessments of their viability, among other things. 
We spoke with Treasury and regulatory officials about their processes for evaluating applicants, in particular about actions they took to collect up-to-date information on firms’ financial condition. We also collected and analyzed information from the records of the CPP Council and Investment Committee meetings to understand how the committees evaluated and recommended approval of CPP applicants. Additionally, we collected limited updated information on all CPP institutions approved through December 31, 2009—for example, their location, primary federal regulator, ownership type, and CPP investment amount—from Treasury’s Office of Financial Stability (OFS) and from publicly available reports on OFS’s Web site to present characteristics for all approved institutions. To describe how Treasury and regulators assessed firms with weaker characteristics, we collected information on the reasons regulators approved these firms and the concerns regulators raised about their eligibility from case files and records of committee meetings. To describe enforcement actions that regulators took against these institutions, we reviewed publicly available documents on formal enforcement actions from federal bank regulators’ Web sites. We also collected information on CPP firms that missed their dividend or interest payments or restructured their CPP investments from OFS and publicly available reports on its Web site. Finally, we collected information from the Federal Deposit Insurance Corporation (FDIC) on the number of CPP firms added to its list of problem banks. To assess how Treasury implemented CPP with the assistance of federal bank regulators, we reviewed Treasury’s policies, procedures, and guidance related to CPP, including nonpublic documents and publicly available material from the OFS Web site. We met with OFS officials to discuss how they evaluated applications and repayment requests and coordinated with regulators to decide on these applications and requests. 
We interviewed officials from FDIC, the Office of the Comptroller of the Currency (OCC), Office of Thrift Supervision (OTS), and the Board of Governors of the Federal Reserve System (Federal Reserve) to obtain information on their processes for reviewing and providing recommendations on CPP applications and repayment requests. We also discussed the guidance and communication they received from Treasury and their methods of formulating their CPP procedures. Additionally, we collected and analyzed program documents from the bank regulators, including policies and procedures, guidance documents, and summaries of their evaluations of applications and repayment requests. We also gathered data from regulators on applicants that withdrew from CPP consideration—including the reason for withdrawing—and on the number of repayment requests and their outcomes. We reviewed relevant laws, such as the Emergency Economic Stabilization Act of 2008 and the American Recovery and Reinvestment Act of 2009, to determine the impact of statutory changes to Treasury’s authority. To assess how Treasury and regulators documented their decisions to approve CPP applicants, we analyzed information from case files and CPP Council and Investment Committee meeting minutes to identify how consistently Treasury and regulators included relevant records of their reviews and decision-making processes. We also discussed with Treasury and regulatory officials the key forms they used to document their decisions and the evolution of these forms over time. To assess Treasury programs that were modeled after CPP, we collected and reviewed publicly available documents from Treasury and interviewed Treasury officials to discuss the nature of these programs—including the Community Development Capital Initiative (CDCI) and Small Business Lending Fund (SBLF)—and plans for implementing them. 
Finally, we met with the Federal Reserve’s Office of Inspector General to learn about its work examining the Federal Reserve’s CPP process and reviewed its report and other reports by GAO, the Special Inspector General for the Troubled Asset Relief Program (SIGTARP), and the FDIC Office of Inspector General. This report is part of our coordinated work with SIGTARP and the inspectors general of the federal banking agencies to oversee TARP and CPP. The offices of the inspectors general of FDIC, Federal Reserve, and Treasury and SIGTARP have all completed work or have work under way reviewing CPP’s implementation at their respective agencies. In coordination with the other oversight agencies and offices and to avoid duplication, we primarily focused our audit work (including our review of agency case files) on the phases of the CPP process from the point at which the regulators transmitted their recommendations to Treasury. We conducted this performance audit from May 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In general, the time frame for the Department of the Treasury and regulators to complete the evaluation and funding process for Capital Purchase Program applicants increased based on three factors. First, smaller institutions had longer processing time frames than larger firms. The average number of days between a firm’s application date and the completion of the CPP investment increased steadily based on the firm’s size as measured by its risk-weighted assets. 
The smallest 25 percent of firms we reviewed had an average processing time of 100 days, followed by 83 days for the next largest 25 percent of firms. The two largest quartiles of firms had average processing times of 72 days and 53 days, respectively. Also, it took longer to complete the investment for smaller firms, as the average time between preliminary approval and disbursement increased as the institution size decreased. Second, private institutions took longer for Treasury and regulators to process than public firms. The average and median processing time frames from application through disbursement of funds were about 6 weeks longer for private firms than for public firms. As with the trend for smaller institutions, private institutions had longer average time frames between preliminary approval and disbursement. Third, when Treasury returned an application to regulators for additional review, it took an average of about 2 weeks to receive a response from regulators. On average, Treasury preliminarily approved these applicants after an additional 3 days of review. Firms that applied earlier had shorter average processing times—from application to disbursement—than firms that applied in later months. The average time from application through disbursement was 70 days for firms that applied in October, 82 days for firms that applied in November, and 89 days for those that applied in December. Also, public firms tended to apply earlier than private firms and larger firms tended to apply earlier than smaller firms. For example, 62 percent of firms that applied in October were public, while 93 percent of firms that applied in December were private—a trend that largely resulted from the later release of program term sheets for the privately held banks. Likewise, 61 percent of firms that applied in October were the largest firms and 84 percent of firms that applied in December were the smallest firms.
Because larger firms and public firms also had shorter average processing time frames than smaller and private firms, this may explain why firms that applied earlier had shorter processing times than those that applied later in the program. The overall process for most firms, from when they applied to when they received their CPP funds, took 2 1/2 months. There were many interim steps within this broad process that can shorten or lengthen the overall time frame. For example, in our June 2009 report on the status of Treasury’s implementation of the Troubled Asset Relief Program, we reported that the average processing days from application to submission to Treasury varied among the different regulators from 28 days to 57 days. Also, Treasury preliminarily approved most firms within 5 weeks from application. The Investment Committee approved most firms the same day it reviewed them; however, it generally took longer to approve firms with the lowest examination ratings, resulting in a longer average review time frame. As previously mentioned, firms that Treasury returned to regulators for additional review took longer to receive Treasury’s preliminary approval, and these firms tended to be those with lower examination ratings. Once Treasury preliminarily approved an applicant, it took an average of 33 days to complete the investment. As with the trends for the overall processing time frames, the final investment closing and disbursement took longer for smaller institutions and private institutions. Daniel Garcia-Diaz (Assistant Director), Kevin Averyt, William Bates, Richard Bulman, Emily Chalmers, William Chatlos, Rachel DeMarcus, M’Baye Diagne, Joe Hunter, Elizabeth Jimenez, Rob Lee, Matthew McDonald, Marc Molino, Bob Pollard, Steve Ruszczyk, and Maria Soriano made important contributions to this report.
Congress created the Troubled Asset Relief Program (TARP) to restore liquidity and stability in the financial system. The Department of the Treasury (Treasury), among other actions, established the Capital Purchase Program (CPP) as its primary initiative to accomplish these goals by making capital investments in eligible financial institutions. This report examines (1) the characteristics of financial institutions that received CPP funding and (2) how Treasury implemented CPP with the assistance of federal bank regulators. GAO analyzed data obtained from Treasury case files, reviewed program documents, and interviewed officials from Treasury and federal bank regulators.
The Energy Policy Act of 1992 (EPAct) provides TVA with certain protections from competition. Additionally, under the TVA Act of 1933 (TVA Act), as amended, TVA is not subject to most of the regulatory and oversight requirements that must be satisfied by commercial electric utilities; instead, all authority to run and operate TVA is vested in its three- member board of directors. In 1959, the Congress amended the TVA Act by establishing what is commonly referred to as the TVA “fence,” which prohibits TVA—with some exceptions—from entering into contracts to sell power outside the service area that TVA and its distributors were serving on July 1, 1957. Under EPAct, TVA is exempt from having to allow other utilities to use its transmission lines to transmit power to customers within TVA’s service area. This legislative framework generally insulates TVA from direct wholesale competition and, as a result, TVA remains in a position similar to a regulated utility monopoly. However, TVA is still subject to some forms of indirect competition. For example, TVA has no protection against its industrial customers relocating outside its service area or businesses deciding not to move to its service area for reasons related to the cost of power. In addition, customers can decide to generate their own power. Accordingly, TVA is currently subject to some competitive pressures. EPAct’s requirement that utilities make their transmission lines accessible to other utilities to transmit (wheel) wholesale electricity has enabled wholesale customers to obtain electricity from a variety of competing suppliers and has resulted in increased wholesale competition in the electric utility industry across the United States. This requirement does not apply to TVA if the power is going to be consumed within its service territory. Most of TVA’s sales are wholesale because they are to its power distributors.
In addition, continuing deregulation efforts in some states have led to competition at the retail level. Industry experts expect that retail deregulation will continue to occur on a state-by-state basis over the next several years. As this occurs, industrial, commercial, and, ultimately, residential consumers will be able to choose their power supplier from among several competitors rather than from one utility monopoly, as is now the case for long distance telephone service and cellular phones. Because EPAct exempts TVA from having power wheeled to consumers in its territory, TVA has not been directly impacted by the ongoing deregulation of the electric utility industry to the same extent as other utilities. However, if TVA were to lose its exemption from the wheeling provisions of EPAct, its customers would have the option of obtaining their power from other sources after the expiration of their contracts. Under legislation proposed by the administration to promote retail competition in the electric power industry, which TVA supports, TVA's exemption from the wheeling provisions of EPAct would be eliminated after January 1, 2003. If the legislation is enacted, TVA may be required to use its transmission lines to transmit the power of other utilities for consumption within TVA's service territory. In addition, the proposed legislation would remove the statutory restrictions that prevent TVA from selling power outside its service territory. Most of TVA’s power is sold to municipal and cooperative power distributors who would be directly affected in the future by retail competition through their customers’ ability to choose alternate power suppliers. Further, deregulation and the possibility of TVA losing its legislative protections have made many of TVA’s customers more aware of price differences among utilities, raised expectations of lower prices, and increased demands for more competitive pricing. 
Because of these ongoing deregulation efforts, TVA management, like many industry experts, anticipates that TVA may lose its legislative protections in the future. Even if TVA does not lose its legislative protections, TVA’s management has recognized the need to take action to better position TVA to be competitive in an era of increasing competition and customer choice and, in July 1997, issued a 10-year business plan with that goal in mind. TVA established a 10-year horizon for implementing the key changes outlined in the plan largely because TVA officials expect to be facing greater competitive pressures within that time frame and many of its long- term contracts with customers could begin to expire in 2007. The published plan, which formed the basis of our evaluation, contains three strategic objectives: reducing TVA’s cost of power in order to be in a position to offer competitively priced power in 2007, increasing financial flexibility by reducing fixed costs, and building customer allegiance. In developing the 10-year plan, TVA set several goals and made certain assumptions about the future. 
These goals and assumptions are as follows:

- the future market price of wholesale power will be 3.4 to 3.5 cents per kilowatthour (kWh) by 2007;
- annual growth in demand through 2007 will average 2 percent;
- fuel costs will increase 1.7 percent annually through 2007;
- improvements in supply chain management will save $50 million annually;
- TVA’s labor force will be reduced and additional cost savings will be achieved through the creation of shared services and other initiatives;
- debt will be reduced by about one-half to about $14 billion, and the balance of deferred assets will be reduced from $8.5 billion to $500 million—TVA’s estimated net realizable value of these assets;
- capital expenditures will be limited to about $600 million annually and increases in demand through 2007 will be met primarily through purchased power;
- $200 million will be saved annually through cost improvement initiatives primarily related to refinancing Federal Financing Bank (FFB) and public bond debt, pursuing changes to its retirement plan, and improving business processes;
- revenues from power sales will be increased by about $325 million annually by implementing a rate increase in 1998 and maintaining it through 2007; and
- customer relations will improve through new contract and pricing options.

To implement the 10-year plan, TVA has developed action plans and has linked the goals and objectives of the 10-year plan to its corporate and business unit goals. For example, one of TVA’s corporate goals is to lower costs; one of the 10-year plan’s strategic objectives is to increase financial flexibility by reducing fixed costs; and the Fossil and Hydro Power business unit’s business plan includes a unit goal of maximizing net return by reducing fixed and variable costs. However, TVA has not yet completed the process of developing performance measures to provide accountability.
TVA expects to develop these performance measures later in 1999, business units will be expected to meet performance goals in 2000, and unit managers and TVA executives are expected to be held directly accountable through the use of compensation incentives in 2001. We evaluated the three strategic objectives of TVA’s plan and the underlying goals and assumptions for reasonableness, achievability, and completeness. As agreed with your offices, we did not (1) assess whether achieving the objectives of the plan would ensure TVA’s future competitiveness or (2) develop independent estimates of key elements of the plan, such as the future market price of power. We relied on comparisons of past performance to future projections, the opinions of industry experts, and economic forecasts made by knowledgeable sources to determine whether the individual components of the plan and the plan as a whole were achievable or reasonable. Additional information on our objectives, scope, and methodology is contained in appendix I. We conducted our review from June 1998 through April 1999 in accordance with generally accepted government auditing standards. We provided a draft of this report to TVA for comment. While generally agreeing with the report’s contents, TVA did provide oral and written comments, which we have incorporated, as appropriate. TVA’s written comments are reproduced in appendix II. Implementation of the 10-year plan is moving TVA in the right direction and addresses important issues facing TVA: its high fixed financing costs and limited financial flexibility to respond to competitive pressure and the large amount of deferred assets that have not been recovered through rates. These deferred assets, which totaled about $8.5 billion as of the beginning of the plan period, are primarily the result of investments made since the 1970s in nuclear generating plants that were never put into production. 
This helped contribute to TVA’s large debt, which totaled about $27 billion as of September 30, 1998, and resultant high fixed financing costs. TVA’s ability to meet its strategic objective of being in a position to offer competitively priced power by 2007 and to improve its financial flexibility hinges largely on its being able to meet its goal of reducing debt by about one-half—to about $14 billion—by 2007. While not specifically stated in the plan, TVA also plans to recover through rates all but $500 million of its deferred asset costs by the end of the period covered by the plan. These issues were highlighted in reports we issued in 1995 and 1997, in which we stated that TVA’s annual financing costs and deferred assets were substantially greater than those of the utilities with which TVA would most likely have to compete. We also reported that these high fixed costs and deferred assets would limit TVA’s flexibility to adjust its rates in a competitive environment. TVA, through its 10-year plan, is taking steps to address these issues. Other utilities are taking similar actions to prepare for competition. For example, utilities we previously identified as those most likely to compete with TVA are also taking steps to refinance debt at lower interest rates and accelerate recovery of the costs of their regulatory assets. However, as we reported in 1995 and 1997, these other utilities generally have lower financing costs and fewer deferred assets than TVA, giving them more flexibility to respond to changing market conditions. To the extent TVA recovers the costs of its deferred assets and increases its financial flexibility, it will increase its ability to adjust rates as necessary to meet changing market conditions. TVA’s focus on these areas before the full advent of competition is key to its chances of being competitive without legislative protections.
The 10-year plan includes costs that correspond to those incurred in prior years and to those reported by other utilities. In addition, the plan considers costs for Year 2000 compliance and likely environmental expenditures under existing Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and Resource Conservation and Recovery Act (RCRA) regulations. However, the plan does not include certain major costs. Specifically, the plan does not include the following:
- The capital costs of additional generating capacity that may be acquired to meet growth in demand for power. The plan assumes that TVA would meet the increasing demand for power over the plan period by purchasing power from other utilities. The costs of the power purchases are reflected as operating costs in the 10-year plan.
- The cost of complying with new environmental regulations.
- The cost of nonpower programs that, to date, have been funded primarily through appropriations. These appropriations, which amounted to $70 million in fiscal year 1998, are expected to be substantially reduced or discontinued beginning in fiscal year 2000.
By not including these costs, TVA will have less cash than contemplated in the plan to pay down debt and reduce fixed costs, which could jeopardize full achievement of the plan’s objectives. TVA estimates that the demand for peaking power in its service territory through 2007 will exceed its current and planned generating capacity.
TVA currently has several options planned or underway to meet a portion of this excess demand, including (1) purchasing new gas-fired combustion turbines, (2) purchasing power that was already under contract when the 10-year plan was issued, (3) modernizing hydro facilities, (4) improving the efficiency of certain existing fossil plants and combustion turbines, (5) contracting for the power from a new lignite plant, (6) upgrading certain nuclear plants, and (7) issuing a request for proposal for purchasing power generated from renewable resources. TVA projected that these measures would not be sufficient to meet the entire increase in demand, and the 10-year plan assumes that TVA will purchase power from other utilities to make up the difference, which is inconsistent with prior year practices. However, since the plan was finalized, TVA officials have told us that they plan to evaluate other power supply options and to invest in new capacity if the resulting long-term increase in costs to produce power (interest and operating expense) would ultimately be less than the cost of purchased power. TVA has already decided to invest in new capacity rather than purchasing power in at least one case—in 1998, TVA announced plans to purchase eight gas-fired combustion turbine units that will be used to replace a like amount of purchased peaking power that was assumed in the original plan. According to TVA officials, while they expect this decision to result in a positive cash flow by fiscal year 2010, the decision to purchase these units will require about $65 million more in cash disbursements through 2007 than would have been necessary to purchase a comparable amount of power from other utilities. 
But, according to TVA’s analysis, while acquiring this new generating capacity in lieu of purchasing power will initially increase capital expenditures and thus reduce the amount of cash available to pay down debt, it will also decrease TVA’s annual cost of power because it will be less expensive for TVA to operate this new equipment than to purchase a like amount of power from other utilities. Decreasing the cost of power should, in the long term, improve TVA’s ability to meet its ultimate objective of offering competitively priced power. In addition, purchasing new generating capacity provides the added benefit of removing the uncertainty of having to rely on another utility for power. Based on our discussions with TVA officials, while it may make economic sense in the long term, additional decisions to increase capacity in lieu of purchasing power from other utilities will likely further reduce TVA’s cash available for debt reduction through 2007, thus jeopardizing its ability to fully meet the plan’s debt reduction goals by 2007. The 10-year plan does not include estimated costs of complying with recent and proposed environmental regulations because TVA did not believe the costs were estimable at the time the plan was developed. Since that time, some of these costs have become estimable. In October 1998, the Environmental Protection Agency (EPA) issued a regulation requiring states to develop plans to reduce nitrogen oxide emissions. TVA now estimates that it could spend about $500 million to $600 million for capital modifications to its fossil plants to comply with state plans that would be implemented under this regulation, which is commonly referred to as the NOx SIP Call. The time frame for TVA’s compliance with the states’ plans is 2003, within the scope of the 10-year plan. 
In October 1998, EPA also issued a proposed regulation regarding regional haze, which EPA expects to be put into effect during the life of the plan but for which EPA does not expect compliance until after 2004. TVA has estimated that this regulation could require capital expenditures of about $450 million to $500 million. It is likely that at least a portion of these costs will be incurred during the time frame of the 10-year plan. Additionally, all of the estimated $500 million to $600 million in costs related to the NOx SIP Call will be incurred during the plan time frame and, thus, will negatively impact TVA’s ability to meet its cost reduction goals. However, as discussed later, TVA officials told us that they still believe TVA will be in a position to offer competitively priced power in 2007 because these same types of costs will be incurred by many other power suppliers and therefore would tend to increase the future market price of power. The plan does not include the costs of nonpower programs that historically have been funded through appropriations but now are likely to be funded through power revenues. The plan assumes that TVA will continue to receive appropriations for its nonpower programs, such as flood control and navigation. While this assumption was reasonable when the plan was developed, TVA’s nonpower appropriations have been sharply curtailed in recent years, from $109 million in fiscal year 1996 to only $7 million in TVA’s budget request for fiscal year 2000. TVA officials have indicated publicly that future appropriations for nonpower programs are likely to be eliminated or substantially reduced and, in accordance with the fiscal year 1998 Energy and Water Development Appropriations Act, have indicated they will use power revenues to continue these nonpower activities. These costs totaled approximately $70 million in fiscal year 1998 and are expected to range from about $50 million to $60 million annually in the future. 
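The cumulative effect of absorbing nonpower costs into power revenues can be roughly bounded. The sketch below is illustrative: the $50 million to $60 million annual cost range is from this report, but treating fiscal years 2000 through 2007 as the affected period is an assumption.

```python
# Rough bound on cumulative cash diverted from debt reduction to nonpower
# programs. The annual cost range is from the report; treating fiscal years
# 2000-2007 (8 years) as the affected period is an assumption.
years = range(2000, 2008)      # fiscal years within the plan horizon
low, high = 50e6, 60e6         # reported annual nonpower cost range

cumulative_low = low * len(years)
cumulative_high = high * len(years)
print(f"${cumulative_low/1e6:.0f}M to ${cumulative_high/1e6:.0f}M "
      f"less cash available for debt reduction")
```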
Since funding nonpower activities with power revenues was not assumed in the 10-year plan, these costs will further reduce the cash available to reduce debt to the level envisioned in the plan. We assessed 10 goals and assumptions TVA made about the future in developing the 10-year plan. Based on economic forecasts, comparisons with TVA’s results of past operations, and the opinions of industry experts, we concluded that seven of the goals and assumptions were achievable or reasonable, two were unachievable, and one was uncertain. The goals and assumptions we assessed, and our conclusions about each, are summarized in table 1 and discussed in detail in the following sections. TVA’s assumption about the future market price of wholesale power is important to the success of the plan because it establishes a target that TVA must achieve in order to offer what it considers to be competitively priced power in 2007. TVA estimated that the price of wholesale power in 2007 would fall between 3.0 cents to 3.7 cents per kWh, with its best estimate being 3.4 to 3.5 cents per kWh. The Energy Information Administration (EIA) within the Department of Energy (DOE) estimated that the price of wholesale power in 2007 would be 3.69 cents per kWh, while Standard and Poor’s DRI estimated that it would be 3.91 cents per kWh. The combined range of EIA and DRI estimates was 3.57 cents to 4.35 cents per kWh. Since TVA’s projection of the future market price of power in the 10-year plan is lower, TVA is forced to be aggressive in pursuing its options to reduce costs and increase revenue. TVA officials said that if they were to prepare the 10-year plan today, their projection for the market price of wholesale power in 2007 would increase to between 3.5 and 3.8 cents per kWh, due primarily to new environmental regulations. TVA officials stated that the new environmental regulations would likely drive up the market price of power and affect many utilities similarly. 
Any upward revision in the projected price of wholesale power in 2007 would have a positive impact on TVA’s ability to achieve the objectives of the plan and would help offset some of the previously identified costs that are not currently considered in the plan—specifically, costs for the new environmental regulations. However, the extent to which new environmental regulations affect any utility depends on the type and condition of its generating equipment, the portion of its power generated by coal, and the types of controls it chooses to meet the new environmental regulations. Although, in aggregate, the mix of generating plants among investor-owned utilities in the states that border on TVA’s service territory is similar to its own, TVA and these utilities will not necessarily all be affected equally, depending on the condition of their equipment and the compliance options they choose. Therefore, the relative impact of the new and proposed environmental regulations on TVA, its neighboring utilities, and the market price of power is uncertain. The 10-year plan assumes that the increase in demand for power in TVA’s service region will average 2 percent per year over the plan period. While TVA’s recent historical increase in demand for power has averaged over 3 percent annually, TVA officials were conservative in this regard because they do not expect this level of growth in demand to continue. We obtained other estimates of the increase in demand for power in TVA’s geographic area from EIA, DRI, and ICF Kaiser Consulting Group, an organization hired by the Edison Electric Institute (EEI), an industry group for investor-owned utilities, to analyze TVA’s 10-year plan. Their estimates of growth in demand ranged from 1.7 percent to 2.5 percent. TVA’s assumption about growth in demand for power is reasonable based on this range of estimates established by industry experts.
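Compounding these annual rates over the plan horizon shows how much the growth assumptions diverge in cumulative terms. The sketch below is illustrative; the growth rates are from this report, and the 10-year compounding window is an assumption.

```python
# Compound growth in power demand over the 10-year plan horizon.
# Annual rates are from the report; a 10-year window is an assumption.
HORIZON = 10

def total_growth(annual_rate, years=HORIZON):
    """Cumulative demand multiple after compounding an annual growth rate."""
    return (1 + annual_rate) ** years

plan = total_growth(0.02)        # TVA's planning assumption
historical = total_growth(0.03)  # TVA's recent historical average (over 3%)
expert_low = total_growth(0.017)
expert_high = total_growth(0.025)
print(f"plan: +{(plan - 1) * 100:.1f}%, historical: +{(historical - 1) * 100:.1f}%")
```

The 2 percent assumption lands inside the range implied by the expert estimates, consistent with the report's conclusion that it is reasonable.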
The 10-year plan assumes that TVA’s fuel cost, including its mix of both nuclear and coal as a fuel source, will increase 1.7 percent annually over the plan period. We obtained a cost increase estimate of 1.4 percent annually from EIA, which was based on a blended coal and nuclear fuel mix. We also obtained a cost increase estimate of 2.2 percent annually from DRI, which was based on using only coal as a fuel. Based on the range of these estimates, TVA’s assumption about fuel costs is reasonable. To control fuel costs, TVA officials stated that they competitively bid all coal contracts, use a cost model to determine which type of coal to purchase, and have reduced inventories to save carrying costs. These fuel-handling initiatives are expected to reduce fuel expense by $1.6 million per year. In addition, TVA has expanded its by-product program and expects revenue from this program to be over $5 million per year. TVA’s efforts to control these costs are positive steps toward the plan’s cost reduction goals. The 10-year plan assumes that improvements made to supply chain management will save, on average, $50 million per year over the 10 years covered by the plan. And, by expanding its supply chain management efforts in the future, TVA officials believe that they can increase efficiency, save money, and maintain quality. For example, through contract management improvements, TVA expects to realize cost savings by consolidating its blanket purchasing contracts, reducing the number of small purchase orders, and renegotiating the terms and conditions of its purchases. From the publication of the 10-year plan in July 1997 through September 1998, TVA had documented savings of about $75 million, some of which represents categories of savings that should occur on a monthly basis. The balance represents savings on individual purchases and other procurement initiatives, some of which may also recur.
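Whether a documented saving recurs determines how it annualizes into a run rate, which is why only the recurring portion should be projected forward. A minimal sketch, using the monthly figures TVA reported for the first half of fiscal year 1999:

```python
# Annualize supply chain savings, counting only the recurring portion.
# Monthly figures are from the report's fiscal year 1999 data.
monthly_total = 6.2e6      # average documented savings per month
monthly_recurring = 4.9e6  # portion expected to recur each month
annual_goal = 50e6         # the plan's average annual savings goal

annual_recurring = monthly_recurring * 12  # conservative annual run rate
print(f"at least ${annual_recurring/1e6:.0f}M per year "
      f"vs. the ${annual_goal/1e6:.0f}M goal")
```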
As TVA implements additional supply chain management initiatives and applies lessons learned from industry and individual plants to other TVA functions, supply chain savings are expected to increase. For the first 6 months of fiscal year 1999, TVA documented savings of about $37 million, or about $6.2 million per month. Of the $6.2 million, about $4.9 million should recur monthly. On an annual basis, TVA’s supply chain savings are therefore likely to be at least $59 million, making this goal achievable. The 10-year plan assumed that TVA would reduce its labor costs by reducing its labor force size from 14,960 at June 30, 1997, to 14,275 by September 30, 1997. Although TVA did not achieve this staffing level by September 30, 1997, it had reduced staff to 14,194 by December 31, 1997, and to 13,818 by September 30, 1998. Since TVA has exceeded its labor force reduction goal, the corresponding cost savings will be greater than originally anticipated. In addition, TVA has taken or planned a number of other actions that will further help reduce labor costs, including negotiating compensation levels with one of its large unions, which TVA expects will help to curtail the rise in future labor costs; replacing higher paid employees with lower paid employees as its aging workforce retires; and implementing a “shared services” concept, which involves consolidating similar operations and reducing duplicative efforts. Although TVA did not quantify the dollar savings it expects through its labor initiatives, TVA’s current efforts in this area should help it reduce costs. The 10-year plan calls for reducing debt by about one-half to about $14 billion by 2007. This reduction, in turn, would lower TVA’s annual interest costs by half—from about $2 billion in 1997 to about $1 billion in 2007. The additional cash that is made available as debt is paid down and interest costs are reduced can be used to further reduce debt. This interrelationship is integral to meeting the debt reduction goal.
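The interrelationship between debt level and interest cost can be illustrated with the plan's round numbers. A simplified sketch: the blended rate is implied by the report's $2 billion of interest on $27 billion of debt, and applying a single uniform rate across all of TVA's debt instruments is a simplifying assumption.

```python
# Halving debt roughly halves annual interest cost, freeing cash that can
# itself retire more debt. Figures are the plan's round numbers; a single
# blended interest rate across all debt is a simplifying assumption.
debt_1997 = 27e9
interest_1997 = 2e9
blended_rate = interest_1997 / debt_1997  # implied average rate, ~7.4%

debt_2007_goal = 14e9
interest_2007 = debt_2007_goal * blended_rate
print(f"implied rate {blended_rate:.1%}, "
      f"2007 interest ~${interest_2007/1e9:.1f}B per year")
```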
In addition to reducing interest costs by reducing debt, TVA is pursuing other interest savings by refinancing outstanding debt, as discussed later in this report. TVA’s ability to meet its strategic objective of being in a position to offer competitively priced power by 2007 depends, to a large extent, on meeting its debt reduction goal. The plan calls for the cash flow needed to achieve this debt reduction to be provided by a combination of planned revenue enhancements, cost savings initiatives, and capital expenditure limitations. However, as discussed previously, the plan excluded additional capital costs related to investing in new generating capacity to meet growth in demand for power, complying with new environmental regulations, and funding nonpower programs that were previously funded through appropriations. As shown in figure 1, TVA exceeded its debt reduction goals for the first 2 years of the plan but does not expect to meet its original estimates for the remaining years due to the additional capital expenditures for new generating capacity and environmental regulations discussed previously. As a result of changes in certain of its cost estimates, TVA now does not expect to reduce debt by one-half until fiscal year 2009, about 2 years after the plan’s original target date. This revised goal is reflected in TVA’s fiscal year 2000 federal budget request. TVA’s original and revised debt reduction timetable is shown in figure 2. TVA’s planned revenue enhancements and cost savings were also intended to provide TVA with the opportunity to recover a portion of the cost of its deferred assets. As noted previously, TVA expects to recover all but about $500 million—its estimated net realizable value—of the deferred assets. However, TVA’s ability to include the costs of these assets in its rates without further rate increases is directly related to its ability to meet the plan’s revenue and cost savings targets.
To the extent TVA does not recover the cost of its deferred assets while it is legislatively protected from competition, competitive pressures could prevent it from selling power at rates sufficient to recover the cost of these assets indefinitely. The plan assumes that capital expenditures will be limited to about $600 million per year and excludes any capital costs for increasing generating capacity and complying with new environmental regulations. However, as discussed previously, known environmental costs alone are an estimated $500 million to $600 million. In addition, costs for complying with a proposed environmental regulation that is likely to be implemented within the plan period could amount to another $450 million to $500 million, some of which would be incurred before 2007. Also, the costs for meeting growth in demand for power with additional generating capacity, which are not fully estimable at this time, could further increase TVA’s required capital expenditures within the period covered by the 10-year plan. Even though upward revisions in TVA’s projected market price of wholesale power could offset some of these additional costs, TVA is likely to exceed its annual $600 million planned capital expenditures limit, thus making this goal unachievable. The 10-year plan calls for TVA to undertake cost improvement initiatives that are assumed to save about $200 million a year over the life of the plan. These initiatives include refinancing TVA’s Federal Financing Bank (FFB) debt, refinancing and replacing other debt at lower interest rates, changing retirement benefits, and improving business processes. Overall, the goals related to these initiatives are achievable. To achieve a large portion of the $200 million annual cost improvement initiatives, the plan called for TVA to obtain authority from the Congress to prepay, without penalty, the $3.2 billion that TVA then owed FFB, then to refinance that debt at lower interest rates.
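The interest arithmetic behind the FFB refinancing can be sketched from the principal amounts and rates TVA reported. This simplified first-year comparison holds balances constant for a full year; actual balances amortize, which is why the report's average annual savings through 2007 is lower than this initial figure.

```python
# First-year interest comparison for the FFB refinancing. Principal amounts
# and rates are from the report; holding balances constant for a full year
# is a simplifying assumption (actual balances decline over time).
old_interest = 3.2e9 * 0.0967                   # $3.2B FFB debt at 9.67%
new_interest = 2.7e9 * 0.0537 + 469e6 * 0.048   # new long-term bonds + short-term debt
first_year_savings = old_interest - new_interest
print(f"roughly ${first_year_savings/1e6:.0f}M saved in the first year")
```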
TVA received that authority in the fiscal year 1999 Treasury and General Government Appropriations Act. TVA refinanced the FFB debt with $2.7 billion of long-term bonds having an average interest rate of 5.37 percent compared to the original 9.67 percent FFB debt, plus $469 million of short-term debt which, as of April 1999, had a current interest rate of about 4.8 percent. Based on the actual interest rates of the refinanced FFB debt, we estimate that the interest savings will total about $1 billion through 2007, providing an average annual savings of about $116 million toward the $200 million plan goal. In addition to reducing interest by refinancing the FFB debt, the plan calls for reducing annual interest costs by refinancing a portion of the $24 billion in outstanding publicly held debt and replacing maturing debt, as needed, with lower interest rate borrowings. Since the plan was issued, TVA has refinanced about $6 billion of long-term public bonds that had an average interest rate of 6.96 percent with long-term bonds having an average interest rate of 6.00 percent and $699 million of short-term borrowings that had about a 4.8 percent interest rate as of April 1999. We estimate that these actions will save an average of $44 million in annual interest expense through 2007. TVA may have further opportunities to refinance additional long-term public bonds at favorable rates since as of April 1, 1999, about $11 billion of TVA’s outstanding long-term public debt had interest rates higher than TVA’s estimated 6.55 percent borrowing rate. Of the $11 billion, $6.3 billion is callable during the plan period; however, none was callable as of April 1, 1999. According to TVA officials, another $20 million to $25 million a year will be saved by changes made to TVA’s retirement plan. The costs of certain retiree health benefits that TVA was paying for from operations were discontinued, while at the same time a supplemental pension benefit was added to the retirement plan. 
The result, according to TVA officials, was a net cash flow saving of about $20 million to $25 million per year. According to TVA officials and as confirmed by TVA’s fiscal year 1998 audited financial statements, the pension plan is currently overfunded because it has an excess of plan assets over projected benefit obligations of $323 million as of September 30, 1998. TVA does not expect to have to make any additional contributions to the pension plan through 2007. TVA also expects to achieve cost savings from business process improvement initiatives that involve bringing teams of TVA staff together to evaluate how TVA does business. For example, TVA has established teams from throughout the organization to (1) improve the technology used to process information, (2) benchmark best practices of industry as well as individual TVA plants, and (3) adopt identified best practices across the organization. While some teams appear to be well established, others are only getting started. Because these initiatives are in the early stages, their benefits have not yet been quantified, and TVA officials told us that they are only now beginning to identify cost saving techniques that can be shared throughout the organization. As shown in figure 3, TVA substantially achieved the $200 million cost savings goal for fiscal year 1999 by reducing interest costs and changing its retirement plan. Assuming that TVA’s annual savings from refinancing debt and changing its retirement plan average $160 million and $20 million, respectively, TVA must save an additional $20 million annually by improving business processes, refinancing additional debt, and reducing other costs to achieve the $200 million savings assumed in the plan. 
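The arithmetic behind the remaining savings gap is straightforward. A sketch using the average annual figures from this report:

```python
# Decompose the $200M annual cost improvement goal into its sources.
# All figures are from the report.
goal = 200e6
refinancing = 160e6   # average annual savings from debt refinancing
retirement = 20e6     # annual savings from retirement plan changes
remaining_gap = goal - (refinancing + retirement)

fy1998_revenues = 6.7e9
gap_share = remaining_gap / fy1998_revenues
print(f"remaining gap ${remaining_gap/1e6:.0f}M, "
      f"or {gap_share:.2%} of fiscal year 1998 operating revenues")
```

The gap works out to well under half of 1 percent of fiscal year 1998 operating revenues, which underlies the report's conclusion that the $200 million goal is feasible.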
Since this required additional savings of $20 million is relatively small—less than half of 1 percent of TVA’s fiscal year 1998 operating revenues of $6.7 billion—we believe that it is feasible that these changes will enable TVA to save the additional amount needed to achieve the $200 million annual cost reduction goal. TVA’s revenues increased significantly in fiscal year 1998 due to a rate increase and to increased energy sales. TVA’s fiscal year 1998 revenues totaled about $6.7 billion, compared to $5.9 billion in fiscal year 1997—an increase of about $800 million. According to TVA, about $350 million of the increase is attributed to the rate increase; the balance is attributable to increased sales volume that resulted from extreme weather in the summer months and other factors. The 10-year plan assumes that this rate increase is sustainable and will generate additional revenues of about $325 million annually through 2007. However, based on the decline in TVA’s average revenue per kWh over the past 10 years, and expectations of increasing competition in the electricity industry, we agree with some industry experts who question TVA’s ability to meet the plan’s assumption about future revenue. Specifically, an analyst from the Congressional Budget Office (CBO) with expertise in issues related to TVA and consultants from ICF Kaiser (which was hired by the Edison Electric Institute to analyze TVA’s 10-year plan) questioned TVA’s ability to meet its future revenue projections given the decline in its average revenue per kWh over the last several years. As shown in figure 4, from 1988 through 1997, TVA’s average revenues per kWh declined steadily, despite a steady increase in the amount of kilowatthours of energy sold. This decline in average revenues per kWh was attributable to credits given to large industrial customers. 
The actual decline in average revenues per kWh over the past 10 years contrasts sharply with the increase projected in the 10-year plan for 1998 through 2007. In order to offer competitive rates to its industrial customers, TVA offers price breaks to its larger industrial customers. In fact, to offset the impact of the last rate increase, TVA expanded its existing credit program to include companies with commitments to purchase firm loads of more than 1 megawatt. (Previously this credit had been limited to industrial customers with firm load commitments of more than 5 megawatts.) Although deregulation of the electric utility industry is expected to put downward pressure on rates, the 10-year plan assumes that TVA will not have to offer any additional price breaks to its large industrial customers through 2007. This assumption is questionable given that TVA has offered new credits to reduce the rates of its larger industrial customers for the past 10 years and competition in the industry is increasing. Because deregulation of the electric utility industry is expected to continue to cause future wholesale and retail electricity prices to fall, TVA will likely feel pressure to continue to reduce rates. In addition, recent media coverage about competition has made many utility customers more aware of price differences among utilities and raised expectations of lower prices. All of these factors combined make it uncertain whether TVA can generate an additional $325 million in annual revenues on a sustained basis through 2007. TVA’s management recognizes that in a competitive environment, its current customers would be free to obtain power from other utilities after giving appropriate notice. Therefore, to improve its future competitive position, TVA’s management decided that it must offer contract flexibility to improve relationships with its customers—159 distributors and 64 industrial and federal concerns. 
The 10-year plan calls for TVA to build customer allegiance by developing contract and pricing structures that better meet its customers’ needs. TVA has taken actions geared toward this goal. For example, one new contract option allows distributors to change the length of their power contracts with TVA from a rolling 10-year term to a rolling 5-year term, after a period of 5 years (5+5 contract). This 5+5 contract, like all of TVA’s power contracts with its distributors, requires the distributor to purchase all of its electric power from TVA. TVA has also implemented a new program for its large industrial customers that permits customers with power usage of more than 1 megawatt annually to be billed under real-time pricing (RTP), which will enable these customers to reduce their electricity costs by adjusting usage patterns. TVA has implemented the RTP program on a 3-year pilot basis. TVA expects that in the long-term, the RTP program will increase revenues by increasing the demand for power. Both the 5+5 contracts and the real-time pricing program are options that TVA developed as a result of input from customers. Customer groups we contacted were pleased with the efforts TVA is making to provide more flexible contracts. Since these options were developed in response to customer input and the initial customer response has been positive, we determined that TVA’s goal to improve customer relations is achievable. As previously discussed, since the 10-year plan was issued in July 1997, actual experience related to certain key goals and assumptions has differed from that projected in the plan, and certain expectations about the future have changed. For example, TVA officials indicated that if they were to update the 10-year plan today, they would increase their projection for the future market price of power and would include costs for new environmental regulations. However, TVA has not formally updated the plan to reflect these and other changes. 
Examples of actual experience that differ from expectations in the plan or goals and assumptions that have changed since the plan was developed, along with their impact on the overall plan, are shown in table 2. Changes in individual goals or assumptions or actual experience that differs from that projected when the plan was developed can affect the entire plan. For example, the unplanned purchase of additional generating capacity results in a decrease in projected cash flow through 2007. This affects the availability of cash to pay down debt, which further impacts interest costs. Funding nonpower programs through power revenues has the same effect. The result of these and other unplanned expenditures, such as for new environmental regulations, is that TVA’s time frame to meet its debt reduction goal has been extended from 2007 to 2009. In contrast, any upward change in TVA’s assumption for the future market price of power increases TVA’s target price for power in 2007. This means that TVA could reduce the level of cost reduction and/or revenue enhancement planned through 2007 and still be in a position to offer competitively priced power at that time. TVA officials told us that they have internally analyzed the combined impact of an upward revision in the projected market price of wholesale power in 2007 and lower-than-planned debt reduction on TVA’s ultimate objective, which is to be in a position to offer competitively priced power in 2007. While TVA officials acknowledge that they will not meet the debt reduction goal by 2007, they believe, based on their internal analyses, that TVA will still be in a position to offer competitively priced power in 2007. However, these analyses have not been formalized, nor have the results been communicated to users of the plan. Although TVA views the plan as a living document and recognizes that projections in the plan will change over time, there is no formal mechanism for communicating changes to those who use the plan. 
In addition, there is no mechanism available to plan users to gauge TVA’s progress toward achieving the plan’s goals and objectives. Therefore, while variances in results, changes in goals and assumptions, and progress toward plan objectives may be known to TVA, they are generally not known by the plan’s users. These users include public policymakers considering legislation that might impact TVA’s future, analysts and investors who use information in the plan when assessing the desirability of TVA’s debt offerings, and customers who are considering alternative sources of electricity in the future. As a result, those who rely on the plan to make investment and policy decisions cannot fully assess the impact of the variances and changes in assumptions on TVA’s ability to meet its strategic objectives as set forth in the plan. The legislation proposed by the administration to promote retail competition in the electric power industry, which was discussed previously in this report, would require that TVA annually report several types of information to the Congress. If enacted, the legislation would require that TVA annually report, among other things, its progress toward its goal of competitively priced power, its prospects for meeting the objectives of the 10-year plan, any changes in assumptions that may have a material effect on its long-range financial plans, the amount by which its debt has been reduced, and the projected amount by which its debt will be reduced. This type of reporting to the Congress would help provide the information needed to monitor TVA’s readiness for a competitive environment. TVA management recognizes the need for TVA to be positioned to compete with other utilities in a changing marketplace. The 10-year plan is moving TVA in the right direction by addressing the most important issues facing TVA: its high fixed financing costs and limited financial flexibility and the large amount of deferred assets that TVA has not recovered through rates. 
The more progress TVA makes in addressing these issues while it maintains its legislative protections, the greater its prospects for being competitive if it loses these protections in the future. Because TVA’s actual experience and assumptions about the future market price of power, capital expenditures, and planned debt reduction have varied in significant ways from those envisioned in the 10-year plan, it is unlikely that TVA will generate sufficient cash flow to reduce debt and the corresponding fixed interest costs to the extent stated in the plan through 2007. This will impact TVA’s ability to recover the cost of its deferred assets to the extent planned. TVA has acknowledged that its debt reduction goal will not be achieved until at least 2009. To the extent it does not sufficiently reduce debt and related fixed costs and increase financial flexibility during the 10-year period, TVA’s ultimate strategic objective—to be able to offer competitively priced power by the end of 2007—could be jeopardized, depending on market conditions at the time. However, since no one knows what the market price of power will be in 2007, it is uncertain whether TVA will be in a position to offer competitively priced power at that time. TVA could fall short of its objectives and still be competitive if its cost of producing power is at or below market. Conversely, TVA could achieve all of its objectives and not be competitive if its cost of producing power is higher than market. Because of changing electricity markets and other economic conditions, it is essential that TVA continuously update the plan and communicate the results of these updates, as well as TVA’s progress toward its goals and objectives, periodically and formally to the Congress as it considers TVA’s future in a deregulating electricity industry and to other users who have a vested interest in TVA. 
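The arithmetic linking a debt-reduction shortfall to higher fixed costs, which underlies the conclusion above, can be sketched numerically. The $14 billion target is the plan's figure from the report; the 7 percent average interest rate and the $17 billion shortfall scenario are hypothetical assumptions for illustration only.

```python
# Sketch of why a shortfall in debt reduction raises TVA's fixed costs.
# The plan's 2007 debt target ($14 billion) is from the report; the
# 7 percent average interest rate and the $17 billion shortfall
# scenario are hypothetical assumptions for illustration.

avg_interest_rate = 0.07   # hypothetical average rate on TVA debt
planned_debt = 14.0        # $ billions, the 10-year plan's 2007 target
actual_debt = 17.0         # $ billions, a hypothetical shortfall outcome

planned_interest = avg_interest_rate * planned_debt  # $0.98B per year
actual_interest = avg_interest_rate * actual_debt    # $1.19B per year
extra_fixed_cost = actual_interest - planned_interest

print(f"extra annual fixed interest: ${extra_fixed_cost:.2f} billion")
```

At the assumed rate, every billion of debt not retired carries roughly $70 million a year in recurring interest, cash that is then unavailable for rate reduction or recovery of deferred assets.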
We recommend that the Chairman of the Board of Directors of the Tennessee Valley Authority move quickly to improve the reporting of information to the plan’s users. Specifically, we recommend that the Chairman ensure that TVA take the following actions:

Revise and reissue the plan to reflect evolving conditions and operating plans and their impact on TVA’s ability to meet the strategic objectives outlined in the plan by 2007. TVA should also include a discussion of its plans to recover the costs of its deferred assets. As further significant changes occur, the plan should be updated to communicate these changes to plan users.

Periodically communicate its progress toward achieving the 10-year plan’s strategic objectives to those who rely on the information contained in the plan. One option would be for TVA to expand its discussion of the 10-year plan in its annual reports, including reporting how actual results compare to all of the plan’s key goals and assumptions, including those for revenues, debt reduction, capital expenditures, cost savings, and the market price of power; progress toward achieving performance measures related to the plan; and an overall assessment of whether TVA is on course to provide competitive power in 2007.

In oral and written comments on a draft of this report, TVA generally agreed with the report’s contents. TVA also provided us with technical comments, which we have incorporated as appropriate. TVA’s written comments are reproduced in appendix II and discussed below. TVA commented that the market price of power is the most significant uncertainty in achieving its goal to be in a competitive pricing position as the industry is deregulated. TVA also stated that the target cost of power in the 10-year plan is aggressive and that it has not yet altered its estimate of the future market price of power, even though there are indications of upward movement in market price forecasts.
Our report noted that TVA’s target for the cost of its power in the 10-year plan is lower than projections by other knowledgeable sources and therefore forces TVA to be aggressive in pursuing its options to reduce costs and increase revenue. During the course of our review, TVA officials told us that if they were to formally update the 10-year plan, they would increase their projection of the future market price of power. As we note in our report, TVA has not formally updated the 10-year plan, even though certain expectations about the future have changed and actual experience related to key goals and assumptions has differed from projections in the plan. TVA stated that while it will likely incur the costs of funding traditional river management programs that have historically been funded largely through appropriations, the Congress has also enacted legislation allowing TVA to refinance its FFB debt for a savings of over $100 million a year. While we agree with both of these statements, the anticipated savings from refinancing the FFB debt were included in the 10-year plan, but the additional cost of funding traditional river management programs was not. Therefore, for purposes of gauging progress toward achievement of the plan’s goals, the planned savings cannot be assumed to offset these unplanned expenditures. Our report separately discusses each of these points. TVA noted that although its decision to purchase additional generating capacity for periods of peak demand rather than purchasing power from other utilities will adversely impact its ability to reduce debt to the extent planned, it will also help TVA achieve a lower cost of power and improve system reliability. Our report acknowledges these points and states that the decision will impact TVA’s ability to reduce debt, but that TVA believes the decision will reduce the cost of its power and remove the uncertainty of having to rely on another utility for power. 
We were asked to determine whether the goals and assumptions in TVA’s 10-year plan are achievable or reasonable in light of TVA’s strategic objectives to (1) reduce the cost of power to a competitive level, (2) increase financial flexibility by reducing fixed costs, and (3) build customer allegiance. Specifically, we were asked to determine whether the 10-year plan (1) addresses key issues facing TVA, (2) takes into consideration all applicable costs and revenue sources, (3) contains assumptions that are reasonable and in line with industry estimates and expectations, and (4) has been updated to reflect significant changes in key assumptions or actual experience that differs from TVA’s expectations when the plan was developed. In addition, you asked us, based on our analysis of the plan, to conclude whether TVA is likely to achieve the plan’s strategic objectives. TVA’s plan consists of three strategic objectives, with goals and assumptions designed to help accomplish the strategic objectives. We evaluated the achievability and reasonableness of 10 of the goals and assumptions and their impact on TVA’s ability to accomplish its 3 objectives. 
Specifically, we assessed the achievability and reasonableness of the following goals and assumptions:

the future market price of wholesale power will be 3.4 to 3.5 cents per kilowatthour;
annual growth in demand through 2007 will average 2 percent;
fuel costs will increase 1.7 percent annually through 2007;
improvements in supply chain management will save $50 million;
TVA’s labor force will be reduced, and additional cost savings will be achieved through the creation of shared services and other initiatives;
debt will be reduced by about one-half to about $14 billion, and the balance of deferred assets will be reduced from $8.5 billion to $500 million—TVA’s estimated net realizable value of these assets;
capital expenditures will be limited to about $600 million annually, and increases in demand through 2007 will be met primarily through purchased power;
$200 million will be saved annually through cost improvement initiatives primarily related to refinancing Federal Financing Bank (FFB) and public bond debt, pursuing changes to its retirement plan, and improving business processes;
revenues from power sales will be increased by about $325 million annually by implementing a rate increase in 1998 and maintaining it through 2007; and
customer relations will improve through new contract and pricing options.

As agreed with your offices, we did not (1) assess whether achieving the objectives of the plan would ensure TVA’s future competitiveness or (2) develop independent estimates of key elements of the plan, such as the future market price of power. Instead, we relied on comparisons of past performance to future projections, the opinions of industry experts, and economic forecasts made by knowledgeable sources to determine whether the individual components of the plan and the plan as a whole were achievable and reasonable. During the course of our work, we contacted the following organizations. Marshall L.
Hamlett, Senior Auditor
Pursuant to a congressional request, GAO provided information on the Tennessee Valley Authority's (TVA) 10-year business plan, focusing on whether the 10-year plan: (1) addresses key issues facing TVA; (2) takes into consideration all applicable costs and revenue sources; (3) contains goals and assumptions that are achievable or reasonable and in line with industry estimates and expectations; and (4) has been updated to reflect significant changes in key goals and assumptions or actual experience. GAO noted that: (1) implementation of the 10-year plan is moving TVA in the right direction toward its strategic objectives by addressing the key issues it faces--its high fixed financing costs and large investment in nonproducing and other deferred assets that have not been recovered through utility rates; (2) the plan calls for lowering fixed costs by reducing outstanding debt by about one-half--to about $14 billion--by 2007; (3) the plan also provides for the recovery through rates of all but about $500 million of the $8.5 billion in deferred assets outstanding as of the plan issuance date; (4) the year 2007 is key for TVA because it expects to face greater competitive pressures by then and because many long-term contracts with customers could expire at about that time; (5) the plan emphasizes changes designed to enable TVA to offer competitive rates by the end of 2007; (6) while focusing on the right issues, TVA's plan does not fully address certain costs; (7) the plan does not include: (a) the capital costs of increasing generating capacity to meet the growth in demand for power as is now planned; instead, it provides for meeting the growth in demand for power by purchasing power from other utilities; (b) the costs of complying with new and proposed environmental regulations; and (c) the costs of nonpower programs that were formerly fully funded through appropriations; (8) TVA estimates that these additional costs will total about $1 billion over the remaining life 
of the plan and will likely be higher; (9) GAO also found that while many of the plan's goals and assumptions were achievable or reasonable, certain of them were not, largely due to the additional expected costs described above; (10) some of these additional costs could be offset by increases in expected market rates of power in 2007; (11) because of the additional costs not addressed in the 10-year plan, it is unlikely that TVA can reduce its debt to the extent planned by 2007; (12) estimates in TVA's fiscal year 2000 federal budget request indicate that its debt reduction goal will likely not be achieved until 2009; (13) however, since it is not possible to accurately predict what the market price of power will be in 2007, TVA could still achieve its objective of offering competitively priced power, even if it does not fully achieve the plan's other goals and objectives; and (14) while TVA has acknowledged major changes to several of the plan's goals and assumptions and has factored these into its internal planning, the 10-year plan has not been formally updated to reflect these changes.
Railroads are the primary mode of transportation for many products, especially for such bulk commodities as coal and grain. Yet by the 1970s, American freight railroads were in a serious financial decline. Congress responded by passing the Railroad Revitalization and Regulatory Reform Act of 1976 and the Staggers Rail Act of 1980. These acts reduced rail regulation and encouraged greater reliance on competition to set rates. Railroads have also continued to consolidate (through such actions as mergers, purchases, changes in control, and acquisitions) to reduce costs, increase efficiencies, and improve their financial health. The 1976 act limited the authority of the Interstate Commerce Commission (now the Surface Transportation Board) to regulate rates to instances in which there is an absence of effective competition—that is, where a railroad is “market dominant.” The 1980 act made it federal policy to rely, where possible, on competition and the demand for rail services (called demand-based differential pricing) to establish reasonable rates. Differential pricing recognizes that inherent in the rail industry cost structure are joint and common costs that cannot be attributed to particular traffic. Under demand-based differential pricing, railroads recover a greater proportion of these unattributable costs from rates charged to those with a greater dependency on rail transportation. Among other things, the 1980 act also (1) allowed railroads to market their services more effectively by negotiating transportation contracts (generally offering reduced rates in return for guaranteed volumes) containing confidential terms and conditions; (2) limited collective rate-setting to those railroads actually involved in a joint movement of goods; and (3) permitted railroads to change their rates without challenge in accordance with a rail cost adjustment factor.
Furthermore, both acts required the Interstate Commerce Commission to exempt railroad transportation from economic regulation in certain instances. The Staggers Rail Act required exemptions where regulation is not necessary to carry out rail transportation policy and where a transaction or service is of limited scope, or where regulation is not needed to protect shippers from an abuse of market power. During the 1980s and 1990s, railroads used their increased pricing freedoms to improve their financial health and competitiveness. In addition, the railroad industry has continued to consolidate in the last 2 decades to become more competitive by reducing costs and increasing efficiencies. (This consolidation continues a trend that has been occurring since the nineteenth century.) In 1976, there were 30 independent Class I railroad systems, consisting of 63 Class I railroads. (Class I railroads are the nation’s largest railroads.) Currently there are 7 railroad systems, consisting of 8 Class I railroads. Half of that reduction was attributable to consolidations. The 8 Class I railroads are the Burlington Northern and Santa Fe Railway Co.; CSX Transportation, Inc.; Grand Trunk Western Railroad, Inc.; Illinois Central Railroad Co.; Kansas City Southern Railway Co.; Norfolk Southern Railroad Co.; Soo Line Railroad Co., and Union Pacific Railroad Co. The Surface Transportation Board is the industry’s economic regulator. The board is a decisionally independent adjudicatory agency administratively housed within the Department of Transportation. Among other things, board approval is needed for market entry and exit of railroads and for railroad mergers and consolidations. The board also adjudicates complaints concerning the quality of rail service and the reasonableness of rail rates. Under the ICC Termination Act of 1995, the board may review the reasonableness of a rate only upon a shipper’s complaint. 
Moreover, the board may consider the reasonableness of a rate only if (1) the revenue produced is equal to or greater than 180 percent of the railroad’s variable costs for providing the service and (2) it finds that the railroad in question has market dominance for the traffic at issue. If the revenue produced by that traffic equals or exceeds the statutory threshold, then the board examines intramodal and intermodal competition to determine whether the railroad has market dominance for that traffic and, if so, whether the challenged rates are reasonable. From 1997 through 2000, there were two periods during which major portions of the rail industry experienced serious service problems. The first began in July 1997, during implementation of the Union Pacific Railroad and Southern Pacific Transportation Company merger. As a result of aging infrastructure in the Houston, Texas, area that was inadequate to cope with a surge in demand, congestion on this system began affecting rail service throughout the western United States. Rail service disruptions and lengthy shipment delays continued through the rest of 1997 and into 1998. The board issued a series of decisions that generally were designed to enhance the efficiency of freight movements by changing the way rail service is provided in and around the Houston area. These decisions principally focused on the Houston/Gulf Coast area and included an emergency service order to address the service crisis. In addition, CSX Transportation and Norfolk Southern Corporation began experiencing service problems in the summer and early fall of 1999, shortly after they began absorbing their respective parts of the Consolidated Rail Corporation (Conrail). These service problems caused congestion and shipment delays, primarily in the Midwest and Northeastern parts of the country. By early 2000, those service problems had largely been resolved without formal board action. 
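The board's two-part jurisdictional test described above can be expressed as a simple check: a rate is eligible for review only if revenue is at least 180 percent of the railroad's variable costs and the railroad is found market dominant for the traffic. The dollar figures in the sketch below are hypothetical.

```python
# Sketch of the board's two-part rate-review test:
# (1) revenue must be at least 180 percent of variable cost, and
# (2) the railroad must be market dominant for the traffic at issue.
# The dollar figures in the examples are hypothetical.

def rate_reviewable(revenue: float, variable_cost: float,
                    market_dominant: bool) -> bool:
    """Return True only if both statutory conditions are met."""
    revenue_to_variable_cost = revenue / variable_cost
    return revenue_to_variable_cost >= 1.80 and market_dominant

print(rate_reviewable(2.00, 1.00, market_dominant=True))   # ratio 2.0: reviewable
print(rate_reviewable(1.50, 1.00, market_dominant=True))   # below 180%: not reviewable
print(rate_reviewable(2.00, 1.00, market_dominant=False))  # no dominance: not reviewable
```

Note that the threshold is a screen, not a verdict: clearing 180 percent only lets the board go on to examine intramodal and intermodal competition and, if dominance is found, rate reasonableness.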
Rail rates generally have continued to fall nationwide for the commodities we studied and in the specific markets we reviewed. However, in several markets rates either increased over the 4-year period for certain commodities or increased and then later fell, resulting in an overall decrease for the period. There may be a variety of reasons why rail rates change over time, including increases or decreases in production or export of various commodities (such as coal or grain); changes in railroad costs; changes in use of contracts that tie rates to specific volumes of business; service problems that could affect the ability of railroads to supply railcars, crews, and locomotive power to meet the demand for rail transportation; or the degree of competition. We do not attempt to identify and explain all the various reasons for changes in the rail rates we examined. Rather, our aim is to put rate changes for particular commodities into context with some of the economic or rail industry conditions that might have affected them from 1997 through 2000. Rates for coal, grain (wheat and corn), chemicals (potassium and sodium compounds and plastic materials or synthetic fibers, resins, or rubber), and transportation equipment (finished motor vehicles and motor vehicle parts or accessories) generally fell from 1997 through 2000. (See fig. 1.) These decreases followed the general trend we previously reported on for the 1990–1996 period and, as before, tended to reflect railroad cost reductions brought about by continuing productivity gains in the railroad industry that have allowed railroads to reduce rates in order to be competitive. From 1997 through 2000, the rates for coal decreased slightly but steadily from about 1.5 cents per ton-mile to about 1.4 cents per ton-mile. Coal production fluctuated over this period but generally decreased from about 1.12 billion tons in 1998 to about 1.08 billion tons in 2000. 
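The cents-per-ton-mile measure used throughout these rate comparisons can be illustrated with a short sketch; the shipment figures below are hypothetical, chosen only so the result lands in the same range as the coal rates discussed above.

```python
# Sketch of the rate measure used in these comparisons: cents per
# ton-mile, i.e., freight revenue divided by (tons hauled * miles moved).
# The shipment is hypothetical: 10,000 tons of coal hauled 800 miles
# for $112,000 in freight revenue.

revenue_dollars = 112_000.0
tons = 10_000.0
miles = 800.0

rate = revenue_dollars / (tons * miles) * 100  # convert dollars to cents
print(f"{rate:.2f} cents per ton-mile")
```

At these assumed figures the movement works out to 1.40 cents per ton-mile, the same order of magnitude as the nationwide coal rates cited above.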
The production of coal shipped for export also generally decreased from about 83.5 million tons in 1997 to 58.5 million tons in 2000. The Energy Information Administration attributed these decreases to, among other things, a drawdown in coal stocks by utilities and reluctance on the part of some coal producers to expand production. Lower demand for rail transportation resulting from lower production generally results in lower rail rates. However, the demand for rail transportation (and consequently rail rates) can also be affected by changes in coal held as inventory and other supply-related factors. Board officials suggested that the decrease in coal rates during this period could also be attributed in part to increasing competition between low-sulfur Powder River Basin coal from the West and higher-sulfur Eastern coal, and to the expiration and resulting renegotiation of many long-term coal transportation contracts.

The rates for wheat increased slightly from 1997 to 1998—from about 2.46 cents per ton-mile in 1997 to about 2.47 cents per ton-mile—before falling back in 1999 and 2000 to just under 2.4 cents per ton-mile. Rates for wheat may have decreased because overall production decreased, from 67.5 million tons for the 1997–1998 season to 60.5 million tons for the 2000–2001 season, despite a modest increase in demand for exports (from 28.1 million tons to 30.0 million tons over the same period). Preliminary information indicates that in 1998, the most recent year for which data were available, railroads transported over half (about 55 percent) of all wheat shipments.

Corn rates generally decreased from about 2 cents per ton-mile in 1997 to about 1.8 cents per ton-mile in 2000. Corn production fluctuated between 1997 and 1999 (the latest year for which data are available), increasing from 9.2 billion bushels in 1997 to 9.8 billion bushels in 1998 before falling back to 9.4 billion bushels in 1999.
However, the domestic use of corn (the primary use of corn) increased by about 4 percent—from 7.3 billion bushels in 1997 to 7.6 billion bushels in 1999. This increase suggests, all else being equal (including rail costs), greater demand for transportation and possibly higher rail rates. Yet, rail rates for corn are influenced by a number of factors. Significant amounts of corn are produced in areas accessible to navigable waterways and, therefore, the transportation of corn is less dependent on rail. (About 25 percent of corn was shipped by rail in 1998, the latest year for which data are available.) In addition, rates may be affected by the supply of corn. From 1997 through 1999 (the latest year for which data are available) the total supply of corn increased from 10.1 billion bushels to 11.2 billion bushels. It is possible that intermodal competition, increased domestic use of corn, and an increasing supply of corn may have all influenced rail rates for corn.

The rates for chemicals (as illustrated by rates for potassium/sodium and plastics) decreased slightly from 1997 through 2000 at a steady rate. According to data from the American Chemistry Council, the production of chemicals in the potassium/sodium classification increased between 1997 and 1999. Plastics production also steadily increased over the period. These data suggest that, all things being equal, rail rates should have increased over the period because of a higher demand for rail transportation. However, over 65 percent of chemicals are transported less than 250 miles, a distance that is truck competitive, which may indicate that railroad rates are sensitive to truck competition. In addition, not all chemicals that are produced require immediate transportation. An official with the American Chemistry Council told us that chemical manufacturers often produce a product, load it onto railcars, and store the railcars until the product is sold, at which point it is transported to destination.
Although the tonnage of chemicals shipped by rail generally increased between 1997 and 2000, railroads accounted for only 20 percent of the tonnage transported in 2000. This is up slightly from the 19 percent transported in 1997. Rates for motor vehicles and parts also generally decreased over the 4-year period, but not at a steady rate. This occurred during a time when U.S. car and truck production generally fluctuated between 12 million and 13 million units. Car production, in particular, generally decreased over the period from about 5.9 million units to about 5.5 million units, according to Crain Communications, Inc., a publisher of Automotive News. The automotive industry is heavily dependent on railroads, and the Association of American Railroads—a railroad trade group—estimates that railroads transport about 70 percent of finished motor vehicles. Automotive production declines, among other things, might have contributed to generally decreasing rail rates. Data on auto parts production were not available. In its own study, the board found that the average, inflation-adjusted rail rate had continued a multi-year decline in 1999 and that, since 1984, real rail rates had fallen 45 percent. It found that real rail rates had decreased for both eastern and western railroads. According to the board, the results of its study implied that, although railroads retain a degree of pricing power in some instances, nearly all productivity gains achieved by railroads since the 1980s (when railroad economic regulation was reduced) have been passed on to rail customers in the form of lower rates. The board estimated that rail shippers would have paid an additional $31.7 billion for rail service in 1999 if revenue per ton-mile had remained equal to its 1984 inflation-adjusted level. 
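The counterfactual behind the board's estimate can be sketched as follows. The ton-mile volume and the 1984 rate used below are illustrative round numbers, not the board's actual inputs, so the sketch's result differs from the board's $31.7 billion figure; the 45 percent real-rate decline is the figure from the board's study.

```python
# Sketch of the board's counterfactual: what shippers would have paid
# in 1999 had revenue per ton-mile stayed at its inflation-adjusted
# 1984 level. The ton-mile volume and 1984 rate are illustrative
# round numbers, not the board's actual inputs, so the result differs
# from the board's $31.7 billion estimate.

ton_miles_1999 = 1.4e12   # hypothetical ton-miles moved in 1999
real_rate_1984 = 0.0375   # hypothetical 1984 rate, $/ton-mile (1999 dollars)
real_rate_1999 = real_rate_1984 * (1 - 0.45)  # board: real rates fell 45%

actual_revenue = real_rate_1999 * ton_miles_1999
counterfactual_revenue = real_rate_1984 * ton_miles_1999
extra_paid = counterfactual_revenue - actual_revenue

print(f"extra payment at 1984 rates: ${extra_paid / 1e9:.1f} billion")
```

The structure of the calculation, not the inputs, is the point: the counterfactual bill is the 1984 real rate applied to 1999 traffic, and the difference from actual revenue is the productivity gain passed through to shippers.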
The board acknowledged, however, that even though real rail rates had decreased overall, individual rates might have increased and, further, that some rail customers might feel disadvantaged if their rates did not fall to the same extent as their competitors’ rates. Our analysis of rail rates for coal, grain (corn and wheat), chemicals (potassium, sodium, plastics, and resins), and motor vehicles and motor vehicle parts in selected high-volume transportation markets generally showed that rates continued to decrease from 1997 through 2000. However, this was not true in all markets. Rail rates may have been sensitive to competition, and rail rates were generally higher in areas considered to have less railroad-to-railroad competition.

Real rail rates for coal, although fluctuating in some markets, generally decreased from 1997 through 2000. In virtually every market we analyzed—both in the East (Appalachia) and in the West (Powder River Basin)—rates decreased. For example, on a medium-distance route from Central Appalachia to Orlando, Florida, rates decreased from about 2.2 cents per ton-mile in 1997 to 1.7 cents per ton-mile in 2000. (See fig. 2.) The 2000 rate was also substantially less than the rate of 2.6 cents per ton-mile in 1990. Competition may have played a role in the decrease in coal rates that we examined. In the West, the two Class I railroads that served the Powder River Basin during the 1990–1996 period, the Burlington Northern and Santa Fe Railway and the Union Pacific, continued to serve the market from 1997–2000. In the East, three Class I railroads served Central Appalachia until mid-1999: Conrail, CSX Transportation, and Norfolk Southern. Following its acquisition by the latter two carriers, Conrail began being absorbed into CSX Transportation and Norfolk Southern in June 1999, with the latter two carriers continuing to serve the market.
As part of this transaction, certain areas of Pennsylvania and West Virginia (part of the Appalachia Coal Supply Region) that had been served exclusively by Conrail, although conveyed to Norfolk Southern, are available to CSX on an equal-access basis for 25 years, subject to renewal. Finally, rail rates for coal can be influenced by coal production as well as existing supplies of coal. In general, coal production in the Appalachian area decreased from 1997 to 2000—from about 468 million tons to about 421 million tons. On the other hand, coal production in the Western region (which includes the Powder River Basin) increased between 1997 and 1999—from about 451 million tons to about 512 million tons—before falling back to 510 million tons in 2000. In its 2000 review, the Energy Information Administration noted that coal production in Wyoming (which dominates coal production in both the West and the United States) was driven higher by an increasing penetration of Powder River Basin coal into Eastern markets—an action creating competition for coal produced in the East. Board officials told us that in order for Powder River Basin coal to penetrate Eastern markets, railroads have had to offer very low transportation rates. In addition, they suggested that rail rates for Powder River Basin coal are lower than rail rates for Appalachian coal because of the ability of railroads to use larger (110-car unit) trains to pick up the coal and because of more favorable terrain (flatter and straighter routes) to transport the coal from the mines. Coal supply (as measured by year-end coal stocks) generally fluctuated over the 1997 through 2000 period— increasing from about 140 million tons in 1997 to 183 million tons in 1999, before falling back to 142 million tons in 2000. From 1997 through 2000, real rail rates for shipments of wheat and corn generally stayed the same or decreased for the markets that we reviewed. 
For example, wheat shipments moving over medium-distance (501 miles to 1,000 miles) routes generally followed this pattern. (See fig. 3.) The exception was wheat shipped from the Oklahoma City, Oklahoma, economic area to the Houston, Texas, economic area. On this route, rail rates generally increased by 12 percent—from 1.9 cents per ton-mile in 1997 to 2.2 cents per ton-mile in 2000. The largest increase occurred between 1997 and 1998, when rates went from 1.9 to 2.1 cents per ton-mile. This increase came at about the same time as the service crisis in the Houston/Gulf Coast area that delayed the delivery of railcars and, in some cases, halted freight traffic. Although board officials did not think railroads used rail rates to allocate the supply of railcars during this time, such an action could have occurred for particular commodities on particular routes. The increases also came at the same time as wheat production in Oklahoma rose from about 170 million bushels in 1997 to just under 200 million bushels in 1998. This may be consistent with an increase in the handling of bulk grain by the Port of Houston Authority between 1997 and 1998, from about 388,000 tons to 1.2 million tons. These factors may also have contributed to a general increase in rail rates for these movements. Even with these increases, the rail rate in 2000 was still less than it was in 1990—about 2.2 cents per ton-mile in 2000, as compared with 2.5 cents per ton-mile in 1990. Rail rates for wheat from the northern plains locations of the Great Falls, Montana, and Grand Forks, North Dakota, economic areas on medium-distance routes generally decreased over the period. Wheat production and demand for rail transportation may have been influencing factors. Although the volume of export wheat was increasing over the 1997 to 2000 period, wheat production in various states fluctuated.
For example, wheat production in Montana steadily declined between 1997 and 2000, from about 182 million bushels to about 154 million bushels. In contrast, wheat production in North Dakota (the second highest wheat producing state behind Kansas in 2000) fluctuated between about 240 million bushels and 315 million bushels, alternately increasing and decreasing beginning in 1997. Whether wheat is transported or not depends on many factors, including the price of wheat and the amount of carryover stocks from year to year. In 2001, the U.S. Department of Agriculture reported that grain car loadings on railroads had steadily decreased over the previous 5 years, with the exception of 1999. This was attributed, at least partially, to farmers holding on to grain because of large harvests, large carryover stocks, and low prices. Rate trends between 1997 and 2000 for the shipment of corn were similar to those for wheat. Again, rate trends for corn can be illustrated in the rail rates for medium-distance routes. (See fig. 4.) The rates for most of these routes generally either stayed about the same or decreased over the period. Similar patterns are seen in the other distance categories. However, some rail rates on short-distance routes increased between 1999 and 2000. This was particularly true for corn shipments within the Minneapolis, Minnesota, economic area, where rates went from about 3.5 cents per ton-mile in 1999 to about 4.2 cents per ton-mile in 2000. The specific reasons for this increase are not clear. Corn production in Minnesota generally decreased during this period, from about 990 million bushels in 1999 to about 957 million bushels in 2000. However, in November 1999, the U.S. Department of Agriculture reported that, while corn production and exports were expected to decrease, the domestic use of corn was expected to remain strong, and that domestic use of corn was heavily dependent on rail and truck transportation.
Other than livestock feed, domestic use of corn includes corn sweeteners (used in the soft drink industry) and ethanol (a fuel additive). Minnesota also has an active livestock industry, and the state ranked third highest in the country in the number of hogs and pigs produced and hogs marketed in 1999 (behind Iowa and North Carolina). Rail rates for wheat and corn shipments appear to be sensitive to both inter- and intramodal competition. As shown in figure 3, rates for wheat shipments from the Duluth, Minnesota, to the Chicago, Illinois, economic areas—a potential Great Lakes water competitive route—continued to be between 0.72 cents and just under 2 cents per ton-mile lower in 2000 than rates on other medium-distance routes we examined that potentially had fewer transportation alternatives (for example, shipments from Great Falls). In addition, as shown in figure 4, rail rates for corn shipments from the Chicago and Champaign, Illinois, economic areas to the New Orleans, Louisiana, economic area—potentially barge-competitive routes—were substantially lower (up to 1.7 cents per ton-mile in 2000) than rates on other medium-distance corn routes we examined that potentially had fewer transportation choices (for example, shipments from the northern plains states). Sensitivity to intramodal (railroad-to-railroad) competition also continued to be evident. For example, rail rates from 1997 through 2000 for wheat shipments originating in the Wichita, Kansas, and Oklahoma City, Oklahoma, economic areas were about 1.4 cents per ton-mile lower than rail rates for wheat shipments from the Great Falls economic area to the Portland, Oregon, economic area over the same period. The central plains area is considered to have more railroad competition than the northern plains area. Shipment size can also influence railroad costs and, therefore, rates. Loading more cars at one time increases efficiency and reduces a railroad’s costs.
From 1997 through 2000, the average shipment size for wheat continued to be higher in the central plains than in the northern plains. For example, the average shipment size for wheat from the Wichita economic area from 1997 through 2000 was about 88 railcars, as compared with about 43 railcars for wheat shipments from the Great Falls economic area. In both instances, the average shipment size increased in the 1997 through 2000 period as compared with the 1990 through 1996 period—by about 17 railcars for wheat shipments from the Wichita area (from about 71 railcars to about 88 railcars) and by about 5 railcars for wheat shipments from the Great Falls area (from about 38 railcars to about 43 railcars). As discussed above, rates in the central plains states were typically lower than those in the northern plains states for the routes we examined. Real rail rate changes for chemical and transportation equipment (motor vehicles and motor vehicle parts) shipments were mixed for the 1997 through 2000 period for the markets we reviewed—some rates fell while others stayed the same or increased. These trends can be seen in short-distance (500 miles or less) shipments of plastics. (See fig. 5.) Two of the more notable trends are shipments within the Beaumont, Texas, and Lake Charles, Louisiana, economic areas. In the Beaumont economic area, real rail rates increased from 42.6 cents per ton-mile in 1997 to 55.8 cents in 1998 before falling to 29.1 cents in 2000. In the Lake Charles economic area, rail rates increased from 25.9 cents per ton-mile in 1996 to 29.7 cents per ton-mile in 1997 before falling (by about 78 percent) to 6.5 cents per ton-mile in 1998. After increasing again in 1999, the rates decreased to 4.8 cents per ton-mile in 2000 on this route. Rates in the other markets generally stayed about the same or decreased.
While it is not clear why these rates changed the way they did, the changes came at the time (1997– 1998) of a severe service crisis in the Houston/Gulf Coast area. Board officials said that generally, in their view, it did not appear that railroads used rail rates to allocate resources during the service crisis; they suggested that the erratic nature of the year-by-year rate changes reported for certain of these intra-terminal movements (which, according to the board, tend to be small shipment sizes) may have been related to the heterogeneous nature of this chemicals traffic and to the low sampling rates for smaller shipment sizes—1 in 40 waybills for movements of 1 to 2 car shipments, and 1 in 12 waybills for 3 to 15 car shipments—in the stratified Carload Waybill Sample. Real rail rates for shipments of finished motor vehicles and motor vehicle parts or accessories also showed a variety of trends. In 1999, we reported that one of the more dramatic changes in rates was for the transportation of finished motor vehicles from Ontario, Canada, to Chicago. (See fig. 6.) The rates on this route decreased about 40 percent between 1990 and 1996. Since that time, the rates on this route have largely stabilized at about 12 cents per ton-mile, with a slight increase between 1997 and 2000. In general, rail rates for the transportation of motor vehicle parts or accessories on both long- and medium-distance routes decreased. The notable exception is rates for the transportation of motor vehicle parts or accessories between the Detroit, Michigan, and Dallas, Texas, economic areas. On this route, the rates generally increased from about 9 cents per ton-mile in 1997 to about 22 cents per ton-mile in 2000—about a 139 percent increase. Most traffic in motor vehicles and motor vehicle parts or accessories is either under contract or exempt from economic regulation. 
Use of contracts suggests that rate decreases may be related to price discounts offered in return for guaranteed volumes of business. However, board officials noted that in recent years, railroads have increasingly been offering motor vehicle manufacturers service packages in which railroads provide premium service for higher rates. This may account for rate increases on specific routes. Between 1997 and 2000, the proportion of all railroad revenue that came from shipments transported at rates that generated revenues exceeding 180 percent of variable costs stayed relatively constant at just under 30 percent. (See fig. 7.) This result is about 2 percentage points less than the average for the 1990–1996 period. In addition to being a jurisdictional threshold for the board to review the reasonableness of rates, revenue-to-variable cost ratios are sometimes used as indicators of shippers’ captivity to railroads. If used in this way, the higher the R/VC ratio, the more likely it is that a shipper can use only rail to meet its transportation needs. Individual commodity results differed markedly. In 2000, 62 percent of chemicals (which include potassium, sodium, and plastics) and 42 percent of coal were transported at rates generating revenues exceeding 180 percent of variable costs. However, only 17 percent of transportation equipment (which includes motor vehicles and motor vehicle parts or accessories) and 32 percent of farm products (which includes wheat and corn) were transported at rates above this level. Board officials suggested that the comparatively high and rising R/VC ratios for chemicals traffic are likely attributable in part to the fact that the railroads’ greater liability exposure associated with transporting hazardous materials is not reflected in the costs attributable to this traffic under the board’s rail costing system. Board officials told us that higher rail rates for transporting hazardous chemicals are reflected in higher revenues for a railroad.
However, additional costs incurred because of the higher liability exposure (such as court judgments against a company and set asides for future claims) are shown as special or extraordinary charges that do not become part of the variable costs of a movement in the board’s rail costing system. In contrast to the fairly constant overall proportion of goods shipped with revenues exceeding 180 percent, the results for four broad classes of commodities decreased or increased noticeably. For example, the proportion of farm products transported at above 180 percent R/VC increased from 23 percent to 32 percent from 1997 through 2000, following an increase from 1990 to 1994 (from 22 to 32 percent) and a decline from 1994 to 1996 (from 32 to 23 percent). The proportion of coal shipped above this ratio decreased from 50 percent to 42 percent from 1997 through 2000, continuing a gradual overall decrease from 1990. In some instances, the average R/VC ratios for the 1997–2000 period were considerably higher or lower than the average R/VC ratios for the 1990–1996 period. For example, the largest increase in average R/VC ratios for the routes that we reviewed was for medium-distance shipments of plastics from the Houston, Texas, economic area to the Little Rock, Arkansas, economic area. On this route, the average R/VC ratio increased by about 64 percentage points—from an average of 154 percent (1990–1996) to an average of 218 percent (1997–2000). The R/VC ratio on this route peaked at 250 percent in 1999. The R/VC ratio on this route was generally increasing while the rail rate was generally decreasing, suggesting that both rates and variable costs were decreasing and that railroads did not pass on all cost reductions to customers in the form of rate reductions.
In contrast, the largest decrease in average R/VC ratios for the routes we examined was about 116 percentage points, which occurred for motor vehicle shipments between the Chicago economic area and the Dallas economic area—from an average of 240 percent (1990–1996) to an average of 124 percent (1997–2000). Over this latter period, rail rates on this route decreased from about 8.7 cents per ton-mile in 1997 to about 8 cents per ton-mile in 2000. This suggests that variable costs increased during this period. The R/VC ratios we observed are consistent with railroads’ ability to use differential pricing, and they are sensitive to competition. For example, over the 1997–2000 period and the 1990–1996 period, the R/VC ratio for medium-distance shipments of wheat from the Great Falls economic area (a northern plains location) exceeded those for wheat shipments from the Wichita, Oklahoma City, and Duluth economic areas for the specified destinations. (See fig. 8.) There are fewer potentially competitive alternatives to rail in the northern plains states. In contrast, shipments originating in the central plains states (for example, from Wichita and Oklahoma City) are considered by some to have more alternatives to rail than in the northern plains. Duluth (a northern plains origin) offers a competitive alternative of transportation by water. The anomaly appears to be medium-distance wheat shipments originating in the Grand Forks, North Dakota, economic area (a northern plains origin) transported to the St. Louis, Missouri, economic area. The R/VC ratio for this route, although consistently above the R/VC ratio for shipments from the Duluth economic area (with potential water competition), was generally below that of Wichita and Oklahoma City (with potentially more rail competition) from 1997 through 2000.
This suggests that wheat shipments on this route may have been sensitive to barge competition from the Mississippi River or rail competition in the central plains states or the Midwest. The use of R/VC ratios has limitations. In particular, the ratios are subject to misinterpretation because they are simple divisions of revenues by variable costs. It is possible for rates paid by shippers to be dropping while the R/VC ratio is increasing—a seemingly contradictory result. For example, if revenues (which are the rates paid by shippers) are $2 and variable costs are $1, then the R/VC ratio is 200 percent. If costs decrease by 50 cents and railroads pass this cost decrease on to shippers by decreasing rates by 50 cents, the R/VC ratio becomes 300 percent. Therefore, by itself, the R/VC ratio could suggest that railroads are using their market power to make shippers worse off when this might not be the case. Board officials suggested that the R/VC ratio shown in figure 8 for the movement of wheat from the Great Falls economic area to the Portland economic area is one such instance of this. In this case, rail rates from Great Falls generally decreased over the 1997 through 2000 period from about 3.5 cents per ton-mile in 1997 to about 3.2 cents per ton-mile in 2000. Board officials said unit costs were also decreasing, in part, because of increases in shipment size and various carrier-specific productivity improvements related to the 1995 Burlington Northern Railroad merger with the Atchison Topeka & Santa Fe Railway Company. The R/VC ratio on this route increased from 240 percent in 1997 to 308 percent in 2000. Similarly, using the example above, if variable costs increase by 50 cents (from $1 to $1.50) and railroads increase their rates by the same amount (from $2 to $2.50), then the R/VC ratio becomes 167 percent. 
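These two worked scenarios follow directly from the arithmetic of the ratio. A minimal sketch using the report's illustrative dollar figures (not actual waybill data):

```python
def rvc_ratio(revenue: float, variable_cost: float) -> float:
    """Revenue-to-variable-cost (R/VC) ratio, expressed as a percentage."""
    return 100.0 * revenue / variable_cost

# Baseline from the example: $2 in revenue over $1 in variable cost.
baseline = rvc_ratio(2.00, 1.00)            # 200 percent

# Costs fall by 50 cents and the full saving is passed on as a rate cut:
# the shipper pays less, yet the ratio rises.
after_pass_through = rvc_ratio(1.50, 0.50)  # 300 percent

# Costs rise by 50 cents and rates rise by the same amount:
# the shipper pays more, yet the ratio falls.
after_matched_rise = rvc_ratio(2.50, 1.50)  # about 167 percent

# The 180-percent statutory threshold flags only some of these outcomes.
jurisdictional = [r > 180 for r in (baseline, after_pass_through, after_matched_rise)]
```

The sketch shows why the ratio by itself cannot distinguish a rate cut from a rate increase, and why the board pairs it with other analytical techniques.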
Again, the R/VC ratio alone would suggest that shippers are better off—because the R/VC ratio decreased from 200 percent—when this might not necessarily be the case. Although R/VC ratios have limitations, they can be useful indicators of railroad pricing and of whether railroads may be using their market power to set rates. As described previously, the R/VC ratio is a jurisdictional threshold for the Surface Transportation Board to consider rate relief cases. The board uses other analytical techniques to determine whether rates are reasonable. We provided a draft of this report to the Surface Transportation Board and the Department of Transportation for their review and comment. The board provided its comments in a meeting that included its general counsel and chief economist. In general, the board agreed with the material presented in our draft report and stated that it accurately portrayed rail rate trends over the period of our study. It said that the overall trend of declining rates that we found is consistent with studies and analyses prepared by the board. Board officials said that, while it can be difficult to identify with specificity the reasons why rail rates might change in the short run, especially rates for specific commodities over specific routes, the draft report did an admirable job in discussing factors that could influence rate changes. Among the specific comments made were (1) that low rail rates have allowed Western coal to penetrate Eastern coal markets and (2) that R/VC ratios for chemicals may not fully reflect the costs of increased liability exposure faced by railroads in transporting hazardous chemicals. We made changes to the report to reflect the board’s comments. The board offered additional clarifying, presentational, and technical comments that, with few exceptions, we incorporated into our report. 
The Department of Transportation, in oral comments made by the director, Office of Intermodal Planning and Economics, Federal Railroad Administration, said that the report fairly and accurately portrayed the changes in railroad freight rates over the study period, and that rail rates were responsive to market conditions and competition. The department suggested that our Results in Brief section should indicate that R/VC ratios cannot be relied upon as measures of railroad market power. We modified the Results in Brief to provide a fuller discussion of R/VC limitations. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 21 days after the date of this letter. At that time, we will send copies of this report to congressional committees with responsibilities for freight railroad competition issues; the administrator, Federal Railroad Administration; the chairman, Surface Transportation Board; and the director, Office of Management and Budget. We will also make copies available to others upon request. This report will also be available on our home page at http://www.gao.gov. If you or your staff have any questions about this report, please contact either James Ratzenberger at [email protected] or me at [email protected]. Alternatively, we may be reached at (202) 512-2834. Key contributors to this report were Stephen Brown, Richard Jorgenson, and James Ratzenberger. As in our 1999 report, we used the board’s Carload Waybill Sample to identify railroad rates from 1997 through 2000 (the latest data available at the time of our review), which we then analyzed to determine rate changes. The Carload Waybill Sample is a sample of railroad waybills (in general, documents prepared from bills of lading authorizing railroads to move shipments and collect freight charges) submitted by railroads annually.
We used these data to obtain information on rail rates for specific commodities in specific markets by shipment size and length of haul. According to board officials, revenues derived from the Carload Waybill Sample are not adjusted for such things as year-end rebates and refunds that may be provided by railroads to shippers who exceed certain volume commitments. Some railroad movements contained in the Carload Waybill Sample are governed by contracts between shippers and railroads. To avoid disclosure of confidential business information, the board disguises the revenues associated with these movements before making this information available to the public. Consistent with our statutory authority to obtain agency records, we obtained a version of the Carload Waybill Sample that did not disguise revenues associated with railroad movements made under contract. Therefore, the rate analysis presented in this report provides a truer picture of rail rate trends than analyses that may be based solely on publicly available information. Since much of the information contained in the Carload Waybill Sample is confidential, rail rates and other data contained in this report that were derived from this database have been aggregated at a level sufficient to protect this confidentiality. As in our 1999 report, we analyzed coal, grain (wheat and corn), chemicals (potassium and sodium compounds and plastic materials or synthetic fibers, resins, or rubber), and transportation equipment (finished motor vehicles and motor vehicle parts or accessories) shipments. These commodities represented about 52 percent of total industry revenue in 2000 and, in some cases, had a significant portion of their rail traffic transported on routes where the ratio of revenue to variable costs equaled or exceeded 180 percent. We used rate indexes and average rates on selected corridors to measure rate changes over time.
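The per-corridor rates discussed throughout are expressed in cents per ton-mile. A minimal sketch of the unit, using hypothetical shipment figures (not actual waybill data) chosen to land on the 2.2-cent rate cited for the Central Appalachia route:

```python
def cents_per_ton_mile(revenue_dollars: float, tons: float, miles: float) -> float:
    """Rail rate in cents per ton-mile: total revenue spread over the work performed."""
    return 100.0 * revenue_dollars / (tons * miles)

# Hypothetical coal movement: 11,000 tons hauled 800 miles for $193,600
# works out to 2.2 cents per ton-mile.
rate = cents_per_ton_mile(193_600, 11_000, 800)
```

Because the unit normalizes by both weight and distance, it allows rates on short and long hauls to be compared on a common footing.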
A rate index attempts to measure price changes over time by holding constant the underlying collection of items that are consumed (in the context of this report, items shipped). This approach differs from comparing average rates in each year because, over time, higher- or lower-priced items can constitute different shares of the items consumed. Comparing average rates can confuse changes in prices with changes in the composition of the goods consumed. In the context of railroad transportation, rail rates and revenues per ton-mile are influenced, among other things, by average length of haul. Therefore, comparing average rates over time can be influenced by changes in the mix of long- and short-haul traffic. Our rate indexes attempted to control for the distance factor by defining the underlying traffic collection to be commodity flows occurring in 2000 between pairs of census regions. To examine the rate trends on specific traffic corridors, we first chose a level of geographic aggregation for corridor endpoints. For grain, chemical, and transportation equipment traffic, we defined endpoints to be regional economic areas defined by the Department of Commerce’s Bureau of Economic Analysis. For coal traffic, we used economic areas to define destinations and used coal supply regions—developed by the Bureau of Mines and used by the Department of Energy—to define origins. An economic area is a collection of counties in and about a metropolitan area (or other center of economic activity); there are 172 economic areas in the United States, and each of the 3,141 counties in the country is contained in an economic area. As in our 1999 report, we placed each corridor in one of three distance-related categories: 0–500 miles, 501–1,000 miles, and more than 1,000 miles. Although these distance categories are somewhat arbitrary, they represent reasonable proxies for short-, medium-, and long-distance shipments by rail.
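The fixed-collection approach described above can be sketched as a base-weighted index. The corridor names, rates, and ton-mile weights below are hypothetical; the report's actual indexes were built from the confidential Carload Waybill Sample, with 2000 commodity flows between census-region pairs as the fixed collection:

```python
def fixed_weight_index(rates_by_year: dict, base_weights: dict, base_year: int) -> dict:
    """Index each year's corridor rates against a fixed base-year traffic mix,
    so the index reflects price changes rather than shifts in the mix of hauls."""
    def basket_cost(rates: dict) -> float:
        return sum(rates[corridor] * base_weights[corridor] for corridor in base_weights)

    base = basket_cost(rates_by_year[base_year])
    return {year: 100.0 * basket_cost(rates) / base
            for year, rates in rates_by_year.items()}

# Hypothetical cents-per-ton-mile rates on two corridors, weighted by the
# ton-miles each moved in the base year (the fixed traffic collection).
rates = {
    1997: {"short_haul": 4.0, "long_haul": 2.0},
    2000: {"short_haul": 3.8, "long_haul": 1.8},
}
weights = {"short_haul": 1_000_000, "long_haul": 3_000_000}
index = fixed_weight_index(rates, weights, base_year=2000)  # base year = 100
```

Comparing simple average rates across the two years would instead mix the price change with any shift between short- and long-haul traffic, which is exactly the distortion the fixed-weight design avoids.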
To address issues related to revenue-to-variable cost ratios we obtained data from the board identifying revenues, variable costs, and R/VC ratios for commodities shipped by rail at the two-digit Standard Transportation Commodity Code level. We used data from the Carload Waybill Sample to identify the specific revenues and variable costs and to compute R/VC ratios for the commodities and markets we examined. Using this information we then identified those commodities and markets whose R/VC ratios were consistently above or below the 180 percent R/VC level. We performed our work from December 2001 through May 2002, in accordance with generally accepted government auditing standards. The following are real (inflation-adjusted) rail rates for coal shipments in the various markets and distance categories we reviewed. The distance categories are as follows: short is 0 to 500 miles, medium is 501 to 1,000 miles, and long is greater than 1,000 miles. The following are real (inflation-adjusted) rail rates for wheat and corn shipments in the various markets and distance categories we reviewed. The distance categories are as follows: short is 0 to 500 miles, medium is 501 to 1,000 miles, and long is greater than 1,000 miles. The following are real (inflation-adjusted) rail rates for selected chemical and transportation equipment shipments in the various markets and distance categories we reviewed. The distance categories are as follows: short is 0 to 500 miles, medium is 501 to 1,000 miles, and long is greater than 1,000 miles.
The Railroad Revitalization and Regulatory Reform Act of 1976 and the Staggers Rail Act of 1980 gave freight railroads increased freedom to price their services according to market conditions. A number of shippers are concerned that freight railroads have used these pricing freedoms to unreasonably exercise their market power in setting rates for shippers with fewer alternatives to rail transportation. This report updates the rate information in GAO's 1999 report (RCED-99-93) using selected commodities and markets, both with and without effective competitive transportation alternatives. From 1997 through 2000, rail rates generally decreased, both nationwide and for many of the specific commodities and markets that GAO examined. However, rail rates for some commodities and distance categories--such as wheat moving long distances and coal moving short distances--have stayed about the same or increased. In other instances, such as wheat moving medium distances, rail rates stayed about the same or decreased. Overall, the proportion of rail shipments above the Surface Transportation Board's statutory jurisdictional threshold for considering rate relief actions--where railroad revenues for the shipment exceed 180 percent of variable costs--stayed relatively constant at 30 percent from 1997 through 2000. However, the proportion of shipments for which revenues exceeded variable costs by 180 percent varied, depending on commodity and markets.
Consular officers issued about 6.2 million nonimmigrant visas in 1996—an increase of approximately 16 percent over the number issued in 1992. The total budget for consular relations activities has also increased significantly in recent years. The budget grew from about $259 million in fiscal year 1992 to an estimated $470 million in fiscal year 1998. The State Department’s Bureau of Consular Affairs Program Plan for fiscal years 1998-99 (an annually updated planning document containing strategies for executing the Bureau’s mission) notes that the greatest demand for visas is in advanced developing countries such as Brazil and South Korea, among others. Table 1 shows the numbers of nonimmigrant visas issued at the top five nonimmigrant visa-issuing posts in fiscal year 1996. Foreign visitors traveling to the United States are a significant source of revenue for U.S. businesses. According to the Department of Commerce’s International Trade Administration Tourism Industries Office, foreign visitors spent close to $70 billion in the United States in 1996. The office’s figures indicate that Brazilian visitors spent over $2.6 billion in the United States, or more than $2,900 per visit, during the same period. In order to safeguard U.S. borders and control the entry of foreign visitors into the country, U.S. immigration laws require foreign visitors from most countries to have a visa to enter the United States. However, the United States currently waives the requirement for visitor visas for citizens of 26 countries considered to pose little risk for immigration and security purposes. According to a consular official, Brazil does not currently qualify for visa waivers primarily because the refusal rate for Brazilian visa applications exceeds the allowable limit of less than 2.5 percent in each of the previous 2 years and less than a 2 percent average over the previous 2 years.
The Department of State has primary responsibility abroad for administering U.S. immigration laws. Consular officers at overseas posts are responsible for providing expeditious visa processing for qualified applicants while preventing the entry of those that are a danger to U.S. security interests or are likely to remain in the United States illegally. State’s Bureau of Consular Affairs develops policies and manages programs needed to administer and support visa-processing operations at overseas posts and has direct responsibility for U.S.-based consular personnel. State’s geographic bureaus, which are organized along regional lines (such as the Bureau of Inter-American Affairs), have direct responsibility for the staffing and funding of overseas consular positions. The process for handling nonimmigrant visas varies among overseas posts. Among the methods used to serve visa applicants, posts (1) receive applicants on a “first-come, first-served” basis, (2) operate appointment systems to schedule specific dates and times for applying, (3) employ travel agencies to act as intermediaries between applicants and the consulate, and (4) use “drop boxes” for collecting certain types of visa applications. Individual posts may use one or various combinations of these approaches. In addition to submitting a written application and supporting documentation, an applicant must be interviewed by a consular officer, unless the interview is waived. Consular officers may request additional documentation to validate the applicant’s intention to return home or confirm that sufficient financial resources are available for the trip. Consular officers are also responsible for deterring the entry of aliens who may have links to terrorism, narcotics trafficking, or organized crime. Nine of the 26 consulates we reviewed, including the one in Sao Paulo, experienced backlogs in processing nonimmigrant visas to the United States in fiscal year 1997. 
The backlogs ranged from 8 to 52 days and occurred primarily during peak travel seasons for tourists. State does not systematically compile information on visa processing turnaround times at overseas posts nor has it established a time standard for processing visas. However, the Deputy Assistant Secretary for Visa Services indicated that a maximum wait of 1 week (5 business days) for an appointment to apply for a nonimmigrant visa is desirable. She also told us that an additional 1 or 2 days are generally needed to process the visa after the appointment occurs. Thus, we concluded that a maximum desirable total turnaround time for appointment system cases would generally be 7 business days. Since the total turnaround times for other processing methods are generally shorter than for appointment systems, we used 7 business days as a cutoff point beyond which we considered a backlog to exist for all processing methods. Although consulates often manage to process nonimmigrant visa applications within 7 business days during periods of low demand, turnaround times lengthen significantly at some consulates when demand is high. Peak periods generally occur during the summer months or winter holiday season. Of the nine posts that had peak-season backlogs exceeding 7 business days, four had turnaround times that were less than 15 business days and five had turnaround times that were 15 business days or more. These figures represent the highest turnaround times that posts reported among the various application methods that they use. Table 2 lists the total turnaround times for processing visas during peak periods at the five posts that had backlogs that were 15 business days or more in fiscal year 1997. At the consulate in Sao Paulo, Brazil, turnaround times varied depending on the visa processing method involved. 
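The cutoff reasoning above (at most 5 business days to get an appointment plus 1 to 2 days of processing, so 7 business days total) amounts to a simple classification. A sketch using the report's own severity grouping, with an illustrative function name:

```python
BACKLOG_CUTOFF = 7  # business days: up to 5 for an appointment + ~2 to process

def classify_turnaround(total_business_days):
    """Classify a post's total visa turnaround time against the cutoff."""
    if total_business_days <= BACKLOG_CUTOFF:
        return "no backlog"
    if total_business_days < 15:
        return "backlog: under 15 business days"
    return "backlog: 15 business days or more"

print(classify_turnaround(7))   # no backlog
print(classify_turnaround(20))  # backlog: 15 business days or more
```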
In fiscal year 1997, about 63 percent of the consulate’s nonimmigrant visa applications were submitted through travel agents, and about 27 percent were handled through the consulate’s appointment system. The remaining 10 percent were processed using other methods such as a “drop box.” Visa applications submitted through travel agents were subject to a total turnaround period of 10 business days during periods of high demand and less than 5 business days during periods of low demand. Turnaround times for those who requested an appointment to apply for a visa reached as long as 20 days during busy periods—twice the length we noted in our 1992 report on visa-processing backlogs. In nonpeak periods, the turnaround time for those who requested appointments was 9 business days. For fiscal year 1997, approximately 86,000 applicants used the consulate’s appointment system. Consulate officials told us that the turnaround time for applications received through the “drop-box” method is generally kept within 5 business days during both peak and nonpeak periods. State pointed out that, while the Sao Paulo consulate’s turnaround times have increased since 1992, the volume of nonimmigrant visa applications processed in Sao Paulo has also increased from 150,088 in fiscal year 1992 to 319,341 in fiscal year 1997. State reported that the Sao Paulo consulate processed an average of 1,250 nonimmigrant visas per day in fiscal year 1997. During the same period, the number of consular section foreign service officer positions increased from four to seven. In 1995, the Sao Paulo consulate established an appointment system to alleviate long lines outside the consulate that were causing complaints from neighbors and negative reports in the local press. The consulate also began employing appointment delays as a disincentive to applying in person and to encourage applicants to apply for visas through the consulate’s travel agency program—a technique that it considered to be more efficient. 
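The Sao Paulo workload figures above can be cross-checked with some quick arithmetic; the derived per-officer and per-day quantities below are our own illustration, not figures from the report:

```python
apps_1992, apps_1997 = 150_088, 319_341       # Sao Paulo nonimmigrant visa applications
officers_1992, officers_1997 = 4, 7           # consular section foreign service officers

growth = apps_1997 / apps_1992 - 1            # applications more than doubled (~+113%)
per_officer_1992 = apps_1992 / officers_1992  # ~37,500 applications per officer
per_officer_1997 = apps_1997 / officers_1997  # ~45,600 applications per officer
working_days = apps_1997 / 1_250              # ~255 days at the reported daily average

print(f"growth: {growth:.0%}, per officer: {per_officer_1997:,.0f}, days: {working_days:.0f}")
```

In other words, staffing grew more slowly than demand, so the per-officer workload rose even after three positions were added.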
As part of this approach, the consulate initiated a practice of not scheduling any appointments on Wednesdays, so that consular officers could concentrate on processing travel agency cases that day. Sao Paulo consular officials told us that this approach had been successful in reducing the length of applicant lines, increasing the use of the consulate’s travel agency program, and improving productivity. On the other hand, the total turnaround time increased for those applying for visas in person through the appointment system. According to the Consul General in Brasilia, the Sao Paulo consulate’s appointment system and its practice of closing to the public on Wednesdays unfairly penalizes applicants that apply in person. He said that the consulate should develop an approach that enables it to provide high levels of service for all application methods. Officials in State’s Bureau of Inter-American Affairs told us that the Brazil Desk received an average of one complaint per week from U.S. companies concerning difficulties that their Brazilian business associates were having in obtaining visas in Sao Paulo. The Consul General in Brasilia said that as many as 10 visa applicants from the Sao Paulo consular district underwent the inconvenience of traveling to and applying for visas in Brasilia each day rather than in Sao Paulo because they had encountered delays and other difficulties in Sao Paulo. He added that an additional unknown number travel to the consulate in Rio de Janeiro each day or simply elect not to travel to the United States at all. Representatives of the travel industry in Brazil told us that, while there have been substantial improvements in reducing visa backlogs and long lines at the Sao Paulo consulate in recent years, they still receive complaints about the length of time that it takes to obtain a U.S. visa in Sao Paulo. 
A representative of the American Chamber of Commerce in Brazil agreed that there had been improvements in recent years but said that the process remains particularly troublesome for Brazilian business executives who sometimes need to obtain visas on an emergency basis for unexpected business trips to the United States. Consular officers face a number of obstacles to providing expeditious service in processing visas. Inadequate consular staffing at overseas posts and other staffing-related issues were identified as barriers to timely processing of visas by the majority of posts that we reviewed. Other impediments to efficient processing include inadequate computer systems, equipment, and consular facilities. Increased attention devoted to preventing suspect applicants from entering the United States has also led to delays. Similar to what we reported in 1992, consular personnel cited staffing problems as some of the most persistent barriers to processing visas efficiently. Nineteen of the 26 consulates we reviewed reported staffing problems, such as staffing gaps due to transfers of foreign service officers during peak periods or inadequate permanent staffing positions. Of particular concern were staffing gaps that occurred during peak seasons. Since the summer months are among the busiest periods for processing nonimmigrant visas at many posts, consular sections should be operating at full capacity during these periods. However, according to consular officials, they often are not because State’s annual personnel reassignments take place then. A consular official in Bogota told us that the lengthy wait for appointments there was due in large part to extended staffing gaps. 
Officials in the Bureau of Consular Affairs said that State’s system of mass employee transfers during the summer months is intended to promote fairness in the assignment bidding process and convenience for officers with school-age children, even though it does not result in optimal staff coverage during peak periods. Some consulates reported that, even when all of their authorized positions are filled, staffing levels are inadequate, particularly at posts that have experienced significant increases in visa demand. Figure 1 depicts overseas foreign service officer staffing for visa services and nonimmigrant visa work load trends from fiscal years 1993 through 1996. According to a senior consular official, the hiring of junior officers—the primary source of consular staff support—has not kept pace with foreign service officer attrition over the last several years. This has resulted in staffing shortages in consular sections at many overseas posts. The Bureau of Consular Affairs Program Plan for fiscal years 1998-99 stated that the shortage of consular officers had seriously undermined efforts to meet the increasing demand for consular services. Another staffing issue that consular officials raised concerned State’s process for allocating staff at overseas posts. The Bureau of Consular Affairs does not control assignments of consular positions at overseas posts; rather, State’s geographic bureaus are in charge of these positions. Consular officials said that this arrangement causes delays in reallocating positions to correspond with shifting work loads at various posts. Such reallocations are particularly troublesome when they involve moving positions from one geographic bureau to another. For example, if a U.S. 
consulate in a Latin American country encountered a significant increase in consular work load while a consulate in East Asia experienced a corresponding decline, the Bureau of Consular Affairs would not have the authority to shift one or more consular positions from one consulate to the other. Rather, it would have to convince the Bureau of East Asian and Pacific Affairs to relinquish the positions and the associated funding, while persuading the Bureau of Inter-American Affairs to accept them. A senior consular official told us that the Bureau of Consular Affairs had recently proposed to the Under Secretary for Management that the Bureau be given greater control over the staffing and funding of overseas consular positions. The official said that the Under Secretary for Management is still considering the proposal. With regard to the adequacy of staffing in Sao Paulo in particular, consulate officials there told us that consular section staffing is insufficient to meet the high demand for nonimmigrant visas. The officials said that, due to transfers of foreign service officers and other factors, the unit had been staffed with a full contingent of authorized positions for only 6 months in the last 2 years. In addition, even when the section is fully staffed, the number of authorized positions is inadequate. At the time of our recent visit to Sao Paulo, the nonimmigrant visa section had seven foreign service officer positions, one of which was vacant. The unit also had 19 foreign national employee positions, including a receptionist, and 4 U.S. family member positions, 1 of which was vacant. Consular section officials said that, to reduce visa backlogs to within 7 working days, they would need two additional foreign service officers, five additional foreign national employees, and two additional U.S. family member employees. The Sao Paulo consular section sometimes employs additional U.S. 
family members to provide assistance on a temporary basis but has experienced problems securing such staff in time to optimize their help during peak periods. Consulate officials told us that the complexities of the various funding and hiring mechanisms for obtaining temporary staff make it difficult to quickly hire them. The officials added that the low salaries for family member staff also make it hard to attract applicants among the few eligible family members at the post. According to a senior consular official, there are no current plans to address staffing shortages specifically at the consulate in Sao Paulo. The official said that State has staffing shortages worldwide and that it plans to hire new foreign service officers to help deal with the shortages. Sao Paulo’s permanent position staffing needs will be considered along with the needs of other posts as part of the normal resource allocation process. The official added that State has also taken measures to temporarily fill peak season staffing gaps in overseas consular sections. Consular officials pointed to inadequate computer and other equipment as further barriers to efficient visa processing. Fourteen of the 26 consulates we reviewed reported to us that they had such problems. One consulate noted that the vast majority of delays in processing visas were caused by computer equipment and systems failures. Another consulate reported in its “consular package” (an annual report to the Bureau of Consular Affairs on each post’s consular operations) that frequent and prolonged breakdowns in the system for performing name checks on visa applicants had hindered visa processing during the peak summer season. Consular officials told us that there is a need for additional and better auxiliary equipment such as high-capacity fax machines and telephone answering machines. Inadequate physical facilities also impede efficient visa processing at some consulates—a problem noted in our 1992 report as well.
Thirteen of the 26 consulates we reviewed identified poor work space or inadequate physical structures as a major impediment to efficient processing. For example, Sao Paulo consular officials said that inadequate space limited their options for dealing with increased demand for visas. To illustrate this problem, the consulate had been able to offer a relatively short turnaround time for former visa holders who dropped off their applications for renewal near the entrance to the consulate grounds; there, a foreign national employee provided information, determined whether the applicant qualified for this method, and checked the applications for completeness. However, there is insufficient physical space to expand the use of this method at this location. Consulate officials told us that they could explore the use of an offsite location for collecting “drop-box” applications. As a result of heightened concerns about terrorism and illegal immigration in recent years, the U.S. government launched a number of initiatives to strengthen U.S. border security. These efforts included financing new technology for providing consular officers with comprehensive information on persons who may represent a threat to U.S. security. Consular officials noted that, although the enhanced systems helped bolster border security, they sometimes resulted in increased visa-processing times. For example, name-check systems now identify many more applicants as potential suspects; therefore, consular officers must take additional time to review these cases in determining eligibility for visas. Achieving an appropriate balance between the competing objectives of facilitating the travel of eligible foreign nationals to the United States and preventing the travel of those considered ineligible poses a difficult challenge for consular officers. 
Consular officers told us that a renewed emphasis on holding them personally accountable for visa decisions on suspect applicants had led to greater cautiousness and an increase in the number of requests for security advisories from Washington. As a result, while same-day processing of visas used to be commonplace, consular officials told us that greater requirements related to border security had made same-day service more the exception than the rule. State has made a number of changes in an effort to improve its visa-processing operations in recent years, and some of these initiatives could help in overcoming barriers to timely visa issuance. It has devised methods for handling staffing problems and developed a model to better plan for future resource needs at consulates abroad. State has improved computer and telecommunications systems and has other equipment upgrades underway, some of which will help address visa-processing problems. In addition, State has undertaken an initiative to identify and implement better work load management practices for visa processing at overseas posts. However, State has yet to define and integrate time standards as part of its strategy to improve the processing of nonimmigrant visas. Establishing such standards could help in identifying visa-processing backlogs, better equipping State to determine the corrective measures and resources needed. According to a senior consular official, State plans to hire over 200 new foreign service officers in fiscal year 1998 to help solve staffing shortages created by gaps between hiring and attrition levels in recent years. State has also begun experimenting with a number of approaches to fill peak-season staffing gaps at overseas consular sections. 
For example, the Bureau of Consular Affairs recently established a cooperative program with American University, located in Washington, D.C., to hire and train university students to work in consular positions in Washington, thus allowing the consular personnel that hold these positions to temporarily fill summer staffing gaps overseas. The Bureau also recruits retired foreign service officers to fill overseas consular staffing gaps on a temporary basis and is developing a “consular fellows” pilot program to fill vacant entry-level consular positions. The fellows program involves hiring temporary employees with foreign language skills to serve as consular staff on a short-term basis. State has also expanded the use of temporary employment of U.S. foreign service family members at overseas posts in recent years. Family members often perform administrative and procedural tasks in support of consular officers. Officials at one post told us that extended staffing gaps and shortages had caused them to rely on family member employees to perform a wider range of duties than they had in the past. The officials said that doing so enabled the post to keep its nonimmigrant visa-processing turnaround time under 7 business days. State has developed a consular staffing model based on visa work load and related information that it plans to use to help determine adequate consular staffing and to help identify personnel from surplus areas that could be moved to understaffed ones. The current model does not include foreign national employees—an important element of overall consular staffing at overseas posts. Also, according to one consular official, the model may be based on outdated data that does not take into account the increased visa demand and other changes in some countries. State is refining and updating the model to address these limitations and to factor in the impact of other visa-processing improvement efforts. 
State made major investments in computer and telecommunications infrastructure in recent years and has other equipment upgrades under way for overseas posts that issue visas. For example, every visa-issuing post now has a machine-readable visa system and automated name-check capability. State has also begun installing second generation upgrades to the machine-readable visa system at posts. State plans to install the necessary hardware and software to run this upgraded system at 100 posts in fiscal year 1998 and to have the system in all visa-issuing posts by the end of fiscal year 1999. The equipment upgrades have resulted in significant improvements in some aspects of visa processing. For example, improvements in some backup systems for name checks now allow visa processing to continue when on-line connections with Washington are not operating. In the past, such disruptions resulted in significant delays in processing visas. More importantly, according to consular officials, the upgrades have resulted in better and more comprehensive information about applicants who might pose a security threat, thus contributing to higher quality decision-making with respect to visa applications. In an effort to identify and implement better work load management practices for visa processing, State established a Consular Workload Management Group in November 1996. Although the effort is still ongoing, the group has already identified a number of practices. Among them were the following:

Recorded General Information. This system allows the applicant to get information about the application process without tying up staff resources. A 900-type telephone number, in which the user pays the cost of a call, can be established for this purpose.

An Appointment System. An appointment system can reduce the applicant’s waiting time in line and enable the post to control its work load by specifying the number of applicants who can be seen in a given day. Such a system allows an applicant to schedule an interview at a specific date and time.

Prescreening. This procedure requires an employee to ask an applicant a few questions and to quickly determine whether the applicant is clearly eligible to receive a visa or whether the applicant must be interviewed by an officer.

Noncashier Fee Collection. This process allows applicants to pay the machine-readable visa fee at a bank or other financial institution. The applicant then presents the fee payment receipt when processing the application, thus eliminating the need for a cashier at the post to handle the fee transaction.

Travel Agency/Corporate Referral Program. This practice allows posts to designate selected travel agencies and large companies to perform some initial processing of nonimmigrant visa applicants who meet certain criteria. Agencies and companies are trained to ensure that applicants’ documents are in order and are frequently asked to enter pertinent data on the application form. In some cases, agencies and companies forward information to the post electronically, usually via computer diskette.

Other practices identified include public information campaigns urging applicants to apply well in advance of their intended travel dates and the use of color-coded boxes to simplify the return of passports on particular days. Some of the practices identified are easy to implement, such as color coding; others are more complex, such as establishing noncashier fee collection systems. The willingness and ability to implement these practices vary by post. According to consular officials, State is currently in the process of identifying posts that are already employing these practices. It is important to note that, while some of these practices can aid in better managing consular work loads, the use of such tools does not guarantee a reduction in visa-processing times.
In some cases, these techniques may actually contribute to backlogs, depending on how they are managed. One of the most controversial tools in this respect is the appointment system. According to some consular officials, posts inevitably schedule fewer appointments per day than the number of applicants, causing backlogs and public relations problems. Consular management must deal with increased phone calls and requests for emergency processing when the wait for an appointment becomes unreasonably long. All nine of the surveyed posts that had peak-season backlogs in fiscal year 1997, including the consulate in Sao Paulo, used appointment systems. On the other hand, some high-volume posts that did not use appointment systems managed to keep the total turnaround time for processing visas under 7 business days, even in periods of very high demand. For example, in Rio de Janeiro, the total turnaround time for processing “walk-in” nonimmigrant visa applications was 2 days during peak and nonpeak seasons. The post in Mexico City issued visas the same day that applicants walked in, whether in peak or nonpeak seasons; however, a post official told us that applicants often have to wait for several hours in line. According to the Deputy Assistant Secretary for Visa Services, State does not systematically compile information on visa processing turnaround times at overseas posts nor has it established formal timeliness standards for visa processing. State’s consular guidance makes references to the importance of minimizing waiting time and return visits for visa applicants but does not specifically address total turnaround time. On the other hand, State has timeliness standards for issuing passports to U.S. citizens within 25 days after receiving the application. The usefulness of such standards in helping to manage for results is now widely recognized. Some consulates continue to experience backlogs in processing nonimmigrant visas.
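The appointment-system dynamic described above, in which posts schedule fewer slots per day than the number of new applicants, compounds into an ever-longer wait. A minimal simulation sketch, with illustrative demand and capacity numbers rather than figures from the report:

```python
def appointment_wait(daily_demand, daily_slots, days):
    """Simulate how the appointment wait (carried-over work, measured in
    slot-days) grows when daily demand exceeds the slots a post schedules."""
    pending = 0
    waits = []
    for _ in range(days):
        pending = max(0, pending + daily_demand - daily_slots)
        waits.append(pending / daily_slots)
    return waits

# 1,300 requests a day against 1,250 slots: the wait grows by a full day
# every 25 calendar days, so a long peak season produces a real backlog.
waits = appointment_wait(daily_demand=1_300, daily_slots=1_250, days=50)
print(round(waits[-1], 1))  # 2.0
```

Even a small, sustained shortfall in scheduled slots accumulates; the same model shows the wait holding at zero whenever capacity meets or exceeds demand.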
Although State has taken a number of actions to improve its visa-processing operations, it has not made a systematic effort to identify and address visa-processing backlogs on a global basis. We believe that State’s improvement efforts need to be guided by formal timeliness standards for issuing nonimmigrant visas. Establishing such standards could assist in identifying backlogs, putting State in a better position to determine the resources and actions needed to correct them. Timeliness standards could also help State’s efforts to implement better work load management practices and to improve long-range planning for staffing and other resource needs. To determine the appropriate level and mix of resources needed and to take full advantage of ongoing efforts to improve visa operations, we recommend that the Secretary of State develop timeliness standards for processing nonimmigrant visas. In its written comments on a draft of this report, State said that the report was a balanced and informative account of the problems faced by consular posts abroad. While State did not directly disagree with the report’s recommendation that it develop timeliness standards for processing nonimmigrant visas, State indicated that setting and meeting such standards should be linked to the adequacy of resources. State also expressed concern that timeliness standards might be overemphasized to the detriment of border security goals. State said that imposing rigid standards could adversely affect consular officers’ thoroughness in scrutinizing visa applicants. We agree that setting and meeting timeliness standards should be linked to the adequacy of resources. In fact, we believe that such standards could assist in identifying backlogs, and therefore put State in a better position to determine the level of resources needed to achieve desired levels of both service and security. They could also help State to better manage its resources. 
We recognize the importance of maintaining quality in the adjudication of visas and believe this element should be built into any timeliness standards or implementing regulations. We also note that some of State’s overseas posts have already established their own timeliness standards for processing nonimmigrant visas and have managed to meet them, even though some of these posts are located in areas considered to be at high risk for visa fraud. We are sending copies of this report to the Secretary of State and interested congressional committees. We will also make copies available to others upon request. Please contact me at (202) 512-4128 if you or any of your staff have any questions concerning this report. The major contributors to this report are listed in appendix III.
Pursuant to a congressional request, GAO reviewed how Department of State consulates process visas for visitors (nonimmigrants) to the United States, focusing on the: (1) extent and nature of visa processing backlogs in Sao Paulo, Brazil, and at other consulates; (2) factors affecting consulates' ability to process nonimmigrant visas in a timely manner; and (3) activities planned or under way to improve nonimmigrant visa processing. GAO noted that: (1) visa processing backlogs are a problem for some consulates, including the one in Sao Paulo; (2) the visa backlogs at the consulates GAO reviewed varied widely, ranging from 8 to 52 days; (3) the longest delays occurred during peak travel periods such as the summer months and winter holiday season; (4) factors that affected consulates' ability to process nonimmigrant visas in a timely manner included inadequate consular staffing and other staffing-related issues as well as inadequate computer systems, facilities, and other equipment; (5) an increased emphasis on preventing the entry of illegal immigrants, terrorists, and other criminals also contributed to delays; (6) State has initiatives under way to address staffing problems, upgrade equipment, and identify and implement practices that could improve visa processing at overseas posts; and (7) however, it does not systematically gather data on visa processing turnaround times and has not yet set specific timeliness standards to help guide its improvement program.
Without proper safeguards, computer systems are vulnerable to individuals and groups with malicious intentions who can intrude and use their access to obtain and manipulate sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. Concerns about the risks to federal systems are well-founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. Recognizing the importance of securing federal systems and data, Congress passed FISMA in 2002. The act sets forth a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. FISMA’s framework creates a cycle of risk management activities necessary for an effective security program; these activities are similar to the principles noted in our study of the risk management activities of leading private-sector organizations—assessing risk, establishing a central management focal point, implementing appropriate policies and procedures, promoting awareness, and monitoring and evaluating policy and control effectiveness. In order to ensure the implementation of this framework, the act assigns specific responsibilities to agency heads, chief information officers, inspectors general, and NIST. It also assigns responsibilities to OMB that include developing and overseeing the implementation of policies, principles, standards, and guidelines on information security, and reviewing agency information security programs, at least annually, and approving or disapproving them.
FISMA requires each agency, including agencies with national security systems, to develop, document, and implement an agencywide information security program to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. Specifically, FISMA requires information security programs to include, among other things: periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems; risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system; subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate; security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, and practices of the agency; procedures for detecting, reporting, and responding to security incidents; plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. 
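The required program elements enumerated above form, in effect, a checklist; the sketch below shows one way an assessor might track coverage (the element names paraphrase the act and are illustrative, not official labels):

```python
# Illustrative checklist of the FISMA-required program elements listed above.
FISMA_PROGRAM_ELEMENTS = [
    "periodic risk assessments",
    "risk-based policies and procedures",
    "subordinate security plans",
    "security awareness training",
    "testing and evaluation (at least annually)",
    "remedial action process",
    "incident detection, reporting, and response",
    "continuity of operations plans",
]

def missing_elements(documented):
    """Return the required elements an agency program has not yet documented.

    documented: iterable of element names the program already covers.
    """
    covered = set(documented)
    return [e for e in FISMA_PROGRAM_ELEMENTS if e not in covered]

print(missing_elements(FISMA_PROGRAM_ELEMENTS))  # [] -- fully covered
```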
In addition, agencies must produce an annually updated inventory of major information systems (including major national security systems) operated by the agency or under its control, which includes an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency. FISMA also requires each agency to report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of its information security policies, procedures, practices, and compliance with requirements. In addition, agency heads are required to report annually the results of their independent evaluations to OMB, except to the extent that an evaluation pertains to a national security system; then only a summary and assessment of that portion of the evaluation needs to be reported to OMB. Under FISMA, NIST is tasked with developing, for systems other than national security systems, standards and guidelines that must include, at a minimum (1) standards to be used by all agencies to categorize all their information and information systems based on the objectives of providing appropriate levels of information security, according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. NIST must also develop a definition of and guidelines for detection and handling of information security incidents as well as guidelines developed in conjunction with the Department of Defense and the National Security Agency for identifying an information system as a national security system. 
The law also assigns other information security functions to NIST, including: providing technical assistance to agencies on elements such as compliance with the standards and guidelines and the detection and handling of information security incidents; evaluating private-sector information security policies and practices and commercially available information technologies to assess potential application by agencies; evaluating security policies and practices developed for national security systems to assess their potential application by agencies; and conducting research, as needed, to determine the nature and extent of information security vulnerabilities and techniques for providing cost-effective information security. As required by FISMA, NIST has prepared its annual public report on activities undertaken in the previous year and planned for the coming year. In addition, NIST’s FISMA initiative supports the development of a program for credentialing public and private sector organizations to provide security assessment services for federal agencies. Under FISMA, the inspector general for each agency shall perform an independent annual evaluation of the agency’s information security program and practices. The evaluation should include testing of the effectiveness of information security policies, procedures, and practices of a representative subset of agency systems. In addition, the evaluation must include an assessment of the compliance with the act and any related information security policies, procedures, standards, and guidelines. For agencies without an inspector general, evaluations of non-national security systems must be performed by an independent external auditor. Evaluations related to national security systems are to be performed by an entity designated by the agency head.
FISMA states that the Director of OMB shall oversee agency information security policies and practices, including: developing and overseeing the implementation of policies, principles, standards, and guidelines on information security; requiring agencies to identify and provide information security protections commensurate with risk and magnitude of the harm resulting from the unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of an agency, or information systems used or operated by an agency, or by a contractor of an agency, or other organization on behalf of an agency; overseeing agency compliance with FISMA to enforce accountability; and reviewing at least annually, and approving or disapproving, agency information security programs. In addition, the act requires that OMB report to Congress no later than March 1 of each year on agency compliance with FISMA. Significant weaknesses in information security policies and practices threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of most federal agencies. These persistent weaknesses expose sensitive data to significant risk, as illustrated by recent incidents at various agencies. Further, our work and reviews by inspectors general note significant information security control deficiencies that place a broad array of federal operations and assets at risk. Consequently, we have made hundreds of recommendations to agencies to address these security control deficiencies. Since our report in July 2007, federal agencies have reported a spate of security incidents that have put sensitive data at risk, thereby exposing the personal information of millions of Americans to the loss of privacy and potential harm associated with identity theft. 
Agencies have experienced a wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. The following examples, reported in 2008 and 2009, illustrate that a broad array of federal information and assets remain at risk. In May 2009, the Department of Transportation Inspector General issued the results of an audit of Web applications security and intrusion detection in air traffic control systems at the Federal Aviation Administration (FAA). The inspector general reported that Web applications used in supporting air traffic control systems operations were not properly secured to prevent attacks or unauthorized access. To illustrate, vulnerabilities found in Web application computers associated with the Traffic Flow Management Infrastructure System, Juneau Aviation Weather System, and the Albuquerque Air Traffic Control Tower allowed audit staff to gain unauthorized access to data stored on these computers, including program source code and sensitive personally identifiable information. In addition, the inspector general reported that it found a vulnerability on FAA Web applications that could allow attackers to execute malicious codes on FAA users’ computers, which was similar to an actual incident that occurred in August 2008. In February 2009, the FAA notified employees that an agency computer had been illegally accessed and employee personal identity information had been stolen electronically. Two of the 48 files on the breached computer server contained personal information about more than 45,000 FAA employees and retirees who were on the FAA payrolls as of the first week of February 2006. Law enforcement agencies were notified and are investigating the data theft. In March 2009, U.S. Congressman Jason Altmire and U.S. 
Senator Bob Casey announced that they had sent a letter to the Under Secretary of Defense for Acquisition, Technology, and Logistics, asking for additional information on a recent security breach of the presidential helicopter, Marine One. According to the announcement, in February 2009, a company based in Cranberry, Pennsylvania, discovered that engineering and communications documents containing key details about the Marine One fleet had been downloaded to an Internet Protocol (IP) address in Iran. The documents were traced back to a defense contractor in Maryland, where an employee most likely downloaded a file-sharing program that inadvertently allowed others to access this information. According to information from the Congressman’s Web site, recent reports have said that the federal government was warned last June that an Internet Web site with an IP address traced to Iran was actively seeking this information. In March 2009, the United States Computer Emergency Readiness Team (US-CERT) issued an updated notice to warn agencies and organizations of the Conficker/Downadup worm activity and to help prevent further compromises from occurring. In the notice, US-CERT warned that the Conficker/Downadup worm could infect a Microsoft Windows system from a thumb drive, a network share, or directly across a network if the host is not patched. According to a March 2009 media release from Senator Bill Nelson’s office, cyber-invaders thought to be in China hacked into the computer network in Senator Nelson’s office. There were two attacks on the same day in March 2009, and another one in February 2009 that targeted work stations used by three of Senator Nelson’s staffers. The hackers were not able to take any classified information because that information is not kept on office computers, a spokesman said. The media release stated that similar incursions into computer networks in Congress were up significantly in the past few months.
The Department of Energy’s Office of Health, Safety, and Security announced that a password-protected compact disk (CD) had been lost during a routine shipment on January 28, 2009. The CD contained personally identifiable information for 59,617 individuals who currently work or formerly worked at facilities at the Department of Energy’s Idaho site. The investigation verified that protection measures had been applied in accordance with requirements applicable to organizations working under cooperative agreements and found that, although the CD had been lost for 8 weeks at the time of the investigation, there was no evidence that the personal information on the lost disk had been compromised. The investigation concluded that OMB and Department of Energy requirements for managing and reporting the loss of the information had not been transmitted to the appropriate organizations and that there was a failure to provide timely notifications of the actual or suspected loss of information in this incident. In January 2009, the Program Director of the Office of Personnel Management’s USAJOBS Web site announced that its technology provider’s (Monster.com) database had been illegally accessed and contact and account data had been taken, including user IDs and passwords, e-mail addresses, names, phone numbers, and some basic demographic data. The director pointed out that e-mail could be used for phishing activity and advised users to change their site login password. In December 2008, the Federal Emergency Management Agency was alerted to an unauthorized breach of private information when an applicant notified it that his personal information pertaining to Hurricane Katrina had been posted on the Internet.
The information posted to Web sites contained a spreadsheet with 16,857 lines of data that included applicant names, social security numbers, addresses, telephone numbers, e-mail addresses, and other information on disaster applicants who had evacuated to Texas. According to the Federal Emergency Management Agency, it took action to work with the Web site hosting the private information and have that information removed from public view. Additionally, the agency reported that it worked to remove the same information from a second Web site. Further, the agency stated that while it believed most of the applicant information posted on the Web sites had been properly released to a state agency, it did not authorize the subsequent public posting of much of this data. In June 2008, the Walter Reed Army Medical Center reported that officials were investigating the possible disclosure of personally identifiable information through unauthorized sharing of a data file containing the names of approximately 1,000 Military Health System beneficiaries. Walter Reed officials were notified of the possible exposure on May 21 by an outside company. Preliminary results of an ongoing investigation identified a computer from which the data had apparently been compromised. Data security personnel from Walter Reed and the Department of the Army think it is possible that individuals named in the file could become victims of identity theft. The compromised data file did not include protected health information such as medical records, diagnosis, or prognosis for patients. In March 2008, media reports surfaced noting that the passport files of three U.S. senators, who were also presidential candidates, had been improperly accessed by Department of State employees and contractor staff. As of April 2008, the system contained records on about 192 million passports for about 127 million passport holders.
These records included personally identifiable information, such as the applicant’s name, gender, social security number, date and place of birth, and passport number. In July 2008, after investigating this incident, the Department of State’s Office of Inspector General reported many control weaknesses—including a general lack of policies, procedures, guidance, and training—relating to the prevention and detection of unauthorized access to passport and applicant information and the subsequent response and disciplinary processes when a potential unauthorized access is substantiated. When incidents occur, agencies are to notify the federal information security incident center—US-CERT. As shown in figure 1, the number of incidents reported by federal agencies to US-CERT has risen dramatically over the past 3 years, increasing from 5,503 incidents reported in fiscal year 2006 to 16,843 incidents in fiscal year 2008 (an increase of slightly more than 200 percent). Agencies report the following types of incidents based on US-CERT-defined categories: Unauthorized access: Gaining logical or physical access without permission to a federal agency’s network, system, application, data, or other resource. Denial of service: Preventing or impairing the normal authorized functionality of networks, systems, or applications by exhausting resources. This activity includes being the victim of or participating in a denial of service attack. Malicious code: Installing malicious software (e.g., virus, worm, Trojan horse, or other code-based malicious entity) that infects an operating system or application. Agencies are not required to report malicious logic that has been successfully quarantined by antivirus software. Improper usage: Violating acceptable computing use policies. Scans/probes/attempted access: Accessing or identifying a federal agency computer, open ports, protocols, service, or any combination of these for later exploit.
This activity does not directly result in a compromise or denial of service. Under investigation: Investigating unconfirmed incidents that are potentially malicious, or anomalous activity deemed by the reporting entity to warrant further review. As noted in figure 2, the three most prevalent types of incidents reported to US-CERT during fiscal years 2006 through 2008 were unauthorized access, improper usage, and investigation. Reviews at federal agencies continue to highlight deficiencies in their implementation of security policies and procedures. In their fiscal year 2008 performance and accountability reports, 20 of the 24 agencies indicated that inadequate information security controls were either a material weakness or a significant deficiency (see fig. 3). Similarly, in annual reports required under 31 U.S.C. § 3512 (commonly referred to as the Federal Managers’ Financial Integrity Act of 1982), 11 of 24 agencies identified material weaknesses in information security. Inspectors general have also noted weaknesses in information security, with 22 of 24 identifying it as a “major management challenge” for their agency. Similarly, our audits have identified control deficiencies in both financial and nonfinancial systems, including vulnerabilities in critical federal systems. For example: In 2009, we reported that security weaknesses at the Securities and Exchange Commission continued to jeopardize the confidentiality, integrity, and availability of the commission’s financial and sensitive information and information systems. Although the commission had made progress in correcting previously reported information security control weaknesses, it had not completed action to correct 16 weaknesses. In addition, we identified 23 new weaknesses in controls intended to restrict access to data and systems. Thus, the commission had not fully implemented effective controls to prevent, limit, or detect unauthorized access to computing resources.
For example, it had not always (1) consistently enforced strong controls for identifying and authenticating users, (2) sufficiently restricted user access to systems, (3) encrypted network services, (4) audited and monitored security-relevant events for its databases, and (5) physically protected its computer resources. The Securities and Exchange Commission also had not consistently ensured appropriate segregation of incompatible duties or adequately managed the configuration of its financial information systems. As a result, the Securities and Exchange Commission was at increased risk of unauthorized access to and disclosure, modification, or destruction of its financial information, as well as inadvertent or deliberate disruption of its financial systems, operations, and services. The Securities and Exchange Commission agreed with our recommendations and stated that it plans to address the identified weaknesses. In 2009, we reported that the Internal Revenue Service had made progress toward correcting prior information security weaknesses, but continued to have weaknesses that could jeopardize the confidentiality, integrity, and availability of financial and sensitive taxpayer information. These deficiencies included some related to controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities, as well as a control important in mitigating software vulnerability risks. For example, the agency continued to, among other things, allow sensitive information, including IDs and passwords for mission-critical applications, to be readily available to any user on its internal network and to grant excessive access to individuals who do not need it. In addition, the Internal Revenue Service had systems running unsupported software that could not be patched against known vulnerabilities. 
Until those weaknesses are corrected, the Internal Revenue Service remains vulnerable to insider threats and is at increased risk of unauthorized access to and disclosure, modification, or destruction of financial and taxpayer information, as well as inadvertent or deliberate disruption of system operations and services. The IRS agreed to develop a plan addressing each of our recommendations. In 2008, we reported that although the Los Alamos National Laboratory—one of the nation’s weapons laboratories—implemented measures to enhance the information security of its unclassified network, vulnerabilities continued to exist in several critical areas, including (1) identifying and authenticating users of the network, (2) encrypting sensitive information, (3) monitoring and auditing compliance with security policies, (4) controlling and documenting changes to a computer system’s hardware and software, and (5) restricting physical access to computing resources. As a result, sensitive information on the network—including unclassified controlled nuclear information, naval nuclear propulsion information, export control information, and personally identifiable information—was exposed to an unnecessary risk of compromise. Moreover, the risk was heightened because about 300 (or 44 percent) of 688 foreign nationals who had access to the unclassified network as of May 2008 were from countries classified as sensitive by the Department of Energy, such as China, India, and Russia. While the organization did not specifically comment on our recommendations, it agreed with the conclusions. In 2008, we reported that the Tennessee Valley Authority had not fully implemented appropriate security practices to secure the control systems used to operate its critical infrastructures at facilities we reviewed.
Multiple weaknesses within the Tennessee Valley Authority corporate network left it vulnerable to potential compromise of the confidentiality, integrity, and availability of network devices and the information transmitted by the network. For example, almost all of the workstations and servers that we examined on the corporate network lacked key security patches or had inadequate security settings. Furthermore, the Tennessee Valley Authority had not adequately secured its control system networks and devices on these networks, leaving the control systems vulnerable to disruption by unauthorized individuals. In addition, we reported that the network interconnections provided opportunities for weaknesses on one network to potentially affect systems on other networks. Specifically, weaknesses in the separation of network segments could allow an individual who had gained access to a computing device connected to a less secure portion of the network to be able to compromise systems in a more secure portion of the network, such as the control systems. As a result, Tennessee Valley Authority’s control systems were at increased risk of unauthorized modification or disruption by both internal and external threats and could affect its ability to properly generate and deliver electricity. The Tennessee Valley Authority agreed with our recommendations and provided information on steps it was taking to implement them. In 2007, we reported that the Department of Homeland Security had significant weaknesses in computer security controls surrounding the information systems used to support its U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program for border security. For example, it had not implemented controls to effectively prevent, limit, and detect access to computer networks, systems, and information.
Specifically, it had not (1) adequately identified and authenticated users in systems supporting US-VISIT; (2) sufficiently limited access to US-VISIT information and information systems; (3) ensured that controls adequately protected external and internal network boundaries; (4) effectively implemented physical security at several locations; (5) consistently encrypted sensitive data traversing the communication network; and (6) provided adequate logging or user accountability for the mainframe, workstations, or servers. In addition, it had not always ensured that responsibilities for systems development and system production had been sufficiently segregated and had not consistently maintained secure configurations on the application servers and workstations at a key data center and ports of entry. As a result, increased risk existed that unauthorized individuals could read, copy, delete, add, and modify sensitive information—including personally identifiable information—and disrupt service on Customs and Border Protection systems supporting the US-VISIT program. The department stated that it directed Customs and Border Protection to complete remediation activities to address each of our recommendations. 
According to our reports and those of agency inspectors general, persistent weaknesses appear in the five major categories of information system controls: (1) access controls, which ensure that only authorized individuals can read, alter, or delete data; (2) configuration management controls, which provide assurance that only authorized software programs are implemented; (3) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (4) continuity of operations planning, which provides for the prevention of significant disruptions of computer-dependent operations; and (5) an agencywide information security program, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented. Most agencies continue to have weaknesses in each of these categories, as shown in figure 4. Agencies use access controls to limit, prevent, or detect inappropriate access to computer resources (data, equipment, and facilities), thereby protecting them from unauthorized use, modification, disclosure, and loss. Such controls include both electronic and physical controls. Electronic access controls include those related to boundary protection, user identification and authentication, authorization, cryptography, and auditing and monitoring. Physical access controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls involve restricting physical access to computer resources, usually by limiting access to the buildings and rooms in which they are housed and enforcing usage restrictions and implementation guidance for portable and mobile devices. At least 23 major federal agencies had access control weaknesses during fiscal year 2008. An analysis of our reports reveals that 48 percent of information security control weaknesses pertained to access controls (see fig. 5). 
For example, agencies did not consistently (1) establish sufficient boundary protection mechanisms; (2) identify and authenticate users to prevent unauthorized access; (3) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate; (4) apply encryption to protect sensitive data on networks and portable devices; (5) log, audit, and monitor security-relevant events; and (6) establish effective controls to restrict physical access to information assets. Without adequate access controls in place, agencies cannot ensure that their information resources are protected from intentional or unintentional harm. Boundary protection controls logical connectivity into and out of networks and controls connectivity to and from network connected devices. Agencies segregate the parts of their networks that are publicly accessible by placing these components in subnetworks with separate physical interfaces and preventing public access to their internal networks. Unnecessary connectivity to an agency’s network increases not only the number of access paths that must be managed and the complexity of the task, but the risk of unauthorized access in a shared environment. In addition to deploying a series of security technologies at multiple layers, deploying diverse technologies at different layers helps to mitigate the risk of successful cyber attacks. For example, multiple firewalls can be deployed to prevent both outsiders and trusted insiders from gaining unauthorized access to systems, and intrusion detection technologies can be deployed to defend against attacks from the Internet. Agencies continue to demonstrate vulnerabilities in establishing appropriate boundary protections. For example, two agencies that we assessed did not adequately secure channels to connect remote users, increasing the risk that attackers will use these channels to gain access to restricted network resources. 
One of these agencies also did not have adequate intrusion detection capabilities, while the other allowed users of one network to connect to another, higher-security network. Such weaknesses in boundary protections impair an agency’s ability to deflect and detect attacks quickly and protect sensitive information and networks. A computer system must be able to identify and authenticate different users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system is able to distinguish one user from another—a process called identification. The system also must establish the validity of a user’s claimed identity by requesting some kind of information, such as a password, that is known only by the user—a process known as authentication. Agencies did not always adequately control user accounts and passwords to ensure that only valid users could access systems and information. In our 2007 FISMA report, we noted several weaknesses in agencies’ identification and authentication procedures. Agencies continue to experience similar weaknesses in fiscal years 2008 and 2009. For example, certain agencies did not adequately enforce strong password settings, increasing the likelihood that accounts could be compromised and used by unauthorized individuals to gain access to sensitive information. In other instances, agencies did not enforce periodic changing of passwords or use of one-time passwords or passcodes, and transmitted or stored passwords in clear text. Poor password management increases the risk that unauthorized users could guess or read valid passwords to devices and use the compromised devices for an indefinite period of time. Authorization is the process of granting or denying access rights and permissions to a protected resource, such as a network, a system, an application, a function, or a file. 
A key component of granting or denying access rights is the concept of least privilege, which is a basic principle for securing computer resources and information and means that users are granted only those access rights and permissions that they need to perform their official duties. To restrict legitimate users’ access to only those programs and files that they need to do their work, agencies establish access rights and permissions. “User rights” are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules that regulate which users can access a particular file or directory and the extent of that access. To avoid unintentionally authorizing users access to sensitive files and directories, an agency must give careful consideration to its assignment of rights and permissions. Agencies continued to grant rights and permissions that allowed more access than users needed to perform their jobs. Inspectors general at 12 agencies reported instances where users had been granted excessive privileges. In our reviews, we also noted vulnerabilities in this area. For example, at one agency, users could inappropriately escalate their access privileges to run commands on a powerful system account, many had unnecessary and inappropriate access to databases, and other accounts allowed excessive privileges and permissions. Another agency allowed (on financial applications) generic, shared accounts that included the ability to create, delete, and modify users’ accounts. Approximately 1,100 users at yet another agency had access to mainframe system management utilities, although such access was not necessarily required to perform their jobs. These utilities provided access to all files stored on disk; all programs running on the system, including the outputs; and the ability to alter hardware configurations supporting the production environment. 
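The deny-by-default, least-privilege checks discussed above can be sketched in a few lines of code. This is an illustrative sketch only; the role names and permission strings below are hypothetical examples, not any agency's actual access model.

```python
# Minimal sketch of least-privilege authorization: each role is granted
# only the permissions its duties require, and any request not
# explicitly granted is denied. Roles and permissions are hypothetical.
ROLE_PERMISSIONS = {
    "payroll_clerk": {"payroll:read"},
    "payroll_admin": {"payroll:read", "payroll:write"},
    "sysadmin":      {"accounts:create", "accounts:delete"},
}

def is_authorized(role: str, permission: str) -> bool:
    # Deny by default: unknown roles and unlisted permissions are refused.
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("payroll_clerk", "payroll:read")
assert not is_authorized("payroll_clerk", "payroll:write")  # not needed for the job
assert not is_authorized("contractor", "accounts:delete")   # unknown role: denied
```

In a scheme like this, the weaknesses described above (shared accounts, blanket utility access, contractor over-provisioning) would surface as grants in the table that no job function justifies, which is why periodic review of assigned rights and permissions matters as much as the check itself.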
We uncovered one agency that had provided a contractor with system access that was beyond what was needed, making the agency vulnerable to incidents on the contractor’s network. Another agency gave all users of an application full access to the application’s source code although their responsibilities did not require this level of privilege. Such weaknesses in authorization place agencies at increased risk of inappropriate access to data and sensitive system programs, as well as to the consequent disruption of services. Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. A basic element of cryptography is encryption. Encryption can be used to provide basic data confidentiality and integrity by transforming plain text into cipher text using a special value known as a key and a mathematical process known as an algorithm. The National Security Agency recommends disabling protocols that do not encrypt information transmitted across the network, such as user identification and password combinations. Agencies did not always encrypt sensitive information on their systems or traversing the network. In our reviews of agencies’ information security, we found that agencies did not always encrypt sensitive information. For example, five agencies that we reviewed did not effectively use cryptographic controls to protect sensitive resources. Specifically, one agency allowed unencrypted protocols to be used on its network devices. Another agency did not require encrypted passwords for network logins, while another did not consistently provide approved, secure transmission of data over its network. These weaknesses could allow an attacker, or malicious user, to view information and use that knowledge to obtain sensitive financial and system data being transmitted over the network. 
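The plain-text-to-cipher-text transformation described above can be illustrated with a toy stream cipher that derives a keystream from a key and nonce by hashing a counter. This is a sketch of the concept only, not a production construction; real systems should use a vetted algorithm (such as AES) from approved cryptographic libraries.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing key + nonce + counter.
    # Toy construction for illustration only; use a vetted cipher in practice.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # A fresh random nonce per message keeps the keystream from repeating.
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)
msg = b"sensitive record"
blob = encrypt(key, msg)
assert blob[16:] != msg           # ciphertext is not the plaintext
assert decrypt(key, blob) == msg  # only the key holder recovers it
```

The weaknesses noted above, such as unencrypted protocols and clear-text password transmission, amount to skipping this transformation entirely, leaving the plaintext readable to anyone observing the network.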
To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to determine what, when, and by whom specific actions have been taken on a system. Agencies accomplish this by implementing system or security software that provides an audit trail, or logs of system activity, that they can use to determine the source of a transaction or attempted transaction and to monitor users’ activities. The way in which agencies configure system or security software determines the nature and extent of the information that can be provided by the audit trail. To be effective, agencies should configure their software to collect and maintain audit trails that are sufficient to track security-relevant events.

Agencies did not sufficiently log and monitor key security- and audit-related events on their network. For example, agencies did not monitor critical portions of their networks for intrusions; record successful, unauthorized access attempts; log certain changes to data on a mainframe (which increases the risk of compromised security controls or disrupted operations); and capture all authentication methods and logins to a network by foreign nationals. Similarly, 14 agencies did not always have adequate auditing and monitoring capabilities. For example, one agency did not conduct a baseline assessment of an important network. This baseline determines a typical state or pattern of network activity. Without this information, the agency could have difficulty detecting and investigating anomalous activity to ascertain whether or not an attack was under way. Another agency did not perform source code scanning or have a process for manual source code reviews, which increases the risk that vulnerabilities would not be detected. As a result, unauthorized access could go undetected, and if a system is modified or disrupted, the ability to trace or recreate events could be impeded.
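As a small sketch of the audit-trail concept described above, the code below records security-relevant events with a timestamp, the acting user, and the event, so that an action can later be traced to its source. The logger name, record fields, and event strings are illustrative assumptions; agencies implement audit trails through their system or security software, not necessarily at the application level shown here.

```python
import logging

def make_audit_logger(stream):
    """Sketch of an application-level audit trail: every security-
    relevant event is written with when it happened, who acted,
    and what was done."""
    logger = logging.getLogger("audit")
    logger.setLevel(logging.INFO)
    logger.propagate = False  # keep audit records in one place
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s user=%(user)s event=%(message)s"))
    logger.handlers = [handler]
    return logger

def record_event(logger, user, event):
    # 'extra' carries the acting user into the log record,
    # so the formatter's %(user)s field can render it
    logger.info(event, extra={"user": user})
```

The point of the sketch is the "what, when, and by whom" triple: an audit trail missing any of the three cannot support the accountability and investigation goals the report describes.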
Physical security controls help protect computer facilities and resources from espionage, sabotage, damage, and theft. These controls restrict physical access to sensitive computing and communications resources, usually by limiting access to the buildings and rooms in which the resources are housed. Examples of physical security controls include perimeter fencing, surveillance cameras, security guards, locks, and procedures for granting or denying individuals physical access to computing resources. Physical controls also include environmental controls such as smoke detectors, fire alarms, extinguishers, and uninterruptible power supplies. Considerations for perimeter security also include controlling vehicular and pedestrian traffic. In addition, visitors’ access to sensitive areas must be managed appropriately. Our analysis of inspector general, GAO, and agency reports has shown that nine agencies did not sufficiently restrict physical access to sensitive computing and communication resources. The physical security measures employed by these agencies often did not comply with their own requirements or with federal standards. Access to facilities containing sensitive equipment and information was not always adequately restricted. For example, at one agency with buildings housing classified networks, cars were not stopped and inspected; a sign indicated the building’s purpose; fencing was scalable; and access to buildings containing computer network equipment was not controlled by electronic or other means. Agencies did not adequately manage visitors, in one instance, placing network jacks in an area where unescorted individuals could use them to obtain electronic access to restricted computing resources, and in another failing to properly identify and control visitors at a facility containing sensitive equipment. 
Agencies did not always remove employees’ physical access authorizations to sensitive areas in a timely manner when they departed or their work no longer required such access. Environmental controls at one agency did not meet federal guidelines, with fire suppression capabilities, emergency lighting, and backup power all needing improvements. Such weaknesses in physical access controls increase the risk that sensitive computing resources will inadvertently or deliberately be misused, damaged, or destroyed.

Configuration management controls ensure that only authorized and fully tested software is placed in operation. These controls, which also limit and monitor access to powerful programs and sensitive files associated with computer operations, are important in providing reasonable assurance that access controls are not compromised and that the system will not be impaired. These policies, procedures, and techniques help ensure that all programs and program modifications are properly authorized, tested, and approved. Further, patch management is an important element in mitigating the risks associated with software vulnerabilities. Up-to-date patch installation could help mitigate vulnerabilities associated with flaws in software code that could be exploited to cause significant damage, including the loss of control of entire systems, thereby enabling malicious individuals to read, modify, or delete sensitive information or disrupt operations.

Twenty-one agencies demonstrated weaknesses in configuration management controls. For instance, several agencies did not implement common secure configuration policies across their systems, increasing the risk of avoidable security vulnerabilities.
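The patch-management element described above can be sketched as a simple comparison of an installed-software inventory against the latest patched versions. The package names, version strings, and dotted-integer comparison rule below are hypothetical simplifications; real patch management also involves testing patches before deployment, as the report notes.

```python
def find_unpatched(installed, latest):
    """Sketch of a patch-status check: compare installed package
    versions (hypothetical inventory) against the latest patched
    versions and return the packages that are out of date."""
    def as_tuple(version):
        # dotted-integer comparison, e.g. "1.0.2" -> (1, 0, 2)
        return tuple(int(part) for part in version.split("."))
    return sorted(
        pkg for pkg, ver in installed.items()
        if pkg in latest and as_tuple(ver) < as_tuple(latest[pkg])
    )
```

A consistent process would run a check like this on a schedule, then feed the out-of-date list into a test-then-deploy cycle rather than applying patches blindly.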
In addition, agencies did not effectively ensure that system software changes had been properly authorized, documented, and tested, which increases the risk that unapproved changes could occur without detection and that such changes could disrupt a system’s operations or compromise its integrity. Agencies did not always monitor system configurations to prevent extraneous services and other vulnerabilities from remaining undetected and jeopardizing operations. At least six agencies did not consistently update software on a timely basis to protect against known vulnerabilities or did not fully test patches before applying them. Without a consistent approach to updating, patching, and testing software, agencies are at increased risk of exposing critical and sensitive data to unauthorized and possibly undetected access. Segregation of duties refers to the policies, procedures, and organizational structure that helps ensure that one individual cannot independently control all key aspects of a process or computer-related operation and thereby conduct unauthorized actions or gain unauthorized access to assets or records. Proper segregation of duties is achieved by dividing responsibilities among two or more individuals or groups. Dividing duties among individuals or groups diminishes the likelihood that errors and wrongful acts will go undetected because the activities of one individual or group will serve as a check on the activities of the other. At least 14 agencies did not appropriately segregate information technology duties. These agencies generally did not assign employee duties and responsibilities in a manner that segregated incompatible functions among individuals or groups of individuals. For instance, at one agency, an individual who enters an applicant’s data into a financial system also had the ability to hire the applicant. At another agency, 76 system users had the ability to create and approve purchase orders. 
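The purchase-order example above lends itself to a minimal sketch of how an application can enforce segregation of duties: the user who creates a record is barred from approving it. The class, field names, and amounts are hypothetical; the report does not describe any particular agency's implementation.

```python
class PurchaseOrder:
    """Minimal sketch of segregation of duties in an application:
    the creator of a purchase order may not also approve it, so a
    second individual serves as a check on the first."""

    def __init__(self, creator, amount):
        self.creator = creator
        self.amount = amount
        self.approved_by = None

    def approve(self, approver):
        if approver == self.creator:
            raise PermissionError(
                "segregation of duties: creator may not approve "
                "their own purchase order")
        self.approved_by = approver
        return self
```

The check is deliberately simple; in practice the same principle is enforced by assigning incompatible functions (such as entering applicant data and hiring) to different roles rather than to one user.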
Without adequate segregation of duties, there is an increased risk that erroneous or fraudulent actions can occur, improper program changes can be implemented, and computer resources can be damaged or destroyed.

An agency must take steps to ensure that it is adequately prepared to cope with the loss of operational capabilities due to an act of nature, fire, accident, sabotage, or any other disruption. An essential element in preparing for such a catastrophe is an up-to-date, detailed, and fully tested continuity of operations plan. Such a plan should cover all key computer operations and should include planning to ensure that critical information systems, operations, and data such as financial processing and related records can be properly restored if an emergency or a disaster occurs. To ensure that the plan is complete and fully understood by all key staff, it should be tested, including unannounced tests, and test plans and results documented to provide a basis for improvement. If continuity of operations controls are inadequate, even relatively minor interruptions could result in lost or incorrectly processed data, which could cause financial losses, expensive recovery efforts, and inaccurate or incomplete mission-critical information.

Although agencies have reported increases in the number of systems for which contingency plans have been tested, at least 17 agencies had shortcomings in their continuity of operations plans. For example, one agency’s disaster recovery planning had not been completed. Specifically, disaster recovery plans for three components of the agency were in draft form and had not been tested. Another agency did not include a business impact analysis in the contingency plan control, which would assist in planning for system recovery. In another example, supporting documentation for some of the functional tests at the agency did not adequately support testing results for verifying readability of backup tapes retrieved during the tests.
Until agencies complete actions to address these weaknesses, they are at risk of not being able to appropriately recover systems in a timely manner from certain service disruptions.

An underlying cause for information security weaknesses identified at federal agencies is that they have not yet fully or effectively implemented agencywide information security programs. An agencywide security program, as required by FISMA, provides a framework and continuing cycle of activity for assessing and managing risk, developing and implementing security policies and procedures, promoting security awareness and training, monitoring the adequacy of the entity’s computer-related controls through security tests and evaluations, and implementing remedial actions as appropriate. Without a well-designed program, security controls may be inadequate; responsibilities may be unclear, misunderstood, and improperly implemented; and controls may be inconsistently applied. Such conditions may lead to insufficient protection of sensitive or critical resources.

Twenty-three agencies had not fully or effectively implemented agencywide information security programs. Agencies often did not adequately design or effectively implement policies for elements key to an information security program. Weaknesses in agency information security program activities, such as risk assessments, information security policies and procedures, security planning, security training, system testing and evaluation, and remedial action plans, are described next.

In order for agencies to determine what security controls are needed to protect their information resources, they must first identify and assess their information security risks. Moreover, by increasing awareness of risks, these assessments can generate support for policies and controls. Agencies have not fully implemented their risk assessment processes. In addition, 14 major agencies had weaknesses in their risk assessments.
Furthermore, they did not always properly assess the impact level of their systems or evaluate potential risks for the systems we reviewed. For example, one agency had not yet finalized and approved its guidance for completing risk assessments. In another example, the agency had not properly categorized the risk to its system, because it had performed a risk assessment without an inventory of interconnections to other systems. Similarly, another agency had not completed risk assessments for its critical systems and had not assigned impact levels. In another instance, an agency had current risk assessments that documented residual risk assessed and potential threats, and recommended corrective actions for reducing or eliminating the vulnerabilities they had identified. However, that agency had not identified many of the vulnerabilities we found and had not subsequently assessed the risks associated with them. As a result of these weaknesses, agencies may be implementing inadequate or inappropriate security controls that do not address the systems’ true risk, and potential risks to these systems may not be known. According to FISMA, each federal agency’s information security program must include policies and procedures that are based on risk assessments that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each agency’s information system. The term ‘security policy’ refers to specific security rules set up by the senior management of an agency to create a computer security program, establish its goals, and assign responsibilities. Because policy is written at a broad level, agencies also develop standards, guidelines, and procedures that offer managers, users, and others a clear approach to implementing policy and meeting organizational goals. Thirteen agencies had weaknesses in their information security policies and procedures. 
For example, one agency did not have updated policies and procedures for configuring operating systems to ensure they provide the necessary detail for controlling and logging changes. Another agency had not established adequate policies or procedures to implement and maintain an effective departmentwide information security program or to address key OMB privacy requirements. Agencies also exhibited weaknesses in policies concerning security requirements for laptops, user access privileges, security incidents, certification and accreditation, and physical security. As a result, agencies have reduced assurance that their systems and the information they contain are sufficiently protected. Without policies and procedures that are based on risk assessments, agencies may not be able to cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each agency’s information system. FISMA requires each federal agency to develop plans for providing adequate information security for networks, facilities, and systems or groups of systems. According to NIST 800-18, system security planning is an important activity that supports the system development life cycle and should be updated as system events trigger the need for revision in order to accurately reflect the most current state of the system. The system security plan provides a summary of the security requirements for the information system and describes the security controls in place or planned for meeting those requirements. NIST guidance also indicates that all security plans should be reviewed and updated, if appropriate, at least annually. Further, appendix III of OMB Circular A-130 requires security plans to include controls for, among other things, contingency planning and system interconnections. System security plans were incomplete or out of date at several agencies. 
For example, one agency had an incomplete security plan for a key application. Another agency had only developed a system security plan that covered two of the six facilities we reviewed, and the plan was incomplete and not up-to-date. At another agency, 52 of the 57 interconnection security agreements listed in the security plan were not current since they had not been updated within 3 years. Without adequate security plans in place, agencies cannot be sure that they have the appropriate controls in place to protect key systems and critical information. Users of information resources can be one of the weakest links in an agency’s ability to secure its systems and networks. Therefore, an important component of an agency’s information security program is providing the required training so that users understand system security risks and their own role in implementing related policies and controls to mitigate those risks. Several agencies had not ensured that all information security employees and contractors, including those who have significant information security responsibilities, had received sufficient training. For example, users of one agency’s IT systems had not been trained to check for continued functioning of their encryption software after installation. At another agency, officials stated that several of its components had difficulty in identifying and tracking all employees who have significant IT security responsibilities and thus were unable to ensure that they received the specialized training necessary to effectively perform their responsibilities. Without adequate training, users may not understand system security risks and their own role in implementing related policies and controls to mitigate those risks. Another key element of an information security program is testing and evaluating system controls to ensure that they are appropriate, effective, and comply with policies. 
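The interconnection-agreement finding above, agreements counted as not current when not updated within 3 years, can be expressed as a simple currency check. The agreement names and dates below are hypothetical, and the 3-year window mirrors the criterion described in the finding.

```python
from datetime import date, timedelta

def stale_agreements(agreements, today, max_age_years=3):
    """Return the names of interconnection security agreements not
    updated within the allowed window (3 years by default, matching
    the review criterion described above). 'agreements' maps each
    agreement name to its last-updated date."""
    cutoff = today - timedelta(days=365 * max_age_years)
    return sorted(name for name, last_updated in agreements.items()
                  if last_updated < cutoff)
```

Running such a check as part of the annual security-plan review would surface out-of-date agreements before an audit does.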
FISMA requires that agencies test and evaluate the information security controls of their major systems and that the frequency of such tests be based on risk, but occur no less than annually. NIST requires agencies to ensure that the appropriate officials are assigned roles and responsibilities for testing and evaluating controls over their systems.

Agencies did not always implement policies and procedures for performing periodic testing and evaluation of their information security controls. For example, one agency had not adequately tested security controls. Specifically, the tests of a major application and the mainframe did not identify or discuss the vulnerabilities that we had identified during our audit. The same agency’s testing did not reveal problems with the mainframe that could allow unauthorized users to read, copy, change, delete, and modify data. In addition, although testing requirements were stated in test documentation, the breadth and depth of the test, as well as the results of the test, had not always been documented. Also, agencies reported inconsistent testing of security controls among components. Without conducting the appropriate tests and evaluations, agencies have limited assurance that policies and controls are appropriate and working as intended. Additionally, there is an increased risk that undetected vulnerabilities could be exploited to allow unauthorized access to sensitive information.

Remedial Action Processes and Plans

FISMA requires that agencies’ information security programs include a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, and practices of the agency. Since our 2007 FISMA report, we have continued to find weaknesses in agencies’ plans and processes for remedial actions. Agencies indicated that they had corrected or mitigated weaknesses; however, our work revealed that those weaknesses still existed.
In addition, the inspectors general at 14 of the 24 agencies reported weaknesses in the plans to document remedial actions. For example, at several agencies, the inspector general reported that weaknesses had been identified but not documented in the remediation plans. Inspectors general further reported that agency plans did not include all relevant information in accordance with OMB instructions. We also found that deficiencies had not been corrected in a timely manner. Without a mature process and effective remediation plans, the risk increases that vulnerabilities in agencies’ systems will not be mitigated in an effective and timely manner. Until agencies effectively and fully implement agencywide information security programs, federal data and systems will not be adequately safeguarded to prevent disruption, unauthorized use, disclosure, and modification. Further, until agencies implement our recommendations to correct specific information security control weaknesses, their systems and information will remain at increased risk of attack or compromise. In prior reports, we and inspectors general have made hundreds of recommendations to agencies for actions necessary to resolve prior significant control deficiencies and information security program shortfalls. For example, we recommended that agencies correct specific information security deficiencies related to user identification and authentication, authorization, boundary protections, cryptography, audit and monitoring, physical security, configuration management, segregation of duties, and continuity of operations planning. We have also recommended that agencies fully implement comprehensive, agencywide information security programs by correcting weaknesses in risk assessments, information security policies and procedures, security planning, security training, system tests and evaluations, and remedial actions. 
The effective implementation of these recommendations will strengthen the security posture at these agencies. Agencies have implemented or are in the process of implementing many of our recommendations.

In March 2009, we reported on 12 key improvements suggested by a panel of experts as being essential to improving our national cyber security posture (see app. III). The expert panel included former federal officials, academics, and private-sector executives. Their suggested improvements are intended to address many of the information security vulnerabilities facing both private and public organizations, including federal agencies. Among these improvements are recommendations to develop a national strategy that clearly articulates strategic objectives, goals, and priorities and to establish a governance structure for strategy implementation.

Due to increasing cyber security threats, the federal government has initiated several efforts to protect federal information and information systems. Recognizing the need for common solutions to improving security, the White House, OMB, and federal agencies have launched or continued several governmentwide initiatives that are intended to enhance information security at federal agencies. These key initiatives are discussed here.

60-day cyber review: The National Security Council and Homeland Security Council recently completed a 60-day interagency review intended to develop a strategic framework to ensure that federal cyber security initiatives are appropriately integrated, resourced, and coordinated with Congress and the private sector. The resulting report recommended, among other things, appointing an official in the White House to coordinate the nation’s cybersecurity policies and activities, creating a new national cybersecurity strategy, and developing a framework for cyber research and development.
Comprehensive National Cybersecurity Initiative: In January 2008, President Bush began to implement a series of initiatives aimed primarily at improving the Department of Homeland Security and other federal agencies’ efforts to protect against intrusion attempts and anticipate future threats. While these initiatives have not been made public, the Director of National Intelligence stated that they include defensive, offensive, research and development, and counterintelligence efforts, as well as a project to improve public/private partnerships.

The Information Systems Security Line of Business: The goal of this initiative, led by OMB, is to improve the level of information systems security across government agencies and reduce costs by sharing common processes and functions for managing information systems security. Several agencies have been designated as service providers for IT security awareness training and FISMA reporting.

Federal Desktop Core Configuration: For this initiative, OMB directed agencies that have Windows XP deployed and plan to upgrade to Windows Vista operating systems to adopt the security configurations developed by the National Institute of Standards and Technology, Department of Defense, and Department of Homeland Security. The goal of this initiative is to improve information security and reduce overall IT operating costs.

SmartBUY: This program, led by the General Services Administration, is to support enterprise-level software management through the aggregate buying of commercial software governmentwide in an effort to achieve cost savings through volume discounts. The SmartBUY initiative was expanded to include commercial off-the-shelf encryption software and to permit all federal agencies to participate in the program. The initiative is to also include licenses for information assurance.
Trusted Internet Connections Initiative: This effort, directed by OMB and led by the Department of Homeland Security, is designed to optimize individual agency network services into a common solution for the federal government. The initiative is to facilitate the reduction of external connections, including Internet points of presence, to a target of 50.

We currently have ongoing work that addresses the status, planning, and implementation efforts of several of these initiatives.

Federal agencies reported increased compliance in implementing key information security control activities for fiscal year 2008; however, inspectors general at several agencies noted shortcomings with agencies’ implementation of information security requirements. OMB also reported that agencies were increasingly performing key activities. Specifically, agencies reported increases in the number and percentage of systems that had been certified and accredited, the number and percentage of employees and contractors receiving security awareness training, and the number and percentage of systems with tested contingency plans. However, the number and percentage of systems that had been tested and evaluated at least annually decreased slightly, and the number and percentage of employees who had significant security responsibilities and had received specialized training decreased significantly (see fig. 6).

Consistent with previous years, inspectors general continued to identify weaknesses with the processes and practices agencies have in place to implement FISMA requirements. Although OMB took steps to clarify its reporting instructions to agencies for preparing fiscal year 2008 reports, the instructions did not request inspectors general to report on agencies’ effectiveness of key activities and did not always provide clear guidance to inspectors general.

Federal agencies rely on their employees to protect the confidentiality, integrity, and availability of the information in their systems.
It is critical for system users to understand their security roles and responsibilities and to be adequately trained to perform them. FISMA requires agencies to provide security awareness training to personnel, including contractors and other users of information systems that support agency operations and assets. This training should explain information security risks associated with their activities and their responsibilities in complying with agency policies and procedures designed to reduce these risks. In addition, agencies are required to provide appropriate training on information security to personnel who have significant security responsibilities. Agencies reported a slight increase in the percentage of employees and contractors who received security awareness training. According to agency reports, 89 percent of total employees and contractors had received security awareness training in 2008 compared to 84 percent of employees and contractors in 2007. While this change marks an improvement between fiscal years 2007 and 2008, the percentage of employees and contractors receiving security awareness training is still below the 91 percent reported for 2006. In addition, seven inspectors general reported disagreement with the percentage of employees and contractors receiving security awareness training reported by their agencies. Additionally, several inspectors general reported specific weaknesses related to security awareness training at their agencies; for example, one inspector general reported that the agency lacked the ability to document and track which system users had received awareness training, while another inspector general reported that training did not cover the recommended topics. Governmentwide, agencies reported a lower percentage of employees who had significant security responsibilities who had received specialized training. 
In fiscal year 2008, 76 percent of these employees had received specialized training compared with 90 percent of these employees in fiscal year 2007. Although the governmentwide percentage decreased, the majority of the 24 agencies reported increasing or unchanging percentages of employees receiving specialized training; 8 of the 24 agencies reported percentage decreases (see fig. 7). At least 12 inspectors general reported weaknesses related to specialized security training. One of the inspectors general reported that some groups did not have a training program for personnel who have critical IT responsibilities and another inspector general reported that the agency was unable to effectively track contractors who needed specialized training. Decreases in the number of individuals receiving specialized training at some federal agencies combined with continuing deficiencies in training programs could limit the ability of agencies to implement security measures effectively. Providing for the confidentiality, integrity, and availability of information in today’s highly networked environment is not an easy or trivial task. The task is made that much more difficult if each person who owns, uses, relies on, or manages information and information systems does not know or is not properly trained to carry out his or her specific responsibilities. An increasing number of inspectors general reported conducting annual independent evaluations in accordance with professional standards and provided additional information about the effectiveness of their agency’s security programs. FISMA requires agency inspectors general or their independent external auditors to perform an independent evaluation of the information security programs and practices of the agency to determine the effectiveness of the programs and practices. 
We have previously reported that the annual inspector general independent evaluations lacked a common approach and that the scope and methodology of the evaluations varied across agencies. We noted that there was an opportunity to improve these evaluations by conducting them in accordance with audit standards or a common approach and framework.

In fiscal year 2008, 16 of 24 inspectors general cited using professional standards to perform the annual FISMA evaluations, up from 8 inspectors general who cited using standards the previous year. Of the 16 inspectors general, 13 reported performing evaluations that were in accordance with generally accepted government auditing standards, while the other 3 indicated using the “Quality Standards for Inspections” issued by the President’s Council on Integrity and Efficiency. The remaining eight inspectors general cited using internally developed standards or did not indicate whether they had performed their evaluations in accordance with professional standards.

In addition, an increasing number of inspectors general provided supplemental information about their agency’s information security policies and practices. To illustrate, 21 of 24 inspectors general reported additional information about the effectiveness of their agency’s security controls and programs that was above and beyond what was requested in the OMB template, an increase from the 18 who had provided such additional information in their fiscal year 2007 reports. The additional information included descriptions of significant control deficiencies and weaknesses in security processes that provided additional context to the agency’s security posture.

Although inspectors general reported using professional standards more frequently, their annual independent evaluations occasionally lacked consistency. For example:

- Three inspectors general provided only template responses and did not identify the scope and methodology of their evaluation. (These three inspectors general were also among those who had not reported performing their evaluation in accordance with professional standards.)
- Descriptions of the controls evaluated during the review, as documented in the scope and methodology sections, differed. For example, according to their FISMA reports, a number of inspectors general stated that their evaluations included a review of policies and procedures, whereas others did not indicate whether policies and procedures had been reviewed. Additionally, multiple inspectors general also indicated that technical vulnerability assessments had been conducted as part of the review, whereas others did not indicate whether such an assessment had been part of the review.
- Eleven inspectors general indicated that their FISMA evaluations considered the results of previous information security reviews, whereas 13 inspectors general did not indicate whether they considered other information security work, if any.

The development and use of a common framework or adherence to auditing standards could provide improved effectiveness, increased efficiency, quality control, and consistency in inspector general assessments.

Although OMB has supported several governmentwide initiatives and provided additional guidance to help improve information security at agencies, opportunities remain for it to improve its annual reporting and oversight of agency information security programs. FISMA specifies that OMB, among other responsibilities, is to develop policies, principles, standards, and guidelines on information security and report to Congress not later than March 1 of each year on agencies’ implementation of FISMA. Each year, OMB provides instructions to federal agencies and their inspectors general for preparing their FISMA reports and then summarizes the information provided by the agencies and the inspectors general in its report to Congress.
Over the past 4 years, we have reported that, while the periodic reporting of performance measures for FISMA requirements and related analysis provides valuable information on the status and progress of agency efforts to implement effective security management programs, shortcomings in OMB’s reporting instructions limited the utility of the annual reports. Accordingly, we recommended that OMB improve reporting by clarifying reporting instructions; develop additional metrics that measure control effectiveness; request inspectors general to assess the quality of additional information security processes such as system test and evaluation, risk categorization, security awareness training, and incident reporting; and require agencies to report on additional key security activities such as patch management. Although OMB has taken some actions to enhance its reporting instructions, it has not implemented most of the recommendations, and thus further actions need to be taken to fully address them. In addition to the previously reported shortcomings, OMB’s reporting instructions for fiscal year 2008 did not sufficiently address several processes key to implementing an agencywide security program and were sometimes unclear. For example, the reporting instructions did not request inspectors general to provide information on the quality or effectiveness of agencies’ processes for developing and maintaining inventories, providing specialized security training, and monitoring contractors. For these activities, inspectors general were requested to report only on the extent to which agencies had implemented the activity but not on the effectiveness of those activities. Providing information on the effectiveness of the processes used to implement the activities could further enhance the usefulness of the data for management and oversight purposes. OMB’s guidance to inspectors general for rating agencies’ certification and accreditation processes was not clear. 
In its reporting instructions, OMB requests inspectors general to rate their agency’s certification and accreditation process using the terms “excellent,” “good,” “satisfactory,” “poor,” or “failing.” However, the reporting instructions do not define or identify criteria for determining the level of performance for each rating. OMB also requests inspectors general to identify the aspect(s) of the certification and accreditation process they included or considered in rating the quality of their agency’s process. Examples OMB included were security plan, system impact level, system test and evaluation, security control testing, incident handling, security awareness training, and security configurations (including patch management). While this information is helpful and provides insight on the scope of the rating, inspectors general were not requested to comment on the quality or effectiveness of these items. Additionally, not all inspectors general considered the same aspects in reviewing the certification and accreditation process, yet all were allowed to provide the same rating. Without clear guidelines for rating these processes, OMB and Congress may not have a consistent basis for comparing the progress of an agency over time or against other agencies. In its report to Congress for fiscal year 2008, OMB did not fully summarize the findings from the inspectors general independent evaluations or identify significant deficiencies in agencies’ information security practices. FISMA requires OMB to provide a summary of the findings of agencies’ independent evaluations and significant deficiencies in agencies’ information security practices. Inspectors general often document their findings and significant information security control deficiencies in reports that support their evaluations. However, OMB did not summarize and present this information in its annual report to Congress. 
Most of the inspectors general’s information summarized in the annual report was taken from the “yes” or “no” responses or from questions having a predetermined range of percentages as stipulated by OMB’s reporting template. Thus, important information about the implementation of agency information security programs and the vulnerabilities and risks associated with federal information systems was not provided to Congress in OMB’s annual report. This information could be useful in determining whether agencies are effectively implementing information security policies, procedures, and practices. As a result, Congress may not be fully informed about the state of federal information security. OMB also did not approve or disapprove agencies’ information security programs. FISMA requires OMB to review agencies’ information security programs at least annually and approve or disapprove them. OMB representatives informed us that they review agencies’ FISMA reports and interact with agencies whenever an issue arises that requires their oversight. However, representatives stated that they do not explicitly or publicly declare that an agency’s information security program has been approved or disapproved. As a result, a mechanism for holding agencies accountable for implementing effective programs was not used. Weaknesses in information security controls continue to threaten the confidentiality, integrity, and availability of the sensitive data maintained by federal agencies. These weaknesses, including those for access controls, configuration management, and segregation of duties, leave federal agency systems and information vulnerable to external as well as internal threats. The White House, OMB, and federal agencies have initiated actions intended to enhance information security at federal agencies.
However, until agencies fully and effectively implement information security programs and address the hundreds of recommendations that we and agency inspectors general have made, federal systems will remain at an increased and unnecessary risk of attack or compromise. Despite these weaknesses, federal agencies have continued to report progress in implementing key information security requirements. While NIST, inspectors general, and OMB have all made progress toward fulfilling their statutory requirements, the current reporting process does not produce information to accurately gauge the effectiveness of federal information security activities. OMB’s annual reporting instructions did not cover key security activities and were not always clear. Finally, OMB did not include key information about findings and significant deficiencies identified by inspectors general in its governmentwide report to Congress and did not approve or disapprove agency information security programs. Shortcomings in reporting and oversight can result in insufficient information being provided to Congress and diminish its ability to monitor and assist federal agencies in improving the state of federal information security. We recommend that the Director of the Office of Management and Budget take the following four actions:

- Update annual reporting instructions to request inspectors general to report on the effectiveness of agencies’ processes for developing inventories, monitoring contractor operations, and providing specialized security training.

- Clarify and enhance reporting instructions to inspectors general for certification and accreditation evaluations by providing them with guidance on the requirements for each rating category.

- Include in OMB’s report to Congress a summary of the findings from the annual independent evaluations and significant deficiencies in information security practices.

- Approve or disapprove agency information security programs after review.
In written comments on a draft of this report, the Federal Chief Information Officer (CIO) generally agreed with our overall assessment of information security at the agencies. He also identified actions that OMB is taking to clarify its reporting guidance and to consider more effective security performance metrics. These actions are consistent with the intent of two of our recommendations, that OMB clarify and enhance reporting instructions and request inspectors general to report on additional measures of effectiveness. The Federal CIO did not address our recommendation to include a summary of the findings and significant security deficiencies in its report to Congress and did not concur with GAO’s conclusion that OMB does not approve or disapprove agencies’ information security management programs on an annual basis. He indicated that OMB reviews all agency and IG FISMA reports annually; reviews quarterly information on the major agencies’ security programs; and uses this information, and other reporting, to evaluate agencies’ security programs. The Federal CIO advised that concerns are communicated directly to the agencies. We acknowledge that these are important oversight activities. However, as we reported, OMB did not demonstrate that it approved or disapproved agency information security programs, as required by FISMA. Consequently, a mechanism for holding agencies accountable for implementing effective programs is not being effectively used. We are sending copies of this report to the Office of Management and Budget and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or by e-mail at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.
In accordance with the Federal Information Security Management Act of 2002 (FISMA) requirement that the Comptroller General report periodically to Congress, our objectives were to evaluate (1) the adequacy and effectiveness of agencies’ information security policies and practices and (2) federal agency implementation of FISMA requirements. To assess the adequacy and effectiveness of agency information security policies and practices, we analyzed our related reports issued from May 2007 through April 2009. We also reviewed and analyzed the information security work and products of agency inspectors general. Further, we reviewed and summarized weaknesses identified in our reports and those of inspectors general using five major categories of information security controls: (1) access controls, (2) configuration management controls, (3) segregation of duties, (4) continuity of operations planning, and (5) agencywide information security programs. Our reports generally used the methodology contained in the Federal Information System Controls Audit Manual. We also examined information provided by the U.S. Computer Emergency Readiness Team (US-CERT) on reported security incidents. To assess the implementation of FISMA requirements, we reviewed and analyzed the provisions of the act and the mandated annual FISMA reports from the Office of Management and Budget (OMB), the National Institute of Standards and Technology (NIST), and the CIOs and IGs of 24 major federal agencies for fiscal years 2007 and 2008. We also examined OMB’s FISMA reporting instructions and other OMB and NIST guidance. We also held discussions with OMB representatives and agency officials from the National Institute of Standards and Technology and the Department of Homeland Security’s US-CERT to further assess the implementation of FISMA requirements.
We did not verify the accuracy of the agencies’ responses; however, we reviewed supporting documentation that agencies provided to corroborate information provided in their responses. We did not include systems categorized as national security systems in our review, nor did we review the adequacy or effectiveness of the security policies and practices for those systems. We conducted this performance audit from December 2008 to May 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In March 2009, we convened a panel of experts to discuss how to improve key aspects of the national cyber security strategy and its implementation as well as other critical aspects of the strategy, including areas for improvement. The experts, who included former federal officials, academics, and private-sector executives, highlighted 12 key improvements that are, in their view, essential to improving the strategy and our national cyber security posture. These improvements are in large part consistent with our previously mentioned reports and extensive research and experience in this area. In addition to the individual named above, Charles Vrabel (Assistant Director); Debra Conner; Larry Crosland; Sharhonda Deloach; Neil Doherty; Kristi Dorsey; Rosanna Guererro; Nancy Glover; Rebecca Eyler; Mary Marshall; and Jayne Wilson made key contributions to this report. Cybersecurity: Continued Federal Efforts Are Needed to Protect Critical Systems and Information. GAO-09-835T. Washington, D.C.: June 25, 2009. Privacy and Security: Food and Drug Administration Faces Challenges in Establishing Protections for Its Postmarket Risk Analysis System. GAO-09-355. 
Washington, D.C.: June 1, 2009.

Aviation Security: TSA Has Completed Key Activities Associated with Implementing Secure Flight, but Additional Actions Are Needed to Mitigate Risks. GAO-09-292. Washington, D.C.: May 13, 2009.

Information Security: Cyber Threats and Vulnerabilities Place Federal Systems at Risk. GAO-09-661T. Washington, D.C.: May 5, 2009.

Freedom of Information Act: DHS Has Taken Steps to Enhance Its Program, but Opportunities Exist to Improve Efficiency and Cost-Effectiveness. GAO-09-260. Washington, D.C.: March 20, 2009.

Information Security: Securities and Exchange Commission Needs to Consistently Implement Effective Controls. GAO-09-203. Washington, D.C.: March 16, 2009.

National Cyber Security Strategy: Key Improvements Are Needed to Strengthen the Nation’s Posture. GAO-09-432T. Washington, D.C.: March 10, 2009.

Information Security: Further Actions Needed to Address Risks to Bank Secrecy Act Data. GAO-09-195. Washington, D.C.: January 30, 2009.

Information Security: Continued Efforts Needed to Address Significant Weaknesses at IRS. GAO-09-136. Washington, D.C.: January 9, 2009.

Nuclear Security: Los Alamos National Laboratory Faces Challenges in Sustaining Physical and Cyber Security Improvements. GAO-08-1180T. Washington, D.C.: September 25, 2008.

Critical Infrastructure Protection: DHS Needs to Better Address Its Cyber Security Responsibilities. GAO-08-1157T. Washington, D.C.: September 16, 2008.

Critical Infrastructure Protection: DHS Needs to Fully Address Lessons Learned from Its First Cyber Storm Exercise. GAO-08-825. Washington, D.C.: September 9, 2008.

Information Security: Actions Needed to Better Protect Los Alamos National Laboratory’s Unclassified Computer Network. GAO-08-1001. Washington, D.C.: September 9, 2008.

Cyber Analysis and Warning: DHS Faces Challenges in Establishing a Comprehensive National Capability. GAO-08-588. Washington, D.C.: July 31, 2008.

Information Security: Federal Agency Efforts to Encrypt Sensitive Information Are Under Way, but Work Remains. GAO-08-525. Washington, D.C.: June 27, 2008.

Information Security: FDIC Sustains Progress but Needs to Improve Configuration Management of Key Financial Systems. GAO-08-564. Washington, D.C.: May 30, 2008.

Information Security: TVA Needs to Address Weaknesses in Control Systems and Networks. GAO-08-526. Washington, D.C.: May 21, 2008.

Information Security: TVA Needs to Enhance Security of Critical Infrastructure Control Systems and Networks. GAO-08-775T. Washington, D.C.: May 21, 2008.

Information Security: Progress Reported, but Weaknesses at Federal Agencies Persist. GAO-08-571T. Washington, D.C.: March 12, 2008.

Information Security: Securities and Exchange Commission Needs to Continue to Improve Its Program. GAO-08-280. Washington, D.C.: February 29, 2008.

Information Security: Although Progress Reported, Federal Agencies Need to Resolve Significant Deficiencies. GAO-08-496T. Washington, D.C.: February 14, 2008.

Information Security: Protecting Personally Identifiable Information. GAO-08-343. Washington, D.C.: January 25, 2008.

Information Security: IRS Needs to Address Pervasive Weaknesses. GAO-08-211. Washington, D.C.: January 8, 2008.

Veterans Affairs: Sustained Management Commitment and Oversight Are Essential to Completing Information Technology Realignment and Strengthening Information Security. GAO-07-1264T. Washington, D.C.: September 26, 2007.

Critical Infrastructure Protection: Multiple Efforts to Secure Control Systems Are Under Way, but Challenges Remain. GAO-07-1036. Washington, D.C.: September 10, 2007.

Information Security: Sustained Management Commitment and Oversight Are Vital to Resolving Long-standing Weaknesses at the Department of Veterans Affairs. GAO-07-1019. Washington, D.C.: September 7, 2007.

Information Security: Selected Departments Need to Address Challenges in Implementing Statutory Requirements. GAO-07-528. Washington, D.C.: August 31, 2007.

Information Security: Despite Reported Progress, Federal Agencies Need to Address Persistent Weaknesses. GAO-07-837. Washington, D.C.: July 27, 2007.

Information Security: Homeland Security Needs to Immediately Address Significant Weaknesses in Systems Supporting the US-VISIT Program. GAO-07-870. Washington, D.C.: July 13, 2007.

Information Security: Homeland Security Needs to Enhance Effectiveness of Its Program. GAO-07-1003T. Washington, D.C.: June 20, 2007.

Information Security: Agencies Report Progress, but Sensitive Data Remain at Risk. GAO-07-935T. Washington, D.C.: June 7, 2007.

Information Security: Federal Deposit Insurance Corporation Needs to Sustain Progress Improving Its Program. GAO-07-351. Washington, D.C.: May 18, 2007.
For many years, GAO has reported that weaknesses in information security are a widespread problem that can have serious consequences--such as intrusions by malicious users, compromised networks, and the theft of intellectual property and personally identifiable information--and has identified information security as a governmentwide high-risk issue since 1997. Concerned by reports of significant vulnerabilities in federal computer systems, Congress passed the Federal Information Security Management Act of 2002 (FISMA), which authorized and strengthened information security program, evaluation, and reporting requirements for federal agencies. In accordance with the FISMA requirement that the Comptroller General report periodically to Congress, GAO's objectives were to evaluate (1) the adequacy and effectiveness of agencies' information security policies and practices and (2) federal agencies' implementation of FISMA requirements. To address these objectives, GAO analyzed agency, inspectors general, Office of Management and Budget (OMB), and GAO reports. Persistent weaknesses in information security policies and practices continue to threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of most federal agencies. Recently reported incidents at federal agencies have placed sensitive data at risk, including the theft, loss, or improper disclosure of personally identifiable information of Americans, thereby exposing them to loss of privacy and identity theft. For fiscal year 2008, almost all 24 major federal agencies had weaknesses in information security controls. An underlying reason for these weaknesses is that agencies have not fully implemented their information security programs. 
As a result, agencies have limited assurance that controls are in place and operating as intended to protect their information resources, thereby leaving them vulnerable to attack or compromise. In prior reports, GAO has made hundreds of recommendations to agencies for actions necessary to resolve prior significant control deficiencies and information security program shortfalls. Federal agencies reported increased compliance in implementing key information security control activities for fiscal year 2008; however, inspectors general at several agencies noted shortcomings with agencies' implementation of information security requirements. Agencies reported increased implementation of control activities, such as providing awareness training for employees and testing system contingency plans. However, agencies reported decreased levels of testing security controls and training for employees who have significant security responsibilities. In addition, inspectors general at several agencies disagreed with performance reported by their agencies and identified weaknesses in the processes used to implement these activities. Further, although OMB took steps to clarify its reporting instructions to agencies for preparing fiscal year 2008 reports, the instructions did not request inspectors general to report on agencies' effectiveness of key activities and did not always provide clear guidance to inspectors general. As a result, the reporting may not adequately reflect agencies' implementation of the required information security policies and procedures.
Among the federal statutes that affect the reuse process, four are of particular importance: the base realignment and closure acts of 1988 and 1990, the Federal Property and Administrative Services Act of 1949, and the 1987 Stewart B. McKinney Homeless Assistance Act (McKinney Act). Amendments to these acts enacted within the past year are leading to ongoing changes in reuse planning and implementation at closing bases. The Defense Authorization Amendments and Base Closure and Realignment Act and the Defense Base Closure and Realignment Act of 1990—collectively referred to as the Base Realignment and Closure (BRAC) acts—are the two statutes that authorize the Secretary of Defense to close military bases and dispose of property. Title XXIX of the National Defense Authorization Act for Fiscal Year 1994 amended the BRAC acts to enable local redevelopment authorities to receive government property at no initial cost if the property is used for economic development. The Federal Property and Administrative Services Act of 1949 requires disposal agencies to provide DOD and other federal agencies an opportunity to request property to satisfy a programmed requirement. Property may be conveyed at no cost under various public benefit discount programs, sold for not less than the appraised fair market value through negotiated sale to state governments or their instrumentalities, or sold at a competitive public sale. Surplus property can be made available to providers of services to the homeless as provided for by the McKinney Act. At the time of our 1994 report, the McKinney Act assigned such providers higher priority than local communities when conflicts over reuse planning for surplus property at military bases occurred.
However, the Base Closure Community Redevelopment and Homeless Assistance Act of 1994 amended the BRAC acts and the McKinney Act to incorporate homeless assistance requests into the community reuse planning process and to eliminate the higher priority given to requests for property at bases undergoing realignment and closure. The information contained in this report reflects the June 1995 status of property disposal plans at 37 of the 120 installations closed by the 1988 and 1991 closure commissions (see fig. 1). About three-fifths of the property at the 37 closing military bases will be retained by the federal government because it is contaminated with unexploded ordnance, has been retained by decisions made by the BRAC commissions or by legislation, or is needed by federal agencies. The remaining two-fifths of the property is available for conversion to community reuse. Communities’ plans for this property involve a variety of public benefit and economic development uses. Little property is planned for negotiated sale to state and local jurisdictions or for public sale, as shown in figure 2. (See app. I for a summary of property disposal plans.)

[Figure 2 data: public benefit conveyances, 37,268 acres; economic development conveyances, 23,633 acres; undetermined, 12,110 acres (6 percent); public sale, 6,849 acres (4 percent); mandatory retention by federal agencies, 22,154 acres.]

While the federal government plans to retain about 58 percent of the property at closing bases, only 17 percent has been requested to satisfy federal agency needs. About 29 percent is contaminated with unexploded ordnance and will be retained by the federal government because the cost of cleanup and the environmental damage that would be caused by cleanup are excessive. Another 12 percent of the property has been retained per either BRAC decisions or legislation.
An example of property retained per a BRAC decision would be the 100-acre parcel at Fort Benjamin Harrison, Indiana, for the Defense Finance and Accounting Service facility. An example of property retained by legislation would be the 1,480-acre Presidio of San Francisco, California, which was transferred to the National Park Service. Of the 58 percent, the Department of the Interior’s Fish and Wildlife Service and Bureau of Land Management are to receive about 42 percent of the property. Much of the property is contaminated with unexploded ordnance. DOD will retain about 13 percent to support Reserve, National Guard, Defense Finance and Accounting Service facilities, and other active duty missions. Other federal agencies will receive about 3 percent of the property for such uses as federal prisons and national parks. (See app. II for a summary of federal uses.) Communities also are planning to use about 20 percent of the base property for various public benefits. The largest public benefit use is for commercial airport conversions, which will total about 14 percent under current plans. About 4 percent is to go to park and recreation use, the second largest public benefit use. Plans call for transferring another 2 percent of the property to such public benefit uses as education, homeless assistance, and state prisons. Communities are planning to acquire about 12 percent of the property under economic development conveyances, and DOD plans to sell about 4 percent of the property either through negotiated sales to state and local jurisdictions or through direct sales to the public. Communities have not determined how the remaining 6 percent of the property should be incorporated into their reuse plans. Land sales for all BRAC closures totaled $138.8 million as of June 1995. The sale of 641 acres of developed land at Norton Air Force Base, California, to the local redevelopment authority for $52 million under an economic development conveyance is the largest sale to date.
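As an illustrative cross-check, the percentage shares reported above can be tallied to confirm that the federal-retention and community-disposal categories reconcile. This is a sketch; the category labels are paraphrased from the text, and the values are the approximate shares stated in the report.

```python
# Cross-check of the property-disposal percentages reported for the 37 closing bases.
# Category labels are paraphrased; values are the approximate shares in the text.

federal_retention = {
    "requested to satisfy federal agency needs": 17,
    "contaminated with unexploded ordnance": 29,
    "retained per BRAC decisions or legislation": 12,
}

community_disposal = {
    "public benefit conveyances": 20,   # 14 airports + 4 parks/recreation + 2 other
    "economic development conveyances": 12,
    "negotiated or public sale": 4,
    "undetermined": 6,
}

retained = sum(federal_retention.values())   # 58 percent, as reported
disposed = sum(community_disposal.values())  # 42 percent available for community reuse
assert retained + disposed == 100            # categories account for all the property

print(f"Federal retention: {retained} percent; community reuse/sale: {disposed} percent")
```

The tally confirms the report's internal arithmetic: the three federal-retention components sum to the stated 58 percent, and the community-side categories account for the remaining 42 percent.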
The 1989 sale of the Kapalama Military Reservation, Hawaii, to the state of Hawaii for $38.5 million is the next largest sale. When we last reported, land sales totaled $69.4 million. The largest increase in sales has been to local reuse authorities under the new economic development conveyance authority, which allows for no-cash downpayment terms and up to 15 years to pay. Overall, progress is being made in converting properties at the closing bases we reviewed to civilian use. Communities are creating new airport facilities, jobs, education and job training centers, and wildlife habitats. (See app. III for a more detailed discussion of each installation’s conversion progress.) Converting military airfields to civilian airports is a goal at most communities that have bases with closing airfields. For example, the city of Austin, Texas, is converting Bergstrom Air Force Base’s airfield and facilities into a new municipal airport. The Federal Aviation Administration has provided over $110 million toward the conversion. Buildings are being demolished to build an additional runway, while design work is underway on the conversion, which is scheduled for completion in 1998. DOD officials believe that one meaningful measure of base conversion success is in the number of jobs created. The 37 bases will have lost 54,217 civilian jobs when they are all closed. To date, 25 of the bases have closed. At these 25 bases, 29,229 jobs were lost. So far, 8,340 jobs have been created. (See app. IV for a summary of each community’s success at creating jobs.) Community efforts to create jobs have been a key component of economic recovery strategies in a number of locations. Successful efforts in a few communities have led to the creation of more jobs than were lost due to closures. At England Air Force Base, Louisiana, the community has attracted 16 tenants that have created over 700 jobs replacing the nearly 700 civilian jobs lost as a result of the base’s closure. 
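The job-loss and job-creation totals above imply a partial recovery at the bases closed to date. As a rough illustration using only the totals reported in the text:

```python
# Job-loss and job-creation totals reported for the BRAC bases as of June 1995.
jobs_lost_all_37 = 54_217        # civilian jobs lost once all 37 bases have closed
jobs_lost_at_25_closed = 29_229  # civilian jobs lost at the 25 bases closed to date
jobs_created_to_date = 8_340     # new jobs created so far at those bases

# Share of the jobs lost at already-closed bases that have been replaced so far.
recovery_rate = jobs_created_to_date / jobs_lost_at_25_closed
print(f"Share of lost jobs replaced at closed bases: {recovery_rate:.1%}")
```

By this measure, roughly 29 percent of the civilian jobs lost at the 25 closed bases had been replaced as of June 1995, which is consistent with the report's observation that a few communities have done far better than the average while others lag.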
The largest tenant has hired 65 employees to refurbish jet aircraft. Another large tenant has hired 58 people to operate a truck driving school. (See p. 44.) At Chase Naval Air Station, Texas, newly constructed state prison facilities and several small manufacturers have created over 1,500 jobs, a net increase of 600 jobs over the level of civilian employment by the Navy. (See p. 38.) At Pease Air Force Base, New Hampshire, a commercial airport, an aircraft maintenance complex, a government agency, and a biotechnology firm are among the 41 tenants that have created over 1,000 jobs at the base, over twice the 400 civilian jobs lost. (See p. 86.) Several communities have begun developing or planning centers for higher education and job training. In some instances, these efforts have involved pooled efforts by local schools and state institutions and agencies. At Lowry Air Force Base, Colorado, a consortium of Colorado colleges and the Denver public school system are providing educational and job training opportunities. Currently, 80 classes with a total of 800 students are in session at the former base. (See p. 73.) At Fort Ord, California, classes at the new California State University, Monterey Bay, are scheduled to begin in the fall of 1995. About 700 graduate and undergraduate students are expected to enroll in the university’s fall class. (See p. 51.) The U.S. Fish and Wildlife Service plans to set aside land at several bases for preservation as natural wildlife habitats. In some locations, the preservation of wildlife habitats reduces the level of environmental cleanup, particularly where unexploded ordnance is involved. At Jefferson Proving Ground, Indiana, the Army plans to transfer about 47,500 acres to the Fish and Wildlife Service for a wildlife refuge, which could potentially save the Army billions of dollars in costs otherwise needed to remove unexploded ordnance. (See p. 63.)
At Woodbridge Army Research Facility, Virginia, all 580 acres are to be transferred to the Service for inclusion in the Mason Neck Wildlife Refuge. Service plans for the property envision showcasing habitat and wildlife not routinely seen so close to a metropolitan area and providing environmental education opportunities. (See p. 109.) Early experiences indicate that a new form of conveyance authority, called an economic development conveyance, can benefit both the federal government and local communities. This new authority calls for (1) DOD to convey property to a local redevelopment authority for the purpose of creating jobs when it is not practicable to obtain fair market value at the time of the transfer and (2) DOD and the local authorities to negotiate the terms and conditions of the conveyances. In qualifying rural areas, conveyances are at no cost to the communities. This new authority benefits local redevelopment authorities by allowing them to take possession of properties with no initial payment so that they can implement their job creation and economic development plans. The federal government benefits by eliminating the costs of maintaining and protecting idle properties and by generating revenues to help pay for base realignment and closure costs. Several communities are planning to use this new conveyance mechanism to obtain property for economic development. Two economic development conveyance agreements—one at Norton Air Force Base, California, and another at Sacramento Army Depot, California—have been successfully negotiated. For a 641-acre parcel at Norton Air Force Base, the local redevelopment authority and the Air Force have agreed that the authority will pay the government 40 percent of gross lease revenues and 100 percent of gross land sales revenues up to a total of $52 million, the estimated fair market value of the property.
If the $52 million has not been paid in full at the end of 15 years, the local redevelopment authority is obligated to pay the Air Force the balance. The local redevelopment authority is negotiating or has entered into seven leases that it projects will result in about 2,250 new jobs by next year. (See p. 83.) At the Sacramento Army Depot, the city of Sacramento has acquired 371 acres of the 487-acre depot from the Army. Under the terms of the economic development conveyance agreement, the Army will be paid $7.2 million either at the end of 10 years or when the property is sold by the city, whichever is sooner. The city has negotiated a lease with Packard Bell that is projected to create 2,500 to 3,000 jobs, nearly offsetting the 3,200 lost from the depot’s closure. (See p. 100.) Successful conversion of military bases to civilian uses involves various parties reaching a consensus on realistic reuse plans. But before the plans can be implemented, DOD must complete the necessary environmental cleanup actions. In numerous communities, the failure to reach a consensus on reuse issues has delayed the development of acceptable reuse plans. At George Air Force Base, California, reuse was delayed about 2 years while lawsuits were settled between the city of Adelanto and the Victor Valley Economic Development Authority over which jurisdiction should have the reuse authority. (See p. 58.) At Tustin Marine Corps Air Station, California, homeless assistance groups are requesting about 400 family housing units and other buildings. The local reuse authority believes that 100 family housing units and some single-residence, multiple-unit buildings would provide a balanced living environment and that the request for additional facilities conflicts with other aspects of its reuse plan. At its request, the local reuse authority was granted a delay in DOD’s disposal process to give it more time to negotiate with the homeless assistance groups.
Negotiations continue between the two groups to reach a consensus. (See p. 102.) At Puget Sound Naval Station (Sand Point), Washington, the city of Seattle and the Muckleshoot Indian Tribe are promoting competing reuse plans. The city plans to use the property for housing, parks and recreation, and educational activities. The tribe plans to use the property for economic development and educational activities. As long as 2 years ago, the Navy asked both parties to work on a joint reuse plan. However, no consensus on reuse has been reached by the two parties. DOD’s disposal decisions on the property are pending. (See p. 93.) Early efforts likewise indicate that even after a consensus is achieved, conversions are unlikely to prove successful if the resulting plans incorporate unrealistic reuse expectations. Some base conversions involve reuse expectations that may be unrealistic given their rural or relatively unpopulated geographic locations. Early experiences suggest that bases with airfields in remote locations pursuing reuse plans involving expanded airport operations are most prone to these types of expectations. Reuse plans for the airfields at Wurtsmith Air Force Base, Michigan; Eaker Air Force Base, Arkansas; and Loring Air Force Base, Maine, have been largely unsuccessful because the new tenants attracted are not capable of generating enough revenue to support the costs of airport operations. Environmental cleanup requirements delay the implementation of reuse plans. In February 1995, we reported that the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 prohibits transferring property to nonfederal ownership until all necessary environmental cleanup actions are taken. However, much of the property is in the early stages of cleanup. Cleanup progress has been limited because the study and evaluation process is lengthy and complex and because cleanup actions, with existing technology, take time.
The National Defense Authorization Act for Fiscal Year 1994 allowed long-term leases of property prior to cleanup, but few had been signed as of January 1995. Federal agencies have provided about $368 million to the 37 selected BRAC 1988 and 1991 communities to assist with the conversion of military bases to civilian reuse. Agencies have awarded grants for such purposes as reuse planning, airport planning, and job training, as well as for infrastructure improvements and community economic development. (See app. V for a summary of the federal assistance provided to each community.) The Federal Aviation Administration has awarded the most assistance, providing $151 million to assist with converting military airfields to civilian use. DOD’s Office of Economic Adjustment has awarded $85 million to help communities plan the reuse of closed BRAC 1988 and 1991 bases. The Department of Commerce’s Economic Development Administration has awarded $85 million to assist communities with infrastructure improvements, building demolition, and revolving loan funds. The Department of Labor has awarded $46 million to help communities retrain workers adversely affected by closures. We updated information that we had obtained from 37 installations closed by the 1988 and 1991 Base Closure Commissions. These 37 bases contain 190,000 of the 250,000 acres designated for closure by the 1988 and 1991 rounds, or about 76 percent of the total. To gather the most recent reuse information and to identify any changes since our earlier report, we interviewed base transition coordinators, community representatives, and DOD officials. We obtained up-to-date federal assistance information from the Federal Aviation Administration, the Economic Development Administration, the Department of Labor, and the Office of Economic Adjustment to determine the amount and type of assistance the federal government provided to the BRAC 1988 and 1991 base closure communities.
For each base, the profiles provide (1) a description of size and location; (2) important milestone dates; (3) a reuse plan summary and a golf course reuse plan; (4) the status of reuse implementation; (5) jobs lost and created; (6) federal assistance; and (7) environmental cleanup status. The information collected represents the status of reuse planning and actions as of June 1995. We did not obtain written agency comments. However, we discussed the report’s contents with DOD officials, and their comments have been incorporated where appropriate. Our review was performed in accordance with generally accepted government auditing standards between October 1994 and June 1995. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Directors of the Defense Logistics Agency and the Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix VI.

Davisville Naval Construction Battalion Center
Long Beach Naval Station/Naval Hospital
Myrtle Beach Air Force Base
Philadelphia Naval Station/Naval Hospital/Naval Shipyard
Puget Sound Naval Station (Sand Point)
Chanute Air Force Base, Ill. 8 acres
Fort Benjamin Harrison, Ind. 100 acres (BRAC recommendation)
Fort Ord, Calif.
Lexington Army Depot, Ky.
Loring Air Force Base, Maine
Lowry Air Force Base, Colo. 108 acres, also houses Air Reserve Personnel Center (BRAC recommendation)
Norton Air Force Base, Calif. 34 acres
Fort Wingate, N. Mex.
Fort Ord, Calif. 740 acres of housing and support buildings to support other nearby military bases (BRAC recommendation)
Fort Sheridan, Ill.
15 acres containing Army cemetery (BRAC recommendation)
Davisville Naval Construction Battalion Center, R.I. 380 acres (Camp Fogarty)
Fort Benjamin Harrison, Ind. 144 acres
Fort Devens, Mass. 5,177 acres (BRAC recommendation)
Fort Ord, Calif.
Fort Sheridan, Ill. 104 acres (BRAC recommendation)
Richards-Gebaur Air Reserve Station, Mo.
Rickenbacker Air Guard Base, Ohio
Sacramento Army Depot, Calif. 61 acres (BRAC recommendation)
Tustin Marine Corps Air Station, Calif. 10 acres (also for Air National Guard and Coast Guard)
Williams Air Force Base, Ariz.
Fort Sheridan, Ill.
Long Beach Naval Station, Calif. 592 acres to shipyard (BRAC recommendation)
Philadelphia Naval Shipyard, Pa. 550 acres to be preserved by Navy for possible use in future (BRAC recommendation)
Richards-Gebaur Air Reserve Station, Mo.
Warminster Naval Air Warfare Center, Pa.
Sacramento Army Depot, Calif. 19 acres (BRAC recommendation)
Lowry Air Force Base, Colo. 7 acres (BRAC recommendation)
Moffett Naval Air Station, Calif. 130 acres of housing to support nearby military base (BRAC recommendation)
Norton Air Force Base, Calif. 78 acres of housing to support nearby military base (BRAC recommendation)
Bergstrom Air Force Base, Tex. 330 acres (BRAC recommendation)
Grissom Air Force Base, Ind. 1,398 acres (BRAC recommendation)
Pease Air Force Base, N.H. 230 acres (BRAC recommendation)
Rickenbacker Air Guard Base, Ohio 203 acres (BRAC recommendation)
Bureau of Land Management
Fort Ord, Calif. 15,009 acres (including 8,009 acres of unexploded ordnance)
Fort Wingate, N. Mex. 8,812 acres returned to public domain (legislative requirement)
Fort Devens, Mass.
Jefferson Proving Ground, Ind. 47,500 acres for wildlife refuge (contains unexploded ordnance)
Loring Air Force Base, Maine 6,000 acres for wildlife refuge
Pease Air Force Base, N.H. 1,095 acres for wildlife
Puget Sound Naval Station (Sand Point), Wash.
Woodbridge Army Research Facility, Va.
580 acres for wildlife refuge (legislative requirement)
Wurtsmith Air Force Base, Mich.
Presidio of San Francisco, Calif. 1,480 acres (legislative requirement)
Philadelphia Naval Station, Pa.
Puget Sound Naval Station (Sand Point), Wash.
Williams Air Force Base, Ariz.
Department of Agriculture (Forest Service)
Department of Justice (Bureau of Prisons)
Castle Air Force Base, Calif. 659 acres for prison
Fort Devens, Mass.
George Air Force Base, Calif.
Department of Health and Human Services (Public Health Service)
Davisville Naval Construction Battalion Center, R.I.
Department of Labor (Employment and Training Administration)
Fort Devens, Mass.
Long Beach Naval Station, Calif.
Department of Transportation (Federal Aviation Administration)
Moffett Naval Air Station, Calif. 1,440 acres (BRAC recommendation)

Base description: The laboratory is located on 37 acres in Watertown on the Charles River, west of Boston. Its mission has been research and development of materials and manufacturing technology testing. Closure of this industrial facility, used by the Army since 1816, avoids major renovation costs. Date of closure recommendation: 1988. Estimated date of military mission termination: September 1995. Estimated date of base closure: September 1995. Summary of reuse plan: The community reuse plan calls for 30 acres to be developed for industrial, commercial, and residential use. The remaining 7 acres, comprising the commander’s mansion and grounds, are to be a public benefit conveyance through the National Park Service for a park and historic monument. The mixed-use plan emphasizes preserving the integrity of historic buildings and landscapes and providing greater public access to the riverfront. A homeless provider expressed interest in some base property, but no application was filed. Golf course: None. Implementation status: The local reuse authority will likely request that the 30 acres planned for development be conveyed through an economic development transfer.
Local officials believe an economic development transfer will give the local authority greater assurance that the property is developed in accordance with the reuse plan than if the Army sells the property directly to a private developer. There is some question whether the laboratory meets one of the criteria for such a conveyance—adverse economic impact of the closure on the region—since it is small and located in a large metropolitan area. However, the plan does emphasize the job-creation criterion for economic development transfers by calling for the creation of 1,500 new jobs. Civilian jobs lost due to closure: 540. Civilian jobs created as of 3/31/95: Base not yet closed. The Economic Development Administration grant to the city of Watertown was to provide technical assistance to determine the most practical reuse for the facilities and to do a market feasibility study. National Priorities List site: Yes. Contaminants: Radionuclides, heavy metals, petroleum, oil, solvents, pesticides, and polychlorinated biphenyls. Estimated cleanup cost: $110 million. Cleanup at the facility is moving ahead gradually, with the radiological cleanup mostly complete. Estimated date cleanup complete or remedy in place: December 1997. Base description: Bergstrom is located on 3,216 acres on the southeast outskirts of Austin. The city bought the land for the government in 1941, retaining an equitable interest. Following its activation in 1942, Bergstrom was the home of troop carrier units. From 1966 to 1992, it was under the Tactical Air Command. Base closure legislation specified that the base would be turned over to the city. Date of closure recommendation: 1991. Date of military mission termination: September 1992. Date of base closure: September 1993. Summary of reuse plan: The city of Austin passed a referendum in May 1993 to support establishment of a new municipal airport, and it has decided that Bergstrom will be used for that purpose. 
Approximately 2,562 acres will revert to the city. This property, along with an additional 324 acres conveyed to the city upon closure, will be used for the new airport. The Air Force will keep 330 acres as a cantonment area for the Reserves. The conveyance to the city will include the golf course and other property that can be leased to help support airport operations. The city plans to move 60 to 70 of the base’s 719 housing units downtown, where they are to be sold to low-income home buyers. The city plans to demolish most of the rest of the units to build a new runway. Golf course: The golf course is being conveyed to the city to help support airport operations. Implementation status: The transfer of property to the city is being delayed until the base cleanup is complete. Meanwhile, DOD has entered into a long-term $1 lease with the city for the base. While DOD is not getting revenue from the lease, it is saving on operation and maintenance funds since the city has assumed responsibility for base maintenance, which, according to the site manager, averages about $9 million a year. The target date for opening the airport is November 1998. Civilian jobs lost due to closure: 942. Civilian jobs created as of 3/31/95: 0. The Federal Aviation Administration grants are for demolition of existing structures, supplemental environmental studies, and construction of new airport facilities. National Priorities List site: No. Contaminants: Domestic solid wastes, pesticides, paints, paint containers, incineration wastes, construction debris, petroleum/oil/lubricants, low-level radioactive waste, synthetic oils, oil/water separator wastes, silver, soaps, degreasers, air filters, battery acids, asphalt, and lead. Estimated cleanup cost: $53.2 million. Estimated date cleanup complete or remedy in place: December 1999. Base description: Cameron Station consists of 165 acres of administrative and warehouse space as well as park land in Alexandria.
The park land includes a 6-acre lake. The government first purchased the land at the start of World War II for use as a general depot. It is a subinstallation of Fort Myer. Cameron Station is one of the few bases on the closure list that DOD considers to have high market value, but asbestos removal, demolition, and infrastructure costs affect the projected revenues. Date of closure recommendation: 1988. Estimated date of military mission termination: September 1995. Estimated date of base closure: September 1995. Summary of reuse plan: The plan calls for about 64 acres, including the lake and its perimeter, to be a public benefit transfer through the Department of the Interior to the city for park and recreation and easements. Two homeless assistance providers are to receive about 8 acres of the property for an 80-bed shelter and a food redistribution center. The remaining 93 acres are to be sold to a private developer who will likely demolish the buildings and construct residential, commercial, and retail facilities. Cameron Station has no housing units. Golf course: None. Implementation status: The 93 acres for development were advertised for bids in January 1995, and the winning bid of $33.2 million was awarded in May 1995. Property transfer is scheduled for May 1996 if the environmental clearances have been completed by that time. Civilian jobs lost due to closure: 4,355. Civilian jobs created as of 3/31/95: Base not yet closed. National Priorities List site: No. Contaminants: Volatile organic compounds, heavy metals, petroleum products, polychlorinated biphenyls, pesticides, and herbicides. Estimated cleanup cost: $7 million. Cleaning up groundwater contamination could take 30 years, but base officials anticipate that the property can be sold once remediation measures are in place. Estimated date cleanup complete or remedy in place: September 1995.
Base description: Castle is located on 2,777 acres in the agricultural San Joaquin Valley, 6 miles from the city of Merced and 100 miles southeast of Sacramento. First activated in December 1941 to provide flight training, its primary mission since the 1950s has been B-52 and KC-135 crew training. Date of closure recommendation: 1991. Date of military mission termination: October 1994. Estimated date of base closure: September 1995. Summary of reuse plan: The Federal Bureau of Prisons will receive 659 acres for prison construction. The Bureau will preserve a portion of this acreage, containing seasonal wetlands and endangered species, as a prison buffer. The plan calls for 1,581 acres to be an airport public benefit transfer. The local reuse authority hopes that attracting aviation-related businesses will be a stimulus to economic development for the area. The Federal Aviation Administration will get about 1 acre of property in conjunction with the airport. Additionally, the plan calls for a public benefit transfer of 132 acres for public school and community college programs, 18 acres for parks and recreation, and 13 acres for health facilities. In October 1994, the Department of Health and Human Services approved homeless assistance providers’ applications for about 8 acres of property, including 8 family housing units. The plan calls for the remaining 365 acres to be sold based on the fair market value. This acreage includes 188 acres of residential areas, which may be used for a senior citizens cooperative and starter homes for first-time home buyers. Golf course: None. Implementation status: Implementation of reuse plans, including the design and construction of the federal prison, has been delayed due to difficulties related to air quality conformity, environmental cleanup, infrastructure upgrading, and leasing. The property disposition plans were approved in January 1995.
Approval of the environmental impact statement was delayed about 4 months because of air quality issues. The Navy’s plans to expand operations at nearby Lemoore Naval Air Station raised concerns about air emissions from future development and aircraft traffic at Castle. The local utility company determined that the base gas distribution system should be abandoned. The local reuse authority is negotiating with this company and the Bureau of Prisons to install a new gas line to the prison site to provide gas service to tenants that may be attracted to the base in the interim. Questions concerning upgrading or replacing other aging base utility systems are also being addressed. The local authority at Castle has been having difficulty attracting businesses that will support airport operations. Castle is competing with other closing airfields for a limited number of potential aviation-related businesses. Civilian jobs lost due to closure: 1,149. Civilian jobs created as of 3/31/95: Base not yet closed. The Economic Development Administration grants included $3.5 million to the city of Atwater to connect the base sewer system to the city’s system and $1 million to Merced County to establish a revolving loan fund to be used to induce businesses to locate at Castle by providing a source of financing. The Federal Aviation Administration grants included $115,000 for an airport feasibility study and master plan and $2,028,000 for airport facilities and equipment. National Priorities List site: Yes. Contaminants: Spent solvents, fuels, waste oils, pesticides, cyanide, and cadmium. Cleanup efforts have been hampered by delays in release of funds. Castle has ground water contamination from an underground plume of trichloroethylene and other volatile organic compounds. Estimated cleanup cost: $146 million. Estimated date cleanup complete or remedy in place: October 1996. 
Base description: Chanute is located on 2,132 acres adjacent to the city of Rantoul, which has annexed the base property. The base was constructed in 1917 and used initially for pilot training and as a storage depot for aircraft engines and paint. Since World War II, it has served as a training installation for aerospace and weapon system support personnel. Date of closure recommendation: 1988. Date of military mission termination: July 1993. Date of base closure: September 1993. Summary of reuse plan: The plan primarily involves developing a civilian airport and attracting aviation-related businesses, as well as other types of economic development. A no-cost airport public benefit transfer of 1,181 acres is planned once cleanup is completed. DOD will retain 8 acres for a Defense Finance and Accounting center. Additionally, 147 acres will be transferred to the local community for park and recreation use and 62 acres to the University of Illinois for a research facility. The remaining 734 acres, including the golf course and housing areas, will be sold once cleanup is completed. Golf course: The golf course was sold in March 1993 to the highest bidder for $711,502, but the deed transfer has been delayed due to questions involving environmental cleanup. Meanwhile, the purchaser is operating the course on a no-cost prevention and maintenance lease. Implementation status: While environmental cleanup is underway, most of the base property is being leased. Property sales have been negotiated for some parcels, but deeds cannot be transferred until the parcels are cleaned up or remediation is satisfactorily in place. Development has also been hampered by utility system issues, such as the high cost to tenants for unmetered service from the base’s steam heat system. Despite such difficulties, the community has successfully attracted businesses that have created jobs. A base official reported that about 78 businesses have located at Chanute thus far.
Since development cannot be financed on short-term leases, the city is negotiating 55- and 99-year leases, which can be converted into deed transfers when cleanup is completed. The city has also used an Economic Development Administration grant to finance building renovation and asbestos removal, and one business is paying back the renovation cost through increased rent. Civilian jobs lost due to closure: 1,035. Civilian jobs created as of 3/31/95: 1,002. Economic Development Administration grants to Rantoul provided $1 million to establish a revolving loan fund to assist businesses locating at Chanute, $400,000 for planning, and $1.1 million for a road improvement project to improve traffic access to base facilities. Federal Aviation Administration grants included $194,930 for planning, an environmental audit, and a utility survey and $742,900 for resurfacing a runway. National Priorities List site: No. Contaminants: Household and industrial waste, spent solvents, fuels, and waste oils. Estimated cleanup cost: $43.5 million. Despite repeated environmental studies and surveys, the Environmental Protection Agency has determined that more testing will be needed to determine the extent of groundwater contamination and identify remediation measures. Test wells will be drilled off the base to determine whether the contamination is occurring naturally or the result of base operations. Estimated date cleanup complete or remedy in place: September 1997. Base description: This 3,757-acre base is located 5 miles east of Beeville in southern Texas, about 60 miles northwest of Corpus Christi. The base included the main air station, a 96-acre housing tract adjacent to the town, and an auxiliary airfield in Goliad County 30 miles away. Date of closure recommendation: 1991. Date of military mission termination: October 1992. Date of base closure: February 1993. 
Summary of reuse plan: Under the plan, 96 acres of housing were sold to the local reuse authority, and the state received a 285-acre public benefit transfer for a state prison. Local authorities requested the remaining 3,376 acres, including the auxiliary field, as economic development conveyances. While the plan calls for using the airfield as an airport, local officials are requesting an economic development conveyance rather than an airport public benefit conveyance because they believe that an economic development conveyance will allow them more latitude in their future actions than the more restrictive airport conveyance would. Golf course: The property containing the golf course is being used to construct a state prison. Implementation status: All the property has been leased, sold, or transferred, except for three sites that have been retained by the Navy until cleanup is complete. The state prison facilities are in operation, resulting in an increase in jobs for the area. In addition, according to a base closure official, the local authority has eight or nine subleases with small businesses. In a letter to the Navy, we raised questions concerning the propriety of the negotiated sale of 396 family housing units for $168,000, which is $424 a unit, to the local authority. The units are being rented for $400 to $650 per month each. Civilian jobs lost due to closure: 914. Civilian jobs created as of 3/31/95: 1,520. The Economic Development Administration grant to the Beeville/Bee County Economic Development Authority provided funds to improve the wastewater treatment facility, roads, and housing areas. The Federal Aviation Administration grant was for developing an airport master plan. National Priorities List site: No. Contaminants: Acids, heavy metals, paints, polychlorinated biphenyls, petroleum fuels and hydrocarbons, photographic chemicals, and solvents. Estimated cleanup cost: $5.4 million. Estimated date cleanup complete or remedy in place: June 1995. 
Base description: The center is located on 1,280 acres on the shoreline of Narragansett Bay in North Kingstown. Between 1939 and 1942, the Navy constructed a naval air station and pier in the area. In 1974, the Navy declared the air station surplus, and operations at the center were greatly reduced. In response, the state established the Port Authority and Economic Development Corporation to develop the area as a business and industrial park, which did not meet initial expectations. Date of closure recommendation: 1991. Date of military mission termination: March 1994. Date of base closure: April 1994. Summary of reuse plan: The plan calls for 380 acres to be retained by DOD for the Army Reserves and 10 acres to be retained by the Public Health Service. The Port Authority has requested an economic development transfer of 512 acres. However, the Department of the Interior, in June 1994, requested 35 of the 512 acres on behalf of the Narragansett Indian tribe. The outcome of this request is unclear even though the federal screening process for the base was completed in May 1993. The Calf Pasture Point and Allen’s Harbor shoreline will be part of a 289-acre park and recreation public benefit transfer, which will go to North Kingstown, the tribe, or a partnership of both. Included in this transfer will be the gym and the yacht club, which the town will receive. Use of the remaining 89 acres, which include open space and wetlands, is undetermined. Golf course: None. Implementation status: Although the Narragansett Indian tribe has a representative on the local reuse committee, the committee opposes the tribe’s request to obtain sovereignty over the property it is requesting. The community wants to maintain zoning and land use jurisdiction and fears that the tribe will establish a casino there as the tribe is attempting to do on its reservation 25 miles away.
Base closure officials are seeking a clarification of the rights and priorities of Native Americans in the base closure property screening process. Property disposition is also awaiting the completion of the environmental impact statement and the base cleanup plan. The community is urging the Navy to provide additional assistance to demolish 160 to 170 unwanted buildings. Thus far, the Navy has agreed to demolish 17 buildings it has determined to be structurally unsafe. Civilian jobs lost due to closure: 125. Civilian jobs created as of 3/31/95: 29. National Priorities List site: Yes. Contaminants: Heavy metals, polychlorinated biphenyls, pesticides, petroleum-based hydrocarbons, and volatile organic compounds. Estimated cleanup cost: $37.9 million. Estimated date cleanup complete or remedy in place: May 1998. Base description: Eaker is located on 3,286 acres, with portions of the base lying within the towns of Blytheville and Gosnell, about 68 miles northwest of Memphis, Tennessee. The base is in an agricultural area in the Mississippi River floodplain, 11 miles west of the river. It was activated as an Army airfield in 1942, serving as an advanced flying school. It was deactivated in 1945, and control of the land was transferred to the city of Blytheville. It was reactivated in 1955 as an Air Force base and was used for Strategic Air Command refueling tankers and jet fighter trainers. Date of closure recommendation: 1991. Date of military mission termination: April 1992. Date of base closure: December 1992. Summary of reuse plan: The plan centers around developing a civilian airport and attracting aviation-related businesses to support its operations. The Air Force is conveying about 1,690 acres of base property for airport-related activities, including 192 acres that reverted to the city of Blytheville at closure. The plan also includes a public benefit transfer of 484 acres for park and recreation use, which include some archaeological sites. 
The Presbytery of Memphis is interested in acquiring through an educational public benefit conveyance about 65 acres for an educational program to aid underachieving students. The redevelopment authority will likely receive 1,044 acres through an economic development conveyance at no cost since the base is in a rural area. The Presbytery is interested in using about 235 of the 1,044 acres that include base housing, retail exchange and commissary buildings, and the hospital for a retirement community and convention center. A chapel on 3 acres is to be sold.
Golf course: The golf course is currently being leased for an annual fee of $19,000 plus maintenance. If the local authority and the Air Force agree on an economic development conveyance for the remaining base property, the course is to be included. Otherwise, the Air Force would like to sell the course.
Implementation status: Questions remain about the viability of establishing a civilian airport and attracting sufficient aviation-related businesses to support it in a rural area. Nevertheless, the local airport authority is negotiating a long-term lease for about 1,690 acres of airport facilities. The local authority hopes the long-term lease will make locating at Eaker more attractive to potential business tenants. The Air Force continues to cover caretaker and maintenance costs for those portions of the base not under lease, but it would like to terminate its caretaker operations by 1997.
Civilian jobs lost due to closure: 792.
Civilian jobs created as of 3/31/95: 106 (jobs related to caretaker operations).
The Economic Development Administration grant to the Blytheville-Gosnell Regional Airport Authority provided funds to repair the runway, taxiway, and ramps; to install instrument landing equipment; and to upgrade the airfield lighting system. The Federal Aviation Administration grant was for developing an airport master plan.
National Priorities List site: No.
Contaminants: Household and industrial waste, spent solvents, fuels, waste oil, paints, pesticides, chromic acid, paint stripper, medical wastes, lead acid, and nickel/cadmium batteries.
Estimated cleanup cost: $47 million.
Estimated date cleanup complete or remedy in place: December 2000.

Base description: England is located on 2,282 acres about 5 miles west of Alexandria in central Louisiana. Constructed as a municipal airport, the base was first leased to the Army Air Force at the onset of World War II. In 1949, the property was returned to the city, but with the outbreak of hostilities in Korea in 1950, it was acquired by the Air Force. In 1955, the Air Force began constructing permanent facilities at the base.
Date of closure recommendation: 1991.
Date of military mission termination: June 1992.
Date of base closure: December 1992.
Summary of reuse plan: The plan calls for the entire 2,282-acre base to be an airport public benefit transfer to the local England Authority. All profits from revenue-generating properties, including the golf course and family housing, are planned to support airport operations.
Golf course: The golf course is included in the long-term lease and provides revenue generation to the airport.
Implementation status: Local officials are optimistic that England’s aviation-centered reuse plan will be successful, predicting that the authority’s operations at England will be self-sustaining within 10 years. The reuse plan calls for moving air carrier service from a small regional airport nearby to England. The Federal Aviation Administration insisted that it would only support one airport in the area. In July 1994, local officials voted unanimously for moving air carrier service to England. The Federal Aviation Administration has since approved the England plan, and it now supports a public benefit transfer of all the property to support airport operations.
A long-term lease to the England Authority for the base property was signed in March 1995, ending the Air Force’s responsibility for funding about $2 million in operations and maintenance costs. The England Authority has attracted 16 tenants to help support aviation operations at England. Two weeks a month, for 10 months a year, the Joint Readiness Training Center flies wide-bodied planes in and out with military personnel for exercises at nearby Fort Polk. However, this lease only produces five full-time jobs at England. Other tenants at England include (1) a company that refurbishes jet aircraft, which employs 65; (2) a trucking company, which operates a driver training school on base with 58 jobs; (3) an operator for the golf course; (4) the local school district, which leases an elementary school; and (5) a university conducting classes on base. A state hospital will use the base medical facility to expand charity care services.
Civilian jobs lost due to closure: 697.
Civilian jobs created as of 3/31/95: 718.
The Economic Development Administration grants to the England Economic and Industrial Development District were to construct a concrete cargo pad, security fencing, and access control; rehabilitate runways, taxiways, approach lighting, and signage; renovate an air terminal building and a railway spur; and make access road improvements. The Federal Aviation Administration grant was for developing an airport master plan.
National Priorities List site: No.
Contaminants: Household and industrial waste, spent solvents, fuels, waste oil, paints, lead, pesticides, alkali, low-level radioactive waste, chlorine gas, polychlorinated biphenyls, and medical waste.
Estimated cleanup cost: $42.1 million.
Estimated date cleanup complete or remedy in place: December 1999.

Base description: The base is located on 2,501 acres about 12 miles northeast of downtown Indianapolis, near the city of Lawrence.
It has been used periodically as a training ground and an infantry garrison. It was abandoned from 1913 to 1917. In 1947, it was declared surplus, but later that same year it was returned to active status as a permanent military post.
Date of closure recommendation: 1991.
Estimated date of military mission termination: October 1996.
Estimated date of base closure: October 1996.
Summary of reuse plan: DOD will retain 144 acres for use by the Reserves. In addition, 100 acres containing the Defense Finance and Accounting Service facility will be transferred to the General Services Administration. The state will receive 1,550 acres as a public benefit transfer for a state park. Homeless assistance providers will receive 4 acres, including a building with six family housing units and a barracks. The Army plans to sell the 150-acre golf course. The plan calls for the remaining 553 acres, including the Harrison Village housing complex, to be an economic development transfer. The community hopes to attract light industry. Portions of this property have historic preservation and wetlands considerations.
Golf course: The state originally requested that the golf course be included as part of the public benefit transfer for the state park, but the Army has decided to sell it. The state has made an offer for the golf course and the Army is evaluating it.
Implementation status: The community submitted its reuse plan to the Army in December 1994. Although the base will not close until October 1996, most of the property will be available for reuse by October 1995. Base closure officials are hoping to conclude a master lease by that time, which will facilitate the subleasing of properties as they are cleaned up and made available.
The Army and the General Services Administration are coordinating to obtain Office of Management and Budget approval for a no-cost transfer of the Defense Finance and Accounting Service facility (Building #1) from the Army to the General Services Administration. The transfer is expected to take place October 1, 1995.
Civilian jobs lost due to closure: 4,240.
Civilian jobs created as of 3/31/95: Base not yet closed.
The Economic Development Administration grant was to the state of Indiana to plan for economic adjustment associated with the closure of the base.
National Priorities List site: No.
Contaminants: Petroleum products, heavy metals, volatile organic compounds, and pesticides.
Estimated cleanup cost: $17.6 million.
Estimated date cleanup complete or remedy in place: June 1998.

Base description: Fort Devens is located on 9,311 acres near the town of Ayer, about 35 miles northwest of Boston. It was created as a temporary cantonment in 1917 for training soldiers from the New England area. In 1921, it was placed in caretaker status and used for summer National Guard and Reserves training. In 1931, it was declared a permanent installation, and it was used during World War II as a reception center for draftees. In 1946, it reverted to caretaker status, but again it became a reception center during the Korean Conflict. It has remained an active Army facility since that time.
Date of closure recommendation: 1991.
Estimated date of military mission termination: September 1995.
Estimated date of base closure: March 1996.
Summary of reuse plan: About 68 percent of the base will be retained by federal agencies. Under provisions designated by the 1991 BRAC Commission, 5,177 acres will be retained by the Army for facilities and a training area for Reserve components. The Fish and Wildlife Service will receive 890 acres for a wildlife refuge. The Bureau of Prisons will receive 245 acres for a federal prison medical facility.
The Department of Labor will receive 20 acres for a Job Corps Center. Two homeless assistance applications totaling 29 acres have been approved. However, the local community may find alternative means to meet these homeless requests. The remaining 2,950 acres will be an economic development conveyance. A consortium of Indian groups has expressed interest in one parcel for a cultural center and museum, but it has not submitted a formal request.
Golf course: A portion of the golf course and the adjacent hospital property will be used for construction of the federal prison medical facility. Plans call for a reconfiguration of the golf course to reestablish the full 18 holes.
Implementation status: The community approved a final reuse plan in December 1994. A final decision on property disposition by the Army is expected in July 1995. An interim lease with one private company is in place. The Army and the reuse authority are negotiating a master lease/purchase agreement that mirrors the profit-sharing provisions of an economic development conveyance. It calls for property that can be sold to be sold and the remainder to be leased. The local authority would receive 60 percent and the federal government 40 percent of net revenues from subleases and sales.
Civilian jobs lost due to closure: 2,178.
Civilian jobs created as of 3/31/95: Base not yet closed.
The Economic Development Administration grants to the state provided a $750,000 revolving loan fund and $875,000 in technical assistance for businesses locating at the base.
National Priorities List site: Yes.
Contaminants: Volatile organic compounds, heavy metals, petroleum products, polychlorinated biphenyls, pesticides, herbicides, and explosive compounds.
Estimated cleanup cost: $49.4 million.
Estimated date cleanup complete or remedy in place: March 1998.

Base description: Fort Ord consists of 27,725 acres on the Monterey Peninsula by the towns of Seaside and Marina, about 80 miles south of San Francisco.
About 20,000 acres of the base are undeveloped property, which were used for training exercises. Since its opening in 1917, Fort Ord has served as a training and staging facility for infantry troops. From 1947 to 1975, it was a basic training center.
Date of closure recommendation: 1991.
Date of military mission termination: September 1993.
Date of base closure: September 1993.
Summary of reuse plan: The plan calls for DOD to retain 760 acres: 740 acres of housing for military personnel remaining in the area, 12 acres for the Reserves, and 8 acres for the Defense Finance and Accounting Service center. The Bureau of Land Management will receive 15,009 acres, which will be preserved from development, including 8,000 acres contaminated with unexploded ordnance. State, county, and city agencies will receive public benefit transfers of 2,605 acres for parks and recreation, including beaches and sand dunes. California State University and the University of California will receive 2,681 acres as an economic development conveyance to establish university and research facilities. Included in the California State University conveyance are 1,253 family housing units. Other educational institutions will receive public benefit transfers totaling 338 acres for schools. The city of Marina will be given the airport, a public benefit transfer of 750 acres. Homeless assistance providers are to receive 84 acres, including 196 family housing units, 35 single housing units, and other buildings. The Army plans to negotiate a sale of the 404-acre parcel containing two golf courses. The disposition of the remaining 5,094 acres has not been determined, but it will likely include market sales, as well as additional public benefit transfers.
Golf course: The Army’s main interest is that the revenues from the two golf courses continue to support the Morale, Welfare, and Recreation programs for military personnel remaining in the area.
The Army is negotiating an agreement with the city of Seaside under which the two 18-hole golf courses will be operated by the city. The agreement will stipulate shared use by military personnel and the public. Army officials reported that the Army intends to sell the golf courses to the city. Proceeds from the sale would go to support the Morale, Welfare, and Recreation programs. Enabling legislation has been introduced.
Implementation status: The transfer of property has been initiated. In July 1994, the first phase of transfers to two universities took place. The new California State University, Monterey Bay, received an initial 630 acres. The university plans to open classes for an estimated 700 students in the fall of 1995. The University of California, Santa Cruz, also received 949 acres in July 1994 to establish a research center. In November 1994, 5 schools and 93 acres were transferred to the federal sponsor, the Department of Education, for deeding to the Monterey Peninsula Unified School District.
Civilian jobs lost due to closure: 2,835.
Civilian jobs created as of 3/31/95: 92.
The Office of Economic Adjustment provided nearly $2 million in planning grants to help develop and implement the reuse plan. The Office also provided $5 million to the city of Monterey to help establish a center for international trade at Fort Ord in conjunction with the Monterey Institute for International Studies. The center plans to develop the capacity and resources for international marketing of technologies and applications from university research programs being established at Fort Ord. The Economic Development Administration provided $15 million to the new California State University, Monterey Bay, to renovate buildings for educational use and meet seismic and Americans with Disabilities Act requirements. A university official estimated that an additional $140 million would be requested from DOD over the next 10 years to complete renovations.
Monterey County received $1 million to establish a revolving loan fund, and the city of Marina received $900,000 for road, water system, and sewer improvements for an interim commercial development project outside the base gate. In addition, the county and the University of California, Santa Cruz, each received $750,000 for an infrastructure, economic, and job development analysis. The university also received $1.2 million to help establish its Science, Technology, and Policy Center at the base. The Federal Aviation Administration provided $88,200 to the local reuse authority to complete an airport master plan for the reuse of the base airfield and $67,500 for an environmental assessment of airport plans. The Department of Labor provided $800,000 to fund an array of retraining and reemployment services for workers affected by Fort Ord’s closure.
National Priorities List site: Yes.
Contaminants: Petroleum wastes and volatile organic compounds.
Estimated cleanup cost: $156.6 million.
Estimated date cleanup complete or remedy in place: September 1998.

Base description: Fort Sheridan is located on 712 acres of high-value suburban land on the shores of Lake Michigan between Lake Forest and Highland Park, 25 miles north of Chicago. The fort was acquired in 1887, and its major mission initially was cavalry training. More recently, the fort served as headquarters of the Nike missile antiaircraft defense systems in the midwest. Its latest mission was administration and logistical support for Army recruiting and Reserve centers in the midwest.
Date of closure recommendation: 1988.
Date of military mission termination: May 1993.
Date of base closure: May 1993.
Summary of reuse plan: The Army originally proposed exchanging 156 acres at the fort with the Equitable Life Assurance Society for about 7.1 acres of land in Arlington, Virginia, where the Army wanted to build a national Army museum.
The local community supported this plan, but the Secretary of Defense rejected it as inappropriate to the base closure process. The local reuse committee has submitted a new reuse plan to the Army. The Army plans to keep 104 acres for use by the Reserves and the existing 15-acre military cemetery. The Navy acquired approximately 182 acres, consisting of 392 housing units, in January 1994 for $20 million. Three homeless assistance providers were awarded approximately 46 acres, including 106 family housing units and 36 single housing units. The Lake County Forest Preserve District has requested the open space on the shoreline, bluffs, and ravines (about 103 acres) as a public benefit transfer for park and recreation use. The Department of Education has approved two public benefit transfers, totaling 4 acres and including the library and gymnasium, for educational use. The 174-acre golf course will be sold. Disposition of the remaining 84 acres, including the historic district, is undetermined. The reuse plan foresees residential and public use for this property.
Golf course: Originally, the Forest Preserve District offered to purchase the golf course along with the shoreline, bluffs, and ravines for $10 million. At that time, the Army had a request from the Department of Veterans Affairs for some of that property for a national cemetery. Therefore, the Army turned down the offer from the district. When the Veterans Affairs offer fell through, district officials said they could not buy the property because a local bond measure failed to pass. Consequently, the district requested the property through a public benefit transfer. However, the Army notified the district that the golf course will be sold and opened negotiations with the district regarding sale terms.
Implementation status: The Army now must decide on the public benefit transfer requests.
In turn, the reuse committee must decide whether to form a local redevelopment authority and request the developable property through an economic development conveyance or negotiated sale or whether to have the Army sell the property directly to developers.
Civilian jobs lost due to closure: 1,681.
Civilian jobs created as of 3/31/95: 18.
National Priorities List site: No.
Contaminants: Volatile and semivolatile organic compounds, polynuclear aromatic hydrocarbons, thallium, and unexploded ordnance.
Estimated cleanup cost: $26.9 million.
Estimated date cleanup complete or remedy in place: 1997 for surplus property and 1999 for retained Navy/Army property.

Base description: Fort Wingate is located on 21,812 acres in northwest New Mexico. The base is bordered by the Cibola National Forest on the south and is within 10 miles of the city of Gallup to the west, the Navajo Indian Reservation to the north, and the Zuni Indian Reservation to the southwest. Additional Navajo Reservation land lies south of the National Forest. Both tribes consider Fort Wingate to be part of their ancestral lands. The base includes sites considered sacred by the Zunis, including Fenced Up Horse Canyon, site of ancestral Anasazi ruins. The southern portion of the base is also part of the watershed for the Zuni Reservation. The depot is a subinstallation of Tooele Army Depot, and it has been used for ammunition storage. There are more than 700 concrete ammunition storage bunkers. Between 1963 and 1967, the base was used by White Sands Missile Range to fire several Pershing missiles to test the missile’s mobility and accuracy. Most of the property is undeveloped. Before the Army acquired the property, it was public domain land. As such, it reverts to the Department of Interior, Bureau of Land Management, when it is not needed by DOD.
Date of closure recommendation: 1988.
Date of military mission termination: January 1993.
Date of base closure: January 1993.
Summary of reuse plan: DOD wants to retain approximately 13,000 acres for 7 years for use by the Ballistic Missile Defense Office for missile launching activity in conjunction with the White Sands Missile Range. To retain this land, either the Army would not include that portion of the base in its relinquishment notice or the missile defense office would have to lease the land from the Bureau of Land Management. Both the Navajo and Zuni tribes oppose use of Fort Wingate for missile testing, and several federal agencies have expressed environmental and land use concerns. Any property not retained by DOD will revert to the Bureau. Once the Army cleans up the contamination at Fort Wingate, the Bureau will consult with other Department of Interior agencies concerning possible uses for the property. The Department’s Bureau of Indian Affairs has requested the entire base to hold in trust on behalf of the two tribes. The tribes want the land for preservation of sacred sites, watershed protection, economic development, and use for other tribal programs. The city of Gallup opposes the conveyance of Fort Wingate property to the Indians, and it has indicated interest in a portion of the base for economic development. The city has retained an attorney to challenge the requirement that the property be relinquished to Interior when the Army’s need for it ceases.
Golf course: None.
Implementation status: DOD tried to get officials from Gallup, McKinley County, and the two Indian tribes to agree on forming a reuse committee under its base closure rules and guidelines. However, Interior Department and Bureau of Land Management officials maintain that this effort was inappropriate because the property will revert to the Bureau and will be handled under the Bureau’s authorities and rules. The missile defense office completed an environmental impact study with a decision in March 1995 to proceed with the proposed missile program.
Meanwhile, Interior is cooperating with DOD in facilitating a private company’s use of some of the facilities to carry out a contract with the Army to deactivate Army pyrotechnics, which will provide 25 to 30 jobs for this economically depressed area.
Civilian jobs lost due to closure: 90.
Civilian jobs created as of 3/31/95: Not available; property to be retained by federal agencies.
Federal assistance: None.
National Priorities List site: No.
Contaminants: Explosive compounds, polychlorinated biphenyls, pesticides, heavy metals, asbestos, and lead-based paint.
Estimated cleanup cost: $22.5 million.
Estimated date cleanup complete or remedy in place: Unknown.

Base description: George is located on 5,068 acres between the towns of Adelanto and Victorville in the Mojave Desert northeast of Los Angeles. The base was first activated in 1941 as a pilot training location. It was placed on standby status in 1945 and used for aircraft storage. In 1950, it was reopened after hostilities began in Korea. During the Vietnam conflict, the Air Force designated George as one of its major training bases for fighter crews, and it continued as a fighter operations and training base thereafter.
Date of closure recommendation: 1988.
Date of military mission termination: December 1992.
Date of base closure: December 1992.
Summary of reuse plan: Approximately 900 acres are to be transferred to the Bureau of Prisons for a federal prison. About 2,300 acres will be an airport public benefit transfer, and 63 acres will be conveyed under public benefit transfers for schools. Homeless assistance providers will receive 34 acres, including 64 family housing units. Initially, the Air Force designated the remaining acres, including the golf course and over 1,500 family housing units, for negotiated or public sale. However, local authorities are planning to request 1,471 acres of this property as an economic development conveyance.
The Air Force will dispose of the 300-acre golf course at a public sale.
Golf course: The Air Force plans to dispose of the golf course by negotiated or public sale.
Implementation status: Reuse of George was delayed for 2 years due to a jurisdictional dispute over reuse authority between the city of Adelanto and the Victor Valley Economic Development Authority, which was supported by Victorville, Apple Valley, Hesperia, and the county. Another reason was differences in their reuse plans over the proposed size of the airport. The Air Force recognized the Victor Valley authority as the airport authority and leased the 2,300-acre airport to the authority. Adelanto is receiving some public benefit transfers for schools. Lawsuits between Adelanto and the authority were settled in February 1995, and the authority is proceeding with plans to attract tenants and create jobs. Under the new provisions of the Base Closure Community Redevelopment and Homeless Assistance Act of 1994, the community has until September 1995 to incorporate plans for accommodating homeless needs in its reuse plan, which must be completed before the Air Force can consider an economic development conveyance request. A chapel on 2 acres was sold to a local church for $510,000. In addition, the Air Force is transferring the land for the federal prison and negotiating the sale of the 295-acre golf course and a 3-acre parcel containing the credit union.
Civilian jobs lost due to closure: 506.
Civilian jobs created as of 3/31/95: 209.
The Economic Development Administration grants were provided to the Victor Valley authority to improve roads, the water system, the sewer system, and the airport. The Federal Aviation Administration grant was awarded for developing an airport master plan.
National Priorities List site: Yes.
Contaminants: Petroleum/oils/lubricants, volatile organic compounds, and heavy metals.
Estimated cleanup cost: $75.8 million.
Estimated date cleanup complete or remedy in place: December 1997.

Base description: Grissom is located on 2,722 acres in an agricultural area of central Indiana, about 6 miles southwest of Peru and 65 miles north of Indianapolis. The base was established in 1942 as a naval air station and was used as a training site throughout World War II. It was deactivated in 1946 and was reactivated as Bunker Hill Air Force Base in 1955. It is currently home to an Air Reserve wing whose mission is air refueling operations.
Date of closure recommendation: 1991.
Date of military mission termination: July 1993 (active duty mission).
Date of base closure: September 1994.
Summary of reuse plan: According to the plan, the Air Force will retain about 1,398 acres, including the airfield, for the Reserves and will transfer 901 acres as an economic development conveyance. The remaining 423 acres, including the 1,128 family housing units, will be sold via a public sale. A primary goal of the plan is to attract businesses and replace the jobs lost due to the closure.
Golf course: The 9-hole golf course is currently under interim lease to a private operator through the local redevelopment authority. The reuse plan calls for the land to be part of an economic development conveyance and used for the development of light industry.
Implementation status: According to local officials, reuse efforts have been hampered by a lack of specificity in the local reuse plan, delays in property disposition decisions, and delays in negotiating a caretaker agreement and leases. The final property decision has been delayed pending the Air Force’s approval of the proposed size of the Reserve cantonment area. Despite these delays, some actions have been completed. The caretaker agreement has been finalized, and a caretaker account has been established and funded. The lease on the golf course has also been signed and two more leases have been requested.
The Air Force informed local officials that lease processing procedures have been improved and leases can now be processed within 120 days.
Civilian jobs lost due to closure: 807.
Civilian jobs created as of 3/31/95: 28.
The Economic Development Administration grant was awarded to the State of Indiana to plan for mitigating the adverse effects associated with the base’s closure.
National Priorities List site: No.
Contaminants: Household and industrial waste, spent solvents, fuels, waste oil, pesticides, lead, silver, munitions, and asbestos.
Estimated cleanup cost: $25.6 million.
Estimated date cleanup complete or remedy in place: March 1998.

Base description: The base is located on 55,264 acres, mostly forest land, near Madison in southeastern Indiana about 45 miles northeast of Louisville, Kentucky. Over 50,000 acres are contaminated with unexploded ordnance. The facility was constructed in 1941 and has been used over the years to test ammunition and weapon systems. Most of the facility was placed on standby status in 1946, reactivated in 1950, again placed on standby in 1958, and reactivated in 1961.
Date of closure recommendation: 1988.
Date of military mission termination: September 1994.
Estimated date of base closure: September 1995.
Summary of reuse plan: The Army plans to transfer about 47,500 acres to the U.S. Fish and Wildlife Service for preservation as a wildlife refuge for migratory birds. Such an action would eliminate the need to clean up the unexploded ordnance, which could cost between $215 million and $2 billion, depending on the level of cleanup. The three adjoining counties want the remaining land conveyed to them for economic development. However, the International Union of Operating Engineers has proposed purchasing about 5,000 acres of the property for a training center. The union is offering to buy the property and do the environmental remediation, since that would fit into the kind of training it plans for the site.
A plastics manufacturer has indicated interest in the same property. Consequently, the Army is considering a market sale to the highest bidder of the 5,000 acres, conveying the remaining 2,764 acres to the counties for economic development.
Golf course: None.
Implementation status: The Army issued an Invitation to Bid for 4,320 acres of property not contaminated with unexploded ordnance. Meanwhile, the local authority is submitting an economic development conveyance request for the same property. The Army plans to have all the property disposed of by September 30, 1995, when base closure funds for operations and maintenance costs run out. Initially, disposal to non-federal agencies would be through leases. Later, when cleanup requirements were met, the property would be sold. Army officials think that Jefferson Proving Ground will be a significant base closure success story, due to the savings in cleanup costs made possible by the property transfer to the Fish and Wildlife Service and due to the envisioned economic development of the remaining property. However, transfer of the property to the Wildlife Service faces several obstacles. According to a base official, the Wildlife Service is concerned about possible liability should someone enter the property and be injured by the unexploded ordnance, and the Wildlife Service lacks money in its budget to staff and maintain the preserve. Meanwhile, the Air National Guard has asked the Air Force to request the property for an expanded bombing and strafing area. Furthermore, the Environmental Protection Agency has not agreed that no environmental remediation is needed in the proposed wildlife refuge. The Wildlife Service opposes remediation because the agency does not want the habitat disturbed. The Environmental Protection Agency, however, is considering whether to place the base on the National Priorities List, which would require environmental remediation at the base.
The Army maintains that the unexploded ordnance is a safety problem, not a hazardous waste problem. A joint committee is studying the issue. The Environmental Protection Agency will likely require the Army to drill some wells to monitor subsurface water, as well as surface water, for years to come. Civilian jobs lost due to closure: 387. Civilian jobs created as of 3/31/95: Base not yet closed. The state of Indiana received a $50,000 Economic Development Administration grant to plan for economic adjustment associated with closure of the base. The Madison Chamber of Commerce received an Economic Development Administration grant of $850,000 to construct a new building in Madison for business incubator and technical training programs. Former base employees will have priority in starting new businesses at the site. National Priorities List site: No. Contaminants: Solvents, petroleum products, heavy metals, depleted uranium, and unexploded ordnance. Estimated cleanup cost: $10.9 million (assuming unexploded ordnance will not have to be cleaned up). Estimated date cleanup complete or remedy in place: May 1997. Base description: The depot is located on 780 acres, 10 miles east of Lexington. It has 1.8 million square feet of covered storage space. It was established in 1941, and it has been used to store radar and communications equipment. Depot properties, including buildings and the golf course, have deteriorated since the closure decision was announced and the Army curtailed its maintenance. Date of closure recommendation: 1988. Estimated date of military mission termination: September 1995. Estimated date of base closure: September 1995. Summary of reuse plan: The Army is retaining one building located on 4 acres of land for a Defense Finance and Accounting center. The state of Kentucky has signed a 7-year lease for the rest of the property, and it is covering the cost of renovation and repair instead of lease payments to the Army.
The state plans to request 210 acres as a public benefit transfer for park and recreational use, and it is requesting that the remaining 566 acres of the property be conveyed to it through an economic development conveyance. Golf course: Deterioration has made the 9-hole golf course unusable as a golf course. In determining the course’s fair market value, appraisers categorized it as unimproved ground. The state plans to request the golf course as part of the public benefit transfer for park and recreation purposes. Implementation status: The state appropriated $1.8 million to rehabilitate deteriorating buildings and cover operating costs. Current operations by a military contractor at the base are providing about 500 jobs. The state also has a sublease with the Kentucky National Guard for training-related use of several buildings and some base land. Under a DOD contract, the state is using some buildings for processing military equipment being brought back from Europe, and it is negotiating to sublease additional space to several other organizations. Civilian jobs lost due to closure: 1,131. Civilian jobs created as of 3/31/95: Base not yet closed. National Priorities List site: No. Contaminants: Volatile and semivolatile organic compounds, heavy metals, polychlorinated biphenyls, pesticides, and herbicides. Estimated cleanup cost: $25 million. Base officials are awaiting Environmental Protection Agency and state approval of remediation plans. Estimated date cleanup complete or remedy in place: To be determined. Base description: The naval station and hospital, as well as several housing areas and a golf course, are located on 932 acres at various sites in the Long Beach area. Portions of the property lie within the Long Beach city limits, while other portions are in nearby Los Angeles County towns. The Navy began acquiring property for the station in 1935.
In 1946, the station was chartered to provide welfare, recreation, and social facilities, in addition to maintaining facilities for the operation and berthing of tugboats, barges, and similar vessels. In 1964, the U.S. government purchased the land for the hospital from the city of Long Beach, and the hospital was commissioned in 1967. Date of closure recommendation: 1991. Date of military mission termination: Hospital—March 1994 and Naval Station—September 1994. Date of base closure: Hospital—March 1994 and Naval Station—September 1994. Summary of reuse plan: The Navy plans to transfer 592 acres, including the main station, the golf course, and over 1,000 family housing units, to the naval shipyard. It also plans to transfer 17 acres to the Department of Labor for a Job Corps training center. The Long Beach school district received 62 acres as an educational public benefit transfer in September 1994. California State University, Long Beach, requested an economic development conveyance of 30 acres, which include 294 family housing units. The Navy expects that 148 acres will be conveyed for future expansion of Long Beach and Los Angeles port facilities and transportation corridors to the ports. Plans call for at least 26 acres to be used for homeless assistance, including 204 family housing units. Disposition of the remaining 57 acres, including the naval hospital, is undetermined. Additional acres are being considered for homeless assistance groups under the Base Closure Community Redevelopment and Homeless Assistance Act of 1994. DOD has recommended to the 1995 Base Realignment and Closure Commission that the naval shipyard be closed. If this recommendation is sustained, the property being transferred to the shipyard will be disposed of as part of the shipyard closure process. Golf course: The golf course property is owned by the Army and leased to the Navy through an indefinite lease. 
The Navy plans to retain the golf course, which is located about 10 miles from the naval station and 3 miles from the naval hospital, transferring the course to the naval shipyard. Implementation status: A reuse plan for the Los Angeles portion of the property has not yet been completed. Reuse disputes between Long Beach and nearby communities have led to delays in property disposition decisions. The Long Beach plan calls for the hospital to be converted into a retail center, while an opposing plan supported by nearby communities calls for it to become a Los Angeles County Office of Education administrative building. DOD’s Office of Economic Adjustment hired a consultant to do an independent study that the Navy will use, along with the environmental impact study, to determine the preferred use for the property. The Long Beach plan calls for the Navy to sell the property for about $20 million, while the other plan involves an educational public benefit transfer. A draft environmental impact statement for the hospital was published in February 1995. The Navy expects to make its property disposition decisions in July 1995. Homeless assistance plans have not been settled. In response to a community challenge, the Department of Housing and Urban Development reversed its position and declared that 66 of the 140 housing units at the Taper Avenue housing site designated for a homeless assistance provider are unsuitable for that purpose because they are located too close to some aviation fuel tanks. A community group also asked the Department of Health and Human Services to reexamine the provider’s suitability to undertake such a project. Another homeless assistance provider that was approved to receive a portion of the Savannah/Cabrillo housing lost financial backing and was therefore disqualified to receive it. Both Los Angeles and Long Beach are developing new plans to address homeless needs. 
The city of Long Beach is still committed to using 26 acres of the property for homeless assistance, possibly through temporary leasing of some facilities. Civilian jobs lost due to closure: 417. Civilian jobs created as of 3/31/95: Not available; most of the property is being retained for naval shipyard. National Priorities List site: No. Contaminants: Petroleum hydrocarbons, paints, solvents, asbestos, trichloroethylene, and battery acid. Estimated cleanup cost: $125.3 million. Estimated date cleanup complete or remedy in place: To be determined. Base description: Loring is located on 9,482 acres and is 5 miles from the Canadian border in Limestone, Maine, near the town of Caribou. Along with the approximately 8,700-acre main base, Loring has several off-site parcels in nearby towns, which include housing tracts. Prior to closure, Loring was home to B-52 bombers and KC-135 tankers. Date of closure recommendation: 1991. Date of military mission termination: March 1994. Date of base closure: September 1994. Summary of reuse plan: DOD will retain 400 acres for use by the National Guard and 14 acres for a Defense Finance and Accounting Service center. The Fish and Wildlife Service will receive 6,000 acres for a wildlife preserve. The Bureau of Indian Affairs will receive about 600 acres of property at the main base and about 60 housing units on 20 acres in the nearby town of Presque Isle. This property will be held in trust by the Bureau for reuse by the Aroostook Band of the Micmac Indian Tribe. The Air Force also plans to transfer 50 acres to the Department of Labor for a Job Corps training center and 18 acres through public benefit transfers for several educational programs. The remaining 2,380 acres will likely be disposed of through an economic development conveyance. The initial reuse plan called for the base to be used for an airport and aviation-related enterprises. 
The reuse plan asks the federal government to pay $35 million of the projected $40 million in base conversion costs over 20 years, including the cost of demolition of buildings. In addition, local officials want DOD to cover base caretaker costs for 15 years. Golf course: The 9-hole golf course has been leased to the local authority. The authority plans to request the golf course as part of an economic development conveyance. Implementation status: A joint study by the Federal Aviation Administration and the Maine Department of Transportation concluded that another airport was not needed in the region. The Federal Aviation Administration indicated, however, that it would consider approving plans for an airport at Loring if a market developed for an air cargo operation that needed a heavy, long runway. Meanwhile, Loring is experiencing the same difficulties as other rural bases in attracting aviation-related businesses. Since its closure, the base has been maintained under a 3-year caretaker agreement. Under the agreement, the Air Force covers nearly 100 percent of the caretaker costs for the first year, but the percentage is expected to decline in subsequent years if businesses can be attracted to the base. According to a base official, the Defense Finance and Accounting Service center should be in operation by the summer of 1995, which will provide about 500 jobs within 2 years. The local authority is hopeful the center will act as a catalyst to attract other businesses to the base. Civilian jobs lost due to closure: 1,326. Civilian jobs created as of 3/31/95: 144 (these jobs are related to caretaker operations). The Economic Development Administration awarded $1,590,000 to the city of Fort Fairfield to increase the capacity of the sewage treatment facility and $677,000 to the Northern Maine Development Commission for technical assistance. The Federal Aviation Administration grant was awarded to the local authority for airport facilities and equipment.
Environmental cleanup: National Priorities List site: Yes. Contaminants: Volatile organic compounds, waste fuels, oils, spent solvents, polychlorinated biphenyls, pesticides, and heavy metals. Estimated cleanup cost: $141.9 million. The cleanup of contaminants at Loring is progressing. The Air Force is signing agreements with environmental regulators, and the base cleanup team is facilitating the work. Through interagency cooperation, $10 million was saved by combining the cleanup of two sites. The Environmental Protection Agency granted a waiver to allow marginally contaminated soil that had to be cleaned from a quarry to be used to cap a landfill. However, the base’s inclement weather restricts cleanup work to the summer months, slowing cleanup completion. Estimated date cleanup complete or remedy in place: September 1999. Base description: Lowry is located on 1,866 acres in a suburban area between Denver and Aurora. The base was established in 1937 as an Army Air Corps technical school, and it has been used as a technical training center since that time. In addition, a Defense Finance and Accounting center and the Air Reserve Personnel Center are located on the base. Date of closure recommendation: 1991. Date of military mission termination: April 1994. Date of base closure: September 1994. Summary of reuse plan: The plan calls for mixed-use urban development combining business, training, education, recreation, and residential uses to make maximum use of existing facilities and land. DOD will retain 115 acres for the Defense Finance and Accounting center, an Air Reserve personnel center, and the 21st Space Command Squadron. The Air Force is conveying 220 acres in educational public benefit transfers to a consortium of Colorado colleges and the Denver public school system for educational and job training centers. Initially, homeless assistance providers were approved to receive 47 acres, including 200 family housing units and dormitories.
However, under a plan worked out with the city of Denver and the Department of Housing and Urban Development, the providers will withdraw their requests for some of this property in return for funding to establish homeless facilities at dispersed locations in the Denver metropolitan area. The officials involved believe this plan will better meet the needs of the homeless than would concentrating the facilities at Lowry. In addition, parks and recreation public benefit transfers will total 175 acres. Health-related public benefit transfers totaling 22 acres will be used for such purposes as a blood bank and a research center. An economic development transfer of 711 acres will go to the Lowry Economic Redevelopment Authority. This acreage will increase if homeless providers withdraw some of their requests for base property as expected. Market sales are planned for the remaining 576 acres, including the golf course and residential areas. Golf course: The golf course is under interim lease to the city of Denver. Its sale awaits environmental clearances. A residential landfill adjacent to the golf course may not require appreciable cleanup if its future use is open space or recreational, such as extension of the golf course. Implementation status: Following final decisions on the disposition of base property in August 1994, base closure officials have been proceeding with disposition agreements. The community college educational consortium has signed an interim lease and is conducting 80 courses for 800 students. Several other leases have been signed or are being negotiated. Four of the homeless providers have withdrawn their requests for base housing in return for a contract with the city to provide space elsewhere in the community. Pending final environmental clearances, long-term leases will be used to promote immediate reuse on most parcels. 
In addition, negotiations have begun between the Air Force and the local authority regarding property sales and the economic development conveyance. The economic development conveyance negotiations involve an up-front fair market settlement price in accordance with recent regulations. Civilian jobs lost due to closure: 2,290. Civilian jobs created as of 3/31/95: 104. The Economic Development Administration grant to the cities of Aurora and Denver has provided funds to prepare a work plan for identifying market opportunities for businesses affected by the base closure. National Priorities List site: No. Contaminants: Waste oil, general refuse, fly ash, coal, metals, and fuels. Estimated cleanup cost: $18.8 million. Estimated date cleanup complete or remedy in place: September 1999. Base description: Mather is located on 5,716 acres in the suburbs of Sacramento. The base was first activated in 1918 as a combat pilot training school, then placed on inactive status from 1922 until 1930 and again from 1932 until 1941. More recently, Mather hosted a Strategic Air Command Bombardment Wing and an Air Refueling Group. Date of closure recommendation: 1988. Date of military mission termination: May 1993. Date of base closure: September 1993. Summary of reuse plan: Under the plan, the Air Force will retain the 26-acre hospital and the Army will retain 31 acres for the National Guard. In addition, the Veterans Administration is requesting a 20-acre site to construct a new clinic and nursing home. Public benefit transfers will include 2,883 acres for the airport, 1,462 acres for county parks and recreation, and 95 acres for educational purposes such as a law enforcement training center. In addition, 28 acres are to be transferred to the Sacramento Housing and Redevelopment Agency to provide facilities for the homeless, including 60 family housing units and 200 single housing units. 
The plan calls for the remaining 1,171 acres to be sold, including the 174-acre golf course and 997 acres for commercial, industrial, and residential development. Golf course: The Air Force disposed of the golf course through a negotiated sale to the county for $6 million. Implementation status: The airport transfer was delayed over air quality issues. However, a long-term lease conveyance was signed in March 1995 to begin civilian airport use. Some of Mather’s missions moved to nearby McClellan Air Force Base, and some air emission mitigation measures may be needed to permit civilian aviation activities at Mather. Utility system and infrastructure costs have also posed some difficulties. Local utility companies have been asked to purchase these systems, but they are concerned about the cost of upgrading the systems. The municipal utilities district estimated it would cost between $2.5 million and $3 million to upgrade the electrical distribution system. The negotiated sale of the housing has been abandoned due to contentions over fair market value. Instead, the Air Force will sell the housing publicly. Furthermore, according to a base official, sale of developable parcels of land at Mather will likely be piecemeal, requiring more time and effort. Civilian jobs lost due to closure: 1,012. Civilian jobs created as of 3/31/95: 241. Sacramento received the Economic Development Administration grant to assist with the preparation of an economic development plan and the Federal Aviation Administration grant for an airport reuse feasibility study. National Priorities List site: Yes. Contaminants: Solvents, cleaners, volatile organic compounds, plating waste, and heavy metals. Estimated cleanup cost: $94 million. Estimated date cleanup complete or remedy in place: September 1997. Base description: The station was located on 1,577 acres on San Francisco Bay in Mountain View, near Sunnyvale, 7 miles north of San Jose. 
It was originally commissioned in 1933 as the home base for a Navy dirigible. Its recent mission was to support anti-submarine warfare training and patrol squadrons. The National Aeronautics and Space Administration’s (NASA) Ames Research Center lies adjacent to the Naval Air Station at Moffett. Lockheed Missile and Space Company and other government contractors in the adjacent community also use the airfield. The Onizuka Air Force Station, a satellite tracking and control operation, is also located adjacent to Moffett, but it has no airfield or planes, and it does not use the Moffett runway. The 1991 BRAC Commission recommended that the federal government transfer the entire naval air station directly to NASA. Date of closure recommendation: 1991. Date of military mission termination: July 1994. Date of base closure: July 1994. Summary of reuse plan: The Navy’s plan called for the no-cost transfer of 1,440 acres to NASA and 130 acres of base housing to the Air Force. A 7-acre off-base site of former housing is to be sold for a negotiated price to the city of Sunnyvale, which plans to use the site for developing affordable housing. NASA plans for airfield facilities to be used by various NASA tenants, including Lockheed, an Army medical evacuation unit, and Bay Area Reserve and National Guard units, some of which are relocating from other closing Bay Area bases. NASA itself will only use 10 percent to 20 percent of the property, and its operations are expected to make up only about 30 percent of the airfield’s use. Golf course: The golf course is part of the property being transferred to NASA, which is having the Air Force operate it through its Morale, Welfare, and Recreation program. As with other federal agency uses of Moffett facilities, the Air Force contributes proportionally to NASA for overall operations and maintenance costs. Implementation status: The active duty Navy mission ceased, and the base was transferred to NASA on July 1, 1994. 
As of November 1994, a NASA official reported that NASA had received commitments for about 80 percent of the available buildings and 50 percent of the airfield use. NASA is marketing Moffett property only to federal agencies and contractors because of the BRAC decision that it be kept as a federal facility. As more bases close, NASA hopes to attract more military and military-related units. However, DOD has recommended to the 1995 BRAC Commission that the Air National Guard unit at Moffett be moved to McClellan Air Force Base and that the Onizuka Air Force Station be downsized. Furthermore, NASA faces major budget cuts in coming years and is questioning whether it can handle the operational costs of Moffett Field under the current arrangements. Civilian jobs lost due to closure: 633. Civilian jobs created as of 3/31/95: 194. National Priorities List site: Yes. Contaminants: Volatile and semivolatile organic compounds, petroleum products, heavy metals, polychlorinated biphenyls, battery acid, polynuclear aromatic hydrocarbons, benzene, toluene, ethylbenzene, and xylene. Estimated cleanup cost: $52.9 million. According to the agreement between the Navy and NASA, the Navy did not have to certify that the property was clean before the transfer took place. However, the agreement calls for the Navy to remain responsible for the cleanup, which may extend to the year 2010. Estimated date cleanup complete or remedy in place: 2010. Base description: This base is located on 3,744 acres by the Atlantic coast, 100 miles north of Charleston, in an area with many resort beaches and golf courses. Beginning in 1939, the site was used as a municipal airport. In 1941, the War Department acquired the airfield from the city of Myrtle Beach. It was used for training throughout World War II and was then deactivated, and the runways and tower were given to the city. The Air Force reacquired the airfield from the city in 1955. Most recently, it was home to a tactical fighter mission. 
Date of closure recommendation: 1991. Date of military mission termination: September 1992. Date of base closure: March 1993. Summary of reuse plan: The plan calls for a 1,247-acre airport public benefit transfer. It further designates 1,555 acres to be included in a land exchange with the state of South Carolina, as authorized by Public Law 102-484, section 2832. In return, the Air Force will receive 12,521 acres of forested land near Shaw Air Force Base for a bombing range, a portion of which the Air Force had been leasing. Also under the plan, the 224-acre golf course will be a public benefit transfer to the city for a municipal golf course, and a 12-acre site is designated as an educational public benefit transfer for a fire training center. The Air Force plans to sell the chapel and credit union properties, totaling about 4 acres. The disposition of the remaining 702 acres, including 800 housing units, is undetermined, but could include mixed-use redevelopment and airport expansion. Accordingly, the redevelopment authority and the Air Force are discussing possible negotiated sale or economic development conveyance. A developer has offered $11.1 million for the housing. Several housing units have been requested for homeless assistance, which DOD indicated is consistent with the planned residential use of the facilities. Golf course: The Air Force planned to dispose of the golf course through a negotiated sale to the state. However, the city requested the course as a public benefit transfer for use as a municipal course. This request was subsequently endorsed by the Department of the Interior and approved by the Air Force. A private developer had offered $3.5 million for the course. Implementation status: A conflict between the city and the county over the need for and expansion of the airport caused delays in property disposition decisions. State legislation created a central authority to handle the dispute and make reuse decisions. 
Of the property it received in the exchange, the state has sold 69 acres to an electronics firm and is in the process of selling much of the rest for private development of a tourist resort complex. However, environmental cleanup clearances are needed before the deal is finalized. The Air Force will sell the 1.78-acre credit union site to the credit union for about $76,500, and a tentative agreement has been reached to sell a 2-acre chapel site for $280,000. One homeless assistance request remains under consideration by the Air Force. Civilian jobs lost due to closure: 799. Civilian jobs created as of 3/31/95: 588. The Economic Development Administration grants consisted of $1 million to the Grand Strand water and sewage authority and $2.5 million to the city of Myrtle Beach to construct water and sewage facilities. The Federal Aviation Administration grants were awarded for planning, a noise abatement study, airport construction projects, and equipment, such as rescue and fire-fighting equipment. In addition, before the 1991 base closure decision, the Federal Aviation Administration provided $13.1 million in grants to help develop civilian airport facilities. National Priorities List site: No. Contaminants: Spent solvents, fuel, waste oil, volatile organic compounds, heavy metals, asbestos, and paints and thinners. Estimated cleanup cost: $27 million. Estimated date cleanup complete or remedy in place: March 1997. Base description: Norton is located on 2,115 acres adjacent to the city of San Bernardino, 60 miles east of Los Angeles. The base was activated in 1942, and its primary mission included maintenance of aircraft and aircraft engines. In 1966, its mission changed to maintaining airlift capability. Date of closure recommendation: 1988. Date of military mission termination: June 1993. Date of base closure: March 1994. Summary of reuse plan: The plan calls for 78 acres of housing to be retained by the Air Force for personnel at nearby March Air Force Base.
When the March base, which was recommended for realignment in 1993, declares this property excess, it will be disposed of. Under the plan, DOD will retain 34 acres for a Defense Finance and Accounting Service center and transfer 33 acres, including a headquarters building and aircraft space, to the Forest Service for its fire-fighting operations. Furthermore, public benefit transfers will include 1,267 acres for an airport, 24 acres for parks and recreation, and 10 acres for educational purposes to local colleges. Other public benefit transfers will include the 4-acre chapel and youth center sites, which will go to a homeless assistance provider, 24 acres for roads and road widening, and the base’s water and sewer system. The remaining 641 acres will be an economic development conveyance, under the terms of an agreement that guarantees $52 million in revenue to DOD after 15 years. Under this agreement, DOD will receive 40 percent of the gross revenues from leases and 100 percent of the proceeds from any property sales. After 15 years, the authority is to pay DOD any remaining balance. The San Manuel Indians have expressed interest in purchasing a parcel of land for light manufacturing use, and they are also pursuing a request through the Bureau of Indian Affairs for a building to be used as a clinic. Golf course: The local redevelopment authority submitted a $6-million bid for the golf course as part of the $52-million economic development package, which was accepted by the Air Force. The authority leased the course for $190,000 annually prior to the sale. Implementation status: Reuse was delayed by a homeless assistance request for a major portion of the base that subsequently fell through. Initially, the disposition of the utility systems was also disputed, but the dispute was resolved.
A final agreement on the economic development conveyance was signed in March 1995; the agreement obligates the authority to pay the Air Force $52 million within 15 years for the 641 acres, including the golf course and the utility systems other than sewer and water, which will be conveyed for public health purposes. The authority is already negotiating seven subleases, under which the tenants will receive free rent for 6 to 12 months in return for renovating the old buildings. Until the environmental cleanup is complete, most property is being disposed of under leases instead of deed transfers. According to base closing officials, processing leases and deed transfers has been time-consuming. Public benefit transfers have been delayed because the sponsoring federal agencies are reluctant to transfer property where cleanup has not been completed. The Air Force is preparing long-term leases in lieu of assignment to sponsoring agencies. Civilian jobs lost due to closure: 2,133. Civilian jobs created as of 3/31/95: 25. The Economic Development Administration funds were awarded to the city of San Bernardino to improve the roads and water system at Norton. The Federal Aviation Administration grants were awarded to the local authority for $118,638 to develop an airport master plan and for $2.1 million for airport construction and improvements. National Priorities List site: Yes. Contaminants: Waste oils and fuel, spent solvents, paints, refrigerants, heavy metals, and volatile organic compounds. Estimated cleanup cost: $117.4 million. Estimated date cleanup complete or remedy in place: December 2000. Base description: Pease is located on 4,257 acres at Portsmouth in southeastern New Hampshire. It started operations in 1956 as a Strategic Air Command base; its mission was to maintain a force capable of long-range bombardment and air-to-air refueling operations. Date of closure recommendation: 1988. Date of military mission termination: September 1990. 
Date of base closure: March 1991. Summary of reuse plan: The Air Force retained 230 acres for the Air National Guard and transferred 1,095 acres to the Fish and Wildlife Service for a wildlife refuge. Local authorities requested a 2,305-acre airport public benefit transfer and a 600-acre economic development conveyance, which would include revenue-generating property to support airport operations. The New Hampshire state transportation agency will receive a 27-acre conveyance for highway widening. Golf course: The local authority requested that the golf course be included as part of the economic development conveyance, but they are reevaluating their request. Meanwhile, the golf course is being leased to the local authority for $100,000 annually. Implementation status: A portion of the base, including the airfield, is under lease to the local authority, and 41 tenants have created more than 1,000 jobs thus far. A commercial airport and an aircraft maintenance complex are in operation. Other tenants include the U.S. Department of State’s passport and visa processing center and a biotechnology firm. The state has made a large financial commitment to the fledgling airport, including $16 million a year in operating loans and over $100 million in bonding guarantees for business development. The Air Force remains the caretaker of about 1,050 acres that have not been leased. Although it has been 3 years since property disposition decisions were made, no deeds have been transferred. According to base officials, considerable time and effort have been spent on preparing environmental studies and reports and seeking cleanup approvals, but no end is in sight. On August 29, 1994, in a suit brought by the Conservation Law Foundation and the town of Newington, the U.S. 
District Court ruled that the Air Force violated section 120(h) of the Comprehensive Environmental Response, Compensation and Liability Act by transferring property under a long-term lease without an approved remedial design. However, the lease was not invalidated. This ruling has affected DOD’s leasing practices at other closing bases. The court also ordered the Air Force to prepare a supplemental environmental impact statement, which will be complete in July 1995. Civilian jobs lost due to closure: 400. Civilian jobs created as of 3/31/95: 1,038. To assist with industrial development, the Economic Development Administration awarded grants amounting to $8,475,000 to the Pease Development Authority to renovate or demolish buildings and to widen the main roadway entrance to the base to facilitate public access. In addition, the Pease community and the Portsmouth Naval Shipyard community are expected to share the benefits of a $3,450,000 Economic Development Administration grant to the New Hampshire state port authority for the construction of a barge facility in the area. Federal Aviation Administration grants were awarded for planning, preparing a noise compatibility study, installing equipment, and improving the airport. The largest of the grants was $3.8 million to rehabilitate a runway. In addition to the grants shown above, the Department of Transportation provided $400,000 for a surface transportation study and the Environmental Protection Agency provided $120,000 for a watershed restoration study. National Priorities List site: Yes. Contaminants: Volatile organic compounds, organic solvents, spent fuels, waste oils, petroleum/oils/lubricants, pesticides, paints, and elevated metals. Estimated cleanup cost: $140 million. Estimated date cleanup complete or remedy in place: November 1997. Base description: These naval facilities are located on 1,502 waterfront acres, 4 miles south of Philadelphia’s central business district. 
The 348-acre shipyard includes piers and water acres that contain a mothballed fleet. The BRAC Commission determined that the shipyard should be closed and preserved so that it would be available if needed in the future. The 1,105-acre naval station is adjacent to the shipyard. The property was deeded to the Navy by the city in 1868. The 49-acre hospital property is located about 1 mile from the base. The main hospital building was completed in 1935. Date of closure recommendation: Hospital—1988, Naval Station and Shipyard—1991. Estimated date of military mission termination: September 1995. Estimated date of base closure: Naval Station—January 1996 and Shipyard—September 1996. Summary of reuse plan: Under the current plan, the Navy will retain 550 acres, including the shipyard. The plan calls for the National Park Service to receive 1 acre and for most of the hospital property to be public benefit transfers—30 acres for park land and 6 acres for a nursing home. The remaining 13 acres of hospital property are to be sold for residential development. Reuse plans for 902 acres containing most of the naval station property have not been determined. The emphasis of the reuse plan is on economic development and job creation. The reuse authority hopes to encourage businesses, both large and small, to use existing buildings, and there is one large open site, the former airfield, that is suitable for large site development. Golf course: None. Implementation status: Local authorities’ initial challenge of the closure decision delayed the start of reuse planning for the closing facilities. In early 1994, the U.S. Supreme Court ruled against the challenge. The local reuse committee has completed a conceptual reuse plan, which seeks to attract private business and redevelop the area through economic development transfers and long-term leases.
According to the base closure officer, although base cleanup will take 5 more years, most base property could be leased and no environmental issues should prevent reuse from occurring. In November 1994, the Navy and the city executed a master lease that permits the city to sublease the preserved shipyard facilities, thus allowing for job creation at the facility. Civilian jobs lost due to closure: 8,119. Civilian jobs created as of 3/31/95: Base not yet closed. The Office of Economic Adjustment has provided about $2 million in planning grants. In April 1995, the Office also awarded a $50-million grant to establish a revolving loan fund to invest in projects that would accelerate the conversion of the naval station and shipyard to civilian use. Economic Development Administration grants awarded to the city of Philadelphia included $1.6 million to establish a revolving loan fund to assist in the conversion of defense dependent industries and $1.1 million for a feasibility study on the potential commercial reuse of shipyard and hospital buildings and specialized equipment to determine the best use and whether there are market matches. The study also was to determine the feasibility of extensive asbestos removal from the hospital building. In addition, the Navy is expending $16 million in military construction funds to improve utility systems on the retained portion of the base. Furthermore, the 1995 Defense Appropriations Act directed the Navy to spend $14.2 million for similar utility improvements on the portion of the base that is being disposed of. National Priorities List site: No. Contaminants: Heavy metals, polychlorinated biphenyls, petroleum/oil/lubricants, solvents, and volatile organic compounds. Estimated cleanup cost: $120 million. Estimated date cleanup complete or remedy in place: 1999. Base description: The Presidio is located on 1,480 acres in San Francisco fronting the ocean and San Francisco Bay. 
It has been a military garrison for 220 years, occupied by Spain, Mexico, and the United States, and was designated a national historic landmark in 1962. The property includes the Letterman Army Medical Center and the Army Institute of Research, as well as a former Public Health Service hospital. Legislation enacted in 1972 to create the Golden Gate National Recreation Area included a provision mandating the transfer of the Presidio to the National Park Service if DOD determined the base was in excess of its needs. Date of closure recommendation: 1988. Date of military mission termination: September 1994 (Sixth Army Headquarters—September 1995). Estimated date of base closure: September 1995. Summary of reuse plan: The Army is transferring the entire 1,480-acre base to the National Park Service to become part of the Golden Gate National Recreation Area. The plan calls for the creation of a nonprofit corporation called the Presidio Trust to manage the conversion of the base into a park and to be responsible for the renovation and leasing of facilities. Golf course: The golf course will be transferred to the Park Service by October 1995. The Park Service is seeking a concessionaire to operate the course, and it plans to use revenues from the course, which could exceed $1 million annually, to help support park operations. Implementation status: After months of discussions and considerable controversy, the Army and the Park Service agreed on the transfer terms, and the property was transferred to the Park Service on October 1, 1994. The Army retained an irrevocable special use permit for a portion of the base to be used by Sixth Army headquarters. However, in December 1994, the Army announced that it would cease operations at the Presidio by October 1995, at which time the Park Service will have sole responsibility for the costly maintenance of the Presidio. Since Congress did not authorize the Presidio Trust in 1994, the Park Service is handling conversion efforts.
The Park Service had hoped to lease the Letterman complex to the University of California Medical Complex, but the university announced in December 1994 that it would not lease the facility. Civilian jobs lost due to closure: 3,150. Civilian jobs created as of 3/31/95: 725. In addition, before turning the property over to the Park Service, the Army spent $69 million to upgrade various features of the base’s infrastructure, including its sewer systems, water treatment facilities, electrical systems, and roofs. However, these repairs do not address bringing the base’s buildings up to local codes. National Priorities List site: No. Contaminants: Petroleum hydrocarbons, heavy metals, solvents, and pesticides. Estimated cleanup cost: $104.6 million. Estimated date cleanup complete or remedy in place: July 1996. Base description: The 151-acre base is located on Lake Washington in Seattle. In 1922, the Navy established a 366-acre air station at the site. In 1973, the Navy surplused 215 acres, including the airfield; that property became home to the National Oceanic and Atmospheric Administration and the city’s Magnuson Park. The remaining property has served as a Navy administrative facility and includes a small research facility for the Fish and Wildlife Service. Date of closure recommendation: 1988—partial closure; 1991—full closure. Estimated date of military mission termination: September 1995. Estimated date of base closure: September 1995. Summary of reuse plan: The Navy plans to transfer 10 acres to the National Oceanic and Atmospheric Administration, which it will use to expand operations at its adjacent facility. The Fish and Wildlife Service is to receive 4 acres, which is the site of an on-base laboratory it currently operates. Seattle’s reuse plan calls for the remainder of the base to be public benefit transfers of 18 acres for homeless assistance, 82 acres for parks and recreation, 21 acres for educational activities, and 16 acres for roadways.
Under this plan, homeless providers would receive 18 acres, including 3 family housing units and 197 single housing units. Under the provisions of the Base Closure Community Redevelopment and Homeless Assistance Act of 1994, the city is interested in incorporating the homeless housing with the development of mixed housing on that property. The Bureau of Indian Affairs requested the majority of the base (85 acres) on behalf of the Muckleshoot Indian tribe, which seeks to use the property for educational and economic development activities. The Muckleshoots have indicated a willingness to reduce the size of their request if the city is willing to negotiate. Golf course: None. Implementation status: The city of Seattle opposes the Muckleshoot plan, saying it is incompatible with the community’s reuse plan. The city also opposes the tribe’s gaining sovereignty over base property, which would remove it from local zoning and land use regulations. The Department of Interior has asked DOD to give the Bureau of Indian Affairs’ request priority under federal rules for disposing of excess property. As long as 2 years ago, DOD asked the parties to work on a joint reuse plan. DOD’s property disposition decision is pending because of the issue, which is delaying reuse progress at the base. Base closure and community officials doubt that the stalemate at the local level will be broken without a DOD policy decision more clearly defining Native American status in the base closure screening process and the concept of sovereignty as it applies to base closure sites not located on a reservation. Civilian jobs lost due to closure: 754. Civilian jobs created as of 3/31/95: Base not yet closed. National Priorities List site: No. Contaminants: Petroleum products and metals. Estimated cleanup cost: $5.2 million. Estimated date cleanup complete or remedy in place: January 1995. Base description: The station is located on 428 acres on the southern edge of Kansas City. 
The city conveyed the property to the Air Force to establish the base in 1953. Until 1970, the Air Defense Command had the primary mission on the base. In 1979, the Air Force phased down the base, and in 1980, the Air Force Reserve assumed operational control. In 1985, the Air Force transferred ownership of much of the airfield to the city, but the city was unable to develop a successful commercial airport, and the Air Force Reserve has remained the biggest user. Date of closure recommendation: 1991. Date of military mission termination: July 1994. Date of base closure: September 1994. Summary of reuse plan: DOD plans to retain 238 acres—184 acres for the Army Reserves and 54 acres for the Marine Corps. Most of the remaining property, about 178 acres, will be a public benefit transfer to the city to expand the airport. The city of Belton plans to purchase the remaining 12 acres at fair market value. Golf course: None. The golf course was disposed of when the Air Force property was transferred in 1985. Implementation status: The Air Force has turned responsibility for control tower operations and navigational maintenance over to the city. In addition, annual Air Force payments of $265,000 to partly cover airfield operations ceased as of October 1994. A final decision on property disposition was signed in April 1995. Civilian jobs lost due to closure: 569. Civilian jobs created as of 3/31/95: 0. The Federal Aviation Administration grants awarded to the Kansas City aviation department included $228,000 for an airport master plan, $744,000 for facilities and equipment, and $600,000 for grading and drainage. In addition, prior to the 1991 closure decision, the department received $955,800 in Federal Aviation Administration funds in 1990 for new runway approach lights. National Priorities List site: No. Contaminants: Petroleum/oil/lubricants, aqueous film-forming foam, polynuclear aromatic hydrocarbons, and solvents. Estimated cleanup cost: $5 million. 
Estimated date cleanup complete or remedy in place: September 1998. Base description: The base is located on 2,015 acres about 12 miles southeast of downtown Columbus. Construction of the base began in January 1942, and it was activated as a training center for Army Air Corps glider pilots. The base was deactivated in 1949 and reactivated in 1951 as a Strategic Air Command base supporting the Korean War build-up. The Air Force base closed in 1978, and the airfield was leased long-term to the community in 1984. However, most of the support for airport operations has continued to come from the Air National Guard. Air Guard base property to be disposed of under the current closure will include conveyance of property included in the long-term lease, as well as other runways and taxiways. Date of closure recommendation: 1991. Date of military mission termination: No active duty missions. Date of base closure: September 1994. Summary of reuse plan: The Air Force will retain 203 acres for use by the Air National Guard, and it will transfer 164 acres to the Army for use by the Army National Guard and Reserves. The remaining 1,648 acres will be an airport public benefit transfer to the port authority. Golf course: None. The golf course was disposed of when the Air Force base was closed in 1979. Implementation status: Final property screening of acreage and buildings under the McKinney Homeless Assistance Act was completed, and no formal homeless requests have been received. The public comment period on the environmental impact statement has concluded, and the statement was issued in February 1995. A final decision on property disposition was signed in May 1995. The port authority has been having difficulty attracting sufficient tenants to support airport operations. It currently receives an annual $3 million subsidy from the county. However, a local official reported that the port authority recently has had much greater success in attracting businesses. 
Civilian jobs lost due to closure: 1,129. Civilian jobs created as of 3/31/95: 8 (these jobs are related to caretaker operations). The Federal Aviation Administration grants were awarded for planning and airport improvements. In addition, prior to the 1991 closure decision, the Federal Aviation Administration had provided grants totaling $13.5 million to help develop civilian airport facilities. National Priorities List site: No. Contaminants: Pesticides, paint, spent fuel, waste oil, solvents, and heavy metals. Estimated cleanup cost: $41.7 million. Estimated date cleanup complete or remedy in place: June 1997. Base description: The depot is located on 487 acres in an industrial area, 7 miles southeast of downtown Sacramento. The depot first occupied its present site in 1945. Date of closure recommendation: 1991. Date of military mission termination: March 1994. Date of base closure: March 1995. Summary of reuse plan: DOD plans to retain 80 acres for use by the Army Reserve and the Navy. The Department of Health and Human Services has approved requests by homeless assistance providers for 28 acres of property, including warehouse and cold storage space for food distribution to homeless groups. The city opted for an alternative to another approved request from a homeless provider for two buildings on either side of the main administration building. Adopting the view that the operation of a homeless facility in the location would likely disrupt the economic development plan, the city instead agreed to fund the acquisition of facilities elsewhere for the homeless provider. According to a city official, the increased property tax revenue from economic development at the depot is expected to more than offset the cost of the relocation. California State University Sacramento is receiving about 8 acres for a manufacturing technology center. The remaining 371 acres have been transferred to the city of Sacramento through an economic development conveyance. 
Under the terms of the conveyance, the city will pay the Army $7.2 million for the property after 10 years. Golf course: None. Implementation status: Army officials consider the depot to be a model of successful fast-track efforts to clean up contaminants, convert facilities to civilian use, and create jobs at a closing base. Central to this success was the city’s ability to convince Packard Bell to locate its computer manufacturing operations at the depot. Key factors contributing to the company’s decision were the state’s approval of an enterprise zone, which enabled the company to qualify for tax breaks, and the city’s offer to finance renovation costs at the base. The city is financing $17 million in renovation costs to be covered by lease payments from Packard Bell. The city is allowing Packard Bell to sublease some of the property it has received and use the proceeds to help with renovation costs. Packard Bell has an option to buy the 269 acres it is leasing from the city for $8.9 million. Local officials expect the Packard Bell move to Sacramento will create 2,500 to 3,000 direct manufacturing jobs and up to 2,500 additional jobs for suppliers in the area. The total more than offsets the jobs lost due to depot closure. Civilian jobs lost due to closure: 3,164. Civilian jobs created as of 3/31/95: 630. National Priorities List site: Yes, the base is expected to be removed from the list in June 1996. Contaminants: Waste oil and grease, solvents, metal plating wastes, and wastewaters containing caustics, cyanide, and metals. Estimated cleanup cost: $62.4 million. Considerable progress has been made in base cleanup so that the property being transferred to Packard Bell was suitable for transfer. Some additional base cleanup activities have been slowed by a contract award bid protest. Estimated date cleanup complete or remedy in place: June 1996. Base description: The station is located on 1,620 acres in the Orange County town of Tustin south of Los Angeles. 
It was first commissioned in 1942 and was used to support observation blimps and personnel conducting antisubmarine patrols off the coast during World War II. It was decommissioned in 1949 but reactivated in 1951 and used solely for helicopter operations. DOD’s estimate of revenues from the disposal of property at the station is higher than for any other 1988 or 1991 base closure. Date of closure recommendation: 1991. Estimated date of military mission termination: June 1997. The 1993 BRAC Commission redirected the planned relocation of Tustin military missions, which resulted in a delay in terminating these missions at Tustin. Estimated date of base closure: July 1997. Summary of reuse plan: DOD plans to retain 10 acres for the Army Reserves. The city has agreed to include in its reuse plan about 38 acres for homeless assistance programs, including family and single housing units and a facility to be used for a children’s shelter. The plan calls for 219 acres to be educational public benefit transfers for public schools and an educational coalition involving the community college. In addition, public benefit transfers for parks and recreation will total 103 acres. The current reuse plans call for 1,142 acres of the base property to be an economic development conveyance with terms to be negotiated. The remaining 108 acres are undetermined. Disputes have arisen about additional federal requests: 12 acres by the Army Reserves, 25 acres by the Air National Guard, and 55 acres of housing, consisting of 274 family housing units, by the Coast Guard. These requests for property with high market value are opposed by the community, Marine Corps headquarters, or both. Other acreage requested by two Indian groups and a local homeless services coalition also conflicts with local reuse plans. Golf course: None. Implementation status: The local authority has completed its reuse plan, and preparation of the environmental impact statement based on the plan is underway.
DOD granted a request from the authority to delay the federal screening decision. The authority is concerned that if too much of the property is given to federal and homeless assistance agencies, the local tax base will be diminished and will be insufficient to support the many infrastructure improvements that are needed to develop the base, such as construction of new roads. One local official estimated that these infrastructure improvements will cost about $200 million, which will reduce the estimated revenue from developing base property. The authority has agreed that homeless assistance requests will be incorporated into the community plan in accordance with the Base Closure Community Redevelopment and Homeless Assistance Act of 1994. Homeless requesters want more property than has been agreed to by the authority. Determination of how much property will go to homeless requesters awaits a final decision on how much property will be transferred to federal entities. Resolution of the Indian requests for property at Tustin is on hold pending clarification at the federal level of where such requests should fit in the property screening process. Civilian jobs lost due to closure: 348. Civilian jobs created as of 3/31/95: Base not yet closed. National Priorities List site: No. Contaminants: Dichloroethane, naphthalene, pentachlorophenol, petroleum hydrocarbons, trichloroethylene, benzene, toluene, ethylbenzene, and xylene. Estimated cleanup cost: $86.2 million. Estimated date cleanup complete or remedy in place: November 1999. Base description: The center is located on 840 acres in Warminster, a populated suburban area about 20 miles north of the Philadelphia city center. The facility includes an airport, as well as office and research space. The Navy acquired the facilities in 1944 from Brewster Aeronautical Corporation, which manufactured aircraft during World War II. 
The facility has served as the principal naval research, development, and evaluation center for aircraft, airborne antisubmarine warfare, and aircraft systems other than aircraft-launched weapon systems. Date of closure recommendation: 1991. Estimated date of military mission termination: July 1996. Estimated date of base closure: September 1996. Summary of reuse plan: The Navy planned to retain 100 acres, including its dynamic flight simulator (centrifuge) and its inertial navigation laboratory, leaving 740 acres for reuse. The community has decided that it does not want to reuse the center as an airport. Instead, the community hopes to attract research facilities to the site. Discussions are also underway with a consortium of eight universities for a satellite campus, and the school district is interested in obtaining property for a new junior high school. County homeless assistance providers may also be interested in obtaining some center property. The community finalized its reuse plan in February 1995, which emphasizes public benefit and economic development transfers. Parks and recreation will account for approximately 296 acres, economic development conveyance 296 acres, education 67 acres, and the homeless 7 acres. The reuse authority has not developed a plan for how the remaining 74 acres will be disposed of, but has earmarked 44 acres for residential and 30 acres for municipal use. DOD has recommended to the 1995 BRAC Commission that the 100 acres the Navy was retaining also be closed. According to a base official, if the Commission approves this recommendation, this property will likely be added to the economic development conveyance. Golf course: None. Implementation status: The closure process is on schedule, and environmental remediation measures are expected to be in place by the time the base closes in 1996. Civilian jobs lost due to closure: 1,979. Civilian jobs created as of 3/31/95: Base not yet closed. 
The Federal Lands Reuse Authority of Bucks County, Pennsylvania, plans to establish a 35,000-square foot business incubator program in hangar and office space. According to a base official, the Economic Development Administration has promised a future grant of over $2 million to assist this program. National Priorities List site: Yes. Contaminants: Firing range wastes, fuels, heavy metals, industrial wastewater sludges, nonindustrial solid wastes, paints, polychlorinated biphenyls, sewage treatment sludge, solvents, unspecified chemicals, and volatile organic compounds. Estimated cleanup cost: $11.1 million. Estimated date cleanup complete or remedy in place: September 1996. Base description: Williams is located on 4,043 acres in Mesa, which is in the Phoenix metropolitan area. It was activated in 1941 as a flight training school, and pilot training was the base’s primary mission throughout its history. Date of closure recommendation: 1991. Date of military mission termination: January 1993. Date of base closure: September 1993. Summary of reuse plan: The reuse plan calls for the base to be converted into a civilian airport and for a consortium of educational and job training programs involving Arizona State University and Maricopa Community College. The local authority is to receive a 2,547-acre airport public benefit transfer. The colleges are to receive 657 acres through an educational public benefit transfer. This transfer would include the housing for the campus and the hospital, which would be operated jointly by Arizona State University and the Veterans Administration. The housing units will be leased until the university students occupy them. Two homeless providers will receive 42 acres, including 88 housing units and a chapel. The Army Reserve will receive 11 acres and the National Weather Service 1 acre. The Air Force will convey 16 acres as a public benefit transfer for public health purposes. 
The Air Force currently plans to sell the remaining 769 acres. The Gila River Indian Community is to receive the 158-acre golf course and an additional 144 acres through a negotiated sale. The remaining 467 acres, including property the local authority wanted to support the airport, are slated for negotiated sale. However, Public Law 102-484 authorized the Air Force to do a land exchange with the state of Arizona, whereby some of this property at Williams would be given to the state in exchange for about 85,000 acres of rangeland that the Air Force leases from the state. The Air Force has not acted on this prerogative, and the local authority does not favor property at Williams being conveyed to the state. Golf course: The golf course will be a negotiated sale to the Gila River Indian Community. Implementation status: Negotiations are ongoing between the local airport authority, the education consortium, the homeless coalition, the Gila Indians, the Federal Aviation Administration, and the Air Force over property disposition issues and details. The airport authority and the Gila Indians are negotiating over possible Gila partnership in the airport authority. The education and job training programs are underway, with enrollment of over 600 students expected for the fall of 1995. Civilian jobs lost due to closure: 781. Civilian jobs created as of 3/31/95: 368. The Economic Development Administration grant was awarded to the city of Mesa to fund the educational consortium plan, a land use and economic development plan, and a transportation plan. The Federal Aviation Administration grants included $125,000 for developing an airport master plan and $2,893,000 for facilities and equipment. National Priorities List site: Yes. Contaminants: Volatile organic compounds, waste solvents, fuels, petroleum/oil/lubricants, and heavy metals. Estimated cleanup cost: $42.7 million. Estimated date cleanup complete or remedy in place: December 1997.
Base description: The facility is located on 580 acres, 25 miles south of Washington, D.C. It is bounded on the west by the Marumsco National Wildlife Refuge and consists of some laboratory buildings and a wetlands area. The Army acquired the property in 1951 for use as a military radio station. The facility became inactive in 1969. In 1971, it became a satellite installation of the Harry Diamond Army Research Laboratory at Adelphi, Maryland. Date of closure recommendation: 1991. Date of military mission termination: September 1994. Date of base closure: September 1994. Summary of reuse plan: The Army plans to transfer the entire 580 acres at no cost to the Department of the Interior to be incorporated into the Fish and Wildlife Service’s Mason Neck Wildlife Refuge. An earlier community plan had called for the developed portion of the facility to be conveyed to the community for a regional employment center and environmental education, but August 1994 legislation gave the entire property to Interior. Golf course: None. Implementation status: No date has been established for transferring the facility to the Department of the Interior. At present, the facility remains under Army stewardship and continues to be maintained in a caretaker status. The Army is continuing the environmental restoration program at the base, and it will remain responsible for remediation activities until completion. According to a base official, Interior is reluctant to sign for ownership of the property because it lacks operations and maintenance funds to care for the property and upgrade or demolish buildings, particularly until the lease expires on space occupied by local Fish and Wildlife Service staff in the nearby community. Furthermore, Interior is reluctant to assume ownership until the cleanup is complete due to concern that DOD’s environmental restoration budget will be cut, leaving insufficient DOD funds to complete the cleanup. 
The operator of a homeless assistance seed distribution program has a no-cost temporary lease from the Army for a small warehouse operation at the base. It must make arrangements with Interior if it wants to continue this activity after the transfer occurs. Civilian jobs lost due to closure: 90. Civilian jobs created as of 3/31/95: Not available; the property is being retained by a federal agency. National Priorities List site: No. Contaminants: Polychlorinated biphenyls, petroleum products. Estimated cleanup cost: $4.1 million. Other potential contaminants include ethylene glycol from a previous research and development activity and possible heavy metals in soils from past sewage sludge injection activities. Site investigation and sampling activities are continuing to confirm or rule out potential remediation sites. Estimated date cleanup complete or remedy in place: April 1997. Base description: Wurtsmith is located in northeast Michigan on the coast of Lake Huron in the township of Oscoda. It is located on 2,205 acres of Air Force property and 2,995 acres of land leased from the state, the Forest Service, and the local power company. The base was initially established in 1924 and used as an Army Air Service gunnery range. It was closed in 1945, then reactivated in 1947. In 1958, the base was expanded to host a Strategic Air Command unit. Date of closure recommendation: 1991. Date of military mission termination: December 1992. Date of base closure: June 1993. Summary of reuse plan: The plan calls for 2 acres to be transferred to the Fish and Wildlife Service. Public benefit transfers will include 1,700 acres for a civilian airport, 15 acres for parks, 10 acres for an educational consortium, and 5 acres for a health facility. Two homeless assistance providers are requesting about 7 acres of property, including 9 family housing units and a 72-bed dormitory.
The local authorities are planning to request the remaining 466 acres, including housing units, utilities, and property available for commercial development. Since Wurtsmith is a qualifying rural area, it may be a no-cost economic development conveyance. The Chippewa Indian tribe has expressed interest in buildings for a casino, as well as some base housing, but it had not made a formal request at the time of our review. Golf course: None. Implementation status: As of December 1994, the airfield facilities were being operated on a 30-year, long-term lease. Under the lease agreement, local authorities gave up the right to restoration, which otherwise would have required the Air Force to remove unwanted buildings and a runway from land originally leased from the state. The Air Force will continue to handle caretaker costs for the rest of the base. The local authority is subleasing some of the facilities to an aircraft remanufacturer, which has created over 200 jobs. A final decision on disposition of the remaining property cannot be reached until decisions are made on requests from the homeless assistance providers and the Indian tribe. Civilian jobs lost due to closure: 705. Civilian jobs created as of 3/31/95: 553. The Economic Development Administration granted Iosco County $7,717,500 for infrastructure improvements and other assistance, including funds to connect the base to municipal water and wastewater systems and to improve and expand the capacity of those systems to handle the increased load. The grant also included $375,000 for marketing and promotion and $750,000 for technical assistance to survey and subdivide the property and map public streets and the utility lines. The Economic Development Administration granted the county an additional $2 million to establish a revolving loan fund for financing the expansion of existing businesses and for attracting new businesses to the area. 
The Federal Aviation Administration grants were for airport facilities, equipment, and planning. National Priorities List site: No. Contaminants: Waste fuel and oil, spent solvents, and volatile organic compounds. Estimated cleanup cost: $70 million. Cleanup of groundwater contamination under the housing area will take some time, but base officials hope to have remediation measures in place by 1999. Estimated date cleanup complete or remedy in place: 1999.

[Table: civilian job recovery (percent) at Davisville Naval Construction Battalion Center, Long Beach Naval Station/Naval Hospital, Myrtle Beach Air Force Base, Philadelphia Naval Station/Naval Hospital/Naval Shipyard, Puget Sound Naval Station (Sand Point), and Tustin Marine Corps Air Station; the percentage figures were not recovered from the source.]

GAO has issued the following reports related to military base closures and realignments:

Military Base Closures: Analysis of DOD’s Process and Recommendations for 1995 (GAO/T-NSIAD-95-132, Apr. 17, 1995).
Military Bases: Analysis of DOD’s 1995 Process and Recommendations for Closure and Realignment (GAO/NSIAD-95-133, Apr. 14, 1995).
Military Bases: Challenges in Identifying and Implementing Closure Recommendations (GAO/T-NSIAD-95-107, Feb. 23, 1995).
Military Bases: Environmental Impact at Closing Installations (GAO/NSIAD-95-70, Feb. 23, 1995).
Military Bases: Reuse Plans for Selected Bases Closed in 1988 and 1991 (GAO/NSIAD-95-3, Nov. 1, 1994).
Military Bases: Letters and Requests Received on Proposed Closures and Realignments (GAO/NSIAD-93-173S, May 25, 1993).
Military Bases: Army’s Planned Consolidation of Research, Development, Test and Evaluation (GAO/NSIAD-93-150, Apr. 29, 1993).
Military Bases: Analysis of DOD’s Recommendations and Selection Process for Closure and Realignments (GAO/T-NSIAD-93-11, Apr. 19, 1993).
Military Bases: Analysis of DOD’s Recommendations and Selection Process for Closures and Realignments (GAO/NSIAD-93-173, Apr. 15, 1993).
Military Bases: Revised Cost and Savings Estimates for 1988 and 1991 Closures and Realignments (GAO/NSIAD-93-161, Mar. 31, 1993).
Military Bases: Transfer of Pease Air Force Base Slowed by Environmental Concerns (GAO/NSIAD-93-111FS, Feb. 3, 1993).
Military Bases: Navy’s Planned Consolidation of RDT&E Activities (GAO/NSIAD-92-316, Aug. 20, 1992).
Military Bases: Observations on the Analyses Supporting Proposed Closures and Realignments (GAO/NSIAD-91-224, May 15, 1991).
Military Bases: An Analysis of the Commission’s Realignment and Closure Recommendations (GAO/NSIAD-90-42, Nov. 29, 1989).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015, or Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on reuse planning and implementation at the 37 bases closed in the first two base realignment and closure (BRAC) rounds, focusing on: (1) planned disposal and reuse of the properties; (2) successful property conversions; (3) problems that delay reuse planning and implementation; and (4) assistance provided to communities. GAO found that: (1) under current plans, over half the land will be retained by the federal government because it: (a) is contaminated with unexploded ordnance; (b) has been retained by decisions made by the base realignment and closure commissions or by legislation; and (c) is needed by federal agencies; (2) most of the remaining land will be requested by local reuse authorities under various public benefit transfer authorities or the new economic development conveyance authority; (3) little land will be available for negotiated sale to state and local jurisdictions or for sale to the general public; (4) reuse efforts by numerous communities are yielding successful results; (5) hundreds of jobs are being created at some bases that more than offset the loss in civilian jobs from closures, new educational institutions are being established in former military facilities, and wildlife habitats are being created that meet wildlife preservation goals while reducing the Department of Defense's (DOD) environmental cleanup costs; (6) some communities are experiencing delays in reuse planning and implementation; (7) causes of delays include failure within the local communities to agree on reuse issues, development of reuse plans with unrealistic expectations, and environmental cleanup requirements; (8) the federal government has made available over $350 million in direct financial assistance to communities; (9) DOD's Office of Economic Adjustment has provided reuse planning grants, the Department of Labor has provided job training grants, and the Federal Aviation Administration has awarded
airport planning and implementation grants; and (10) grants from the Department of Commerce's Economic Development Administration are assisting communities in rebuilding or upgrading base facilities and utilities and are helping communities set up revolving loan funds that can be used to attract businesses to closed bases.
In October 1990, the Federal Accounting Standards Advisory Board (FASAB) was established by the Secretary of the Treasury, the Director of the Office of Management and Budget (OMB), and the Comptroller General of the United States to consider and recommend accounting standards to address the financial and budgetary information needs of the Congress, executive agencies, and other users of federal financial information. Using a due process and consensus building approach, the nine-member Board, which has since its formation included a member of DOD, recommends accounting standards for the federal government. Once FASAB recommends accounting standards, the Secretary of the Treasury, the Director of OMB, and the Comptroller General decide whether to adopt the recommended standards. If they are adopted, the standards are published as Statements of Federal Financial Accounting Standards (SFFAS) by OMB and GAO. In addition, the Federal Financial Management Improvement Act of 1996 requires federal agencies to implement and maintain financial management systems that will permit the preparation of financial statements that substantially comply with applicable federal accounting standards. Also, the Federal Managers’ Financial Integrity Act of 1982 requires agency heads to evaluate and report annually whether their financial management systems conform to federal accounting standards. Issued on November 30, 1995, and effective for the fiscal years beginning after September 30, 1997, SFFAS No. 6, Accounting for Property, Plant, and Equipment, requires the disclosure of deferred maintenance in agencies’ financial statements. SFFAS No. 
6 defines deferred maintenance as “maintenance that was not performed when it should have been or was scheduled to be and which, therefore, is put off or delayed for a future period.” It includes preventive maintenance and normal repairs, but excludes modifications or upgrades that are intended to expand the capacity of an asset. The deferred maintenance standard applies to all property, plant, and equipment, including mission assets—which will be disclosed on the supplementary stewardship report. For the Department of Defense (DOD), mission assets, such as submarines, ships, aircraft, and combat vehicles, are a major category of property, plant, and equipment. In fiscal year 1996, DOD reported over $590 billion in this asset category, of which over $297 billion belonged to the Navy, including 338 active battle force ships such as aircraft carriers, submarines, surface combatants, amphibious ships, combat logistics ships, and support/mine warfare ships. The Navy spent a little over $2 billion on ship depot maintenance for its active fleet in fiscal year 1996. SFFAS No. 6 recognizes that there are many variables in estimating deferred maintenance amounts. For example, the standard acknowledges that determining the condition of the asset is a management function because different conditions might be considered acceptable by different entities or for different items of property, plant, and equipment held by the same entity. Amounts disclosed for deferred maintenance may be measured using condition assessment surveys or life-cycle cost forecasts. Therefore, SFFAS No. 6 provides flexibility for agencies’ management to (1) determine the level of service and condition of the asset that are acceptable, (2) disclose deferred maintenance by major classes of assets, and (3) establish methods to estimate and disclose any material amounts of deferred maintenance. SFFAS No. 
6 also has an optional disclosure for distinguishing between critical and noncritical amounts of maintenance needed to return each major class of asset to its acceptable operating condition. If management elects to disclose critical and noncritical amounts, the disclosure must include management’s definition of these categories. The objective of our work was to identify information on specific issues to be considered in developing implementing guidance for disclosing deferred maintenance on ships. We reviewed financial and operational regulations and documentation related to managing and reporting on the ship maintenance process. The documentation we reviewed included fleet spreadsheets used to track depot-level maintenance requirements and execution by specific ship. We also reviewed Navy Comptroller budget documents. We discussed this information with officials at DOD and Navy headquarters and at various organizational levels within the Department of the Navy. While the deferred maintenance standard applies to all levels of maintenance, this report addresses ship depot-level maintenance because it is the most complicated and expensive. (See the following section for a discussion of the Navy ship maintenance process, including the levels of maintenance.) The amounts for deferred depot level maintenance presented in this report were developed using information provided by Navy managers. We did not independently verify the accuracy and completeness of the data. We conducted our review from July 1996 through November 1997 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Secretary of Defense or his designee. The Under Secretary of Defense (Comptroller) provided us with written comments, which are discussed in the “Agency Comments” section and are reprinted in appendix I. 
The Navy accomplishes maintenance on its ships (including submarines) at three levels: organizational, intermediate, and depot. Organizational-level maintenance includes all maintenance actions that can be accomplished by a ship’s crew. For example, the ship’s crew may replace or fix a cracked gasket or leaks around a hatch or doorway aboard ship. Intermediate-level maintenance is accomplished by Navy Intermediate Maintenance Activities (IMAs) for work that is beyond the capability or capacity of a ship’s crew. For example, an IMA performs calibration or testing of selected ship systems for which the ship’s crew may not have the equipment or capability to perform. Depot-level maintenance includes all maintenance actions that require skills or facilities beyond those of the organizational and intermediate levels. As such, depot-level maintenance is performed by shipyards with extensive shop facilities, specialized equipment, and highly skilled personnel to accomplish major repairs, overhauls, and modifications. The Navy determines what depot-level maintenance is needed for its ships through a requirements process that builds from broad maintenance concepts outlined in Navy policy and culminates with the execution of an approved schedule. Three types of maintenance requirements are executed: (1) time-directed requirements, (2) condition-based requirements, and (3) modernization requirements. Time-directed requirements are derived from technical directives and include those that are periodic in nature and are based on elapsed time or recurrent operations. Condition-based requirements are based on the documented physical condition of the ship as found by the ship’s crew or an independent inspection team. Lastly, modernization requirements include ship alterations, field changes, and service changes that either add new capability or improve reliability and maintainability of existing systems through design improvements or replacements. 
Initial depot-level maintenance requirements are determined and a proposed maintenance schedule is developed and approved based on overall ship maintenance policy, specific maintenance tasks, operational requirements, force structure needs, and fielding schedules. These approved maintenance schedules undergo numerous changes as new requirements are identified, others are completed or canceled, operational priorities change, and budgets fluctuate. Thus, these factors result in many deviations from the plan once actual maintenance is executed and complicate the measurement of exactly what maintenance should be considered deferred. Less flexibility in scheduling is permissible with submarines than surface ships because prescribed maintenance must be done on submarines periodically for them to be certified to dive. If the specified maintenance is not done by the time required, the submarine is not to be operated until the maintenance is accomplished. Neither DOD nor the Navy has developed implementing guidance for determining and disclosing deferred maintenance on financial statements. Navy officials said that they are reluctant to develop their procedures until DOD issues its guidance. As we reported to DOD in our September 30, 1997, letter, DOD guidance is important to ensure consistency among the military services and to facilitate the preparation of DOD-wide financial statements. We also stated that the guidance needs to be available as close to the beginning of fiscal year 1998 as possible so that the military services have time to develop implementing procedures and accumulate the necessary data to ensure consistent DOD-wide implementation for fiscal year 1998. We found that operations and comptroller officials from both DOD and the Navy have varying opinions concerning the nature of unperformed maintenance that should be reported as “deferred.” The differences in opinions arise from various interpretations of how to apply the standard to the maintenance process. 
The views on how to apply the deferred maintenance standard to the ship maintenance process ranged from including only unfunded ship overhauls to estimating the cost of repairing all problems identified in each ship’s maintenance log. Brief descriptions of various views of how SFFAS No. 6 could be applied to disclosing deferred depot-level maintenance for ships follow. The descriptions explain what would be considered deferred maintenance for ships and the rationale for each option. In its budget justification documents, the Navy reports deferred depot-level maintenance for unfunded ship overhauls. The Navy Comptroller officials’ rationale for excluding other types of depot-level maintenance not done is that overhauls represent the Navy’s top priority for accomplishing ship depot-level maintenance and, therefore, should be highlighted for the Congress when a lack of funds prevents them from occurring when needed. While overhauls consumed most of the depot-level maintenance funding in past years, the Navy is performing fewer overhauls as it moves toward a more incremental approach of doing smaller amounts of depot-level work more frequently. Consequently, overhauls now represent a relatively small part of the Navy’s ship depot-level maintenance budget. In fiscal year 1996, over 80 percent of the Navy’s ship depot-level maintenance budget was spent on work other than ship overhauls. Specifically, the Navy reported spending almost $1.7 billion for other ship depot-level maintenance and $367.8 million for ship overhauls. The Navy officials’ rationale for disclosing only unfunded overhauls as deferred depot-level maintenance in financial statements is that the data are readily available and are consistent with what is being reported in budget justification documents. However, this view omits all other types of scheduled depot-level maintenance not done and clearly does not meet the intent of SFFAS No. 6. 
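The "over 80 percent" figure above follows directly from the two spending amounts quoted; a quick arithmetic check (treating the report's "almost $1.7 billion" as $1,700 million for illustration) is:

```python
# Fiscal year 1996 Navy ship depot-level maintenance spending, in millions of
# dollars, from the report. "Almost $1.7 billion" is approximated as 1,700
# for this illustrative check.
other_depot_maintenance = 1_700.0   # non-overhaul depot-level work
ship_overhauls = 367.8              # ship overhauls

total = other_depot_maintenance + ship_overhauls
non_overhaul_share = 100 * other_depot_maintenance / total

# Consistent with the report's "over 80 percent" statement (about 82 percent).
print(f"Non-overhaul share of depot-level spending: {non_overhaul_share:.1f}%")
```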
FASAB addressed the deferred maintenance issue because of widespread concern over the deteriorating condition of government-owned equipment. FASAB reported that the consequences of underfunding maintenance (increased safety hazards, poor service to the public, higher costs in the future, and inefficient operations) are often not immediately reported and that the cost of the deferred maintenance is important to users of financial statements and key decisionmakers. Using this option, the amount disclosed for fiscal year 1996 (the most recent fiscal year data available) would have been $0. Both Atlantic and Pacific fleet officials monitor deferred ship depot-level maintenance and report these backlog amounts to the Navy Comptroller although these amounts are not reported in the Navy’s budget justification documents. These fleet backlog reports quantify the ship depot-level maintenance work that should have been performed by the end of the fiscal year according to the Chief of Naval Operations (CNO) but was not done and was not rescheduled. The rationale for using the amounts on the fleet backlog reports for financial statement reporting is that the data are readily available, and it is a more realistic representation of deferred maintenance than just the unfunded ship overhauls. Using this option, the amount disclosed in the Navy’s financial statements for fiscal year 1996 would have been about $117.5 million. However, the fleet backlog reports do not include any depot-level work rescheduled to future years. Under one approach, the estimated value of work rescheduled beyond the ship’s approved maintenance schedule time frames, as established by the CNO, would also be disclosed. 
The rationale for adding the estimated value of work rescheduled beyond these time frames is that the CNO Notice provides the Navy’s established requirements for accomplishing ship depot-level maintenance; therefore, any work rescheduled beyond the specified time frames should be considered deferred. For example, maintenance work on two Pacific Fleet destroyers was rescheduled beyond the CNO-specified time frames of June and July 1996, respectively, to October 1996. On the other hand, maintenance on two Atlantic Fleet submarines was rescheduled from the end of one fiscal year to early the next fiscal year but still within CNO-specified time frames. Under this option, the estimated value of the maintenance work rescheduled to the next fiscal year on the destroyers would be recognized as deferred maintenance at the end of the fiscal year. However, the value of the rescheduled work on the submarines would not be recognized because it was still to be performed within the CNO-specified time frames. Under this option, using Navy data, the amount disclosed for fiscal year 1996 would have been about $15.1 million greater or $132.6 million. Another option discussed with Navy officials would be to modify the fleet backlog reports to include the estimated value of any scheduled maintenance work not accomplished during the fiscal year, regardless of the CNO-specified time frames. Under this approach, the estimated value of work on the two submarines discussed above would also be recognized as deferred maintenance. The rationale for this option is that any scheduled work moved to the next fiscal year should be disclosed as deferred maintenance at the end of the fiscal year when the scheduled maintenance was to be performed. Under this option, using Navy data, the amount disclosed for fiscal year 1996 would have been about $188.5 million. 
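The fiscal year 1996 amounts under the options discussed above fit together arithmetically; the sketch below tabulates them using the figures quoted in the text (the option labels are descriptive shorthand for this illustration, not official Navy or DOD terminology):

```python
# Illustrative tabulation of the fiscal year 1996 deferred-maintenance amounts
# (millions of dollars) under the options discussed in the report.
unfunded_overhauls = 0.0        # unfunded ship overhauls only
fleet_backlog = 117.5           # fleet backlog reports (work not done, not rescheduled)
beyond_cno_windows = 15.1       # work rescheduled beyond CNO-specified time frames
all_rescheduled_total = 188.5   # all scheduled work not accomplished in the year

options = {
    "Unfunded overhauls only": unfunded_overhauls,
    "Fleet backlog reports": fleet_backlog,
    "Backlog + rescheduled beyond CNO time frames": fleet_backlog + beyond_cno_windows,
    "All scheduled work not accomplished": all_rescheduled_total,
}

for name, amount in options.items():
    print(f"{name}: ${amount:.1f} million")
```

The third entry reproduces the report's $132.6 million figure as the sum of the $117.5 million backlog and the $15.1 million of work rescheduled beyond the CNO-specified time frames.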
Another view discussed with Navy officials for disclosing deferred ship maintenance is to report the costs to perform the needed work on all items listed on each ship’s maintenance log at the end of the fiscal year. The rationale for using this source is that the log may more completely capture all levels of maintenance needed on each ship. Depending on the size and condition of the ship, the maintenance log could contain only a few items or many thousands. However, the Navy does not routinely determine the cost of items that appear on a ship’s maintenance log. Further, although these logs are supposed to be up-to-date and routinely checked for accuracy and completeness, Navy fleet officials stated that estimating the cost to repair the items on each ship’s log would be very time-consuming and costly because maintenance tasks that are accomplished are not routinely deleted from the log, and the time estimates contained in the logs may be inaccurate. Nevertheless, officials said that using the estimated value of all items listed on each ship’s maintenance log would exceed any of the above estimates due to the sheer volume of items included. As discussed in our earlier report, implementing guidance is needed so that all military services consistently apply the deferred maintenance standard. As a result of the variations in the way the deferred maintenance standard can be applied to ships (including submarines), DOD and the Navy need to consider a number of issues, including the following. Acceptable asset condition - SFFAS No. 6 allows agencies to decide what “acceptable condition” means and what maintenance needs to be done to keep assets in that condition. 
Determining acceptable operating condition could be in terms of whether (1) the ship can perform all or only part of its mission, (2) the most important components of the ship function as intended, (3) the ship meets specified readiness indicators, or (4) the ship and/or its major components meet some other relevant criteria determined by management. The determination may also be influenced by whether the ship is currently deployed or scheduled to be deployed in the near future. An example of the acceptable operating condition issue is as follows. Each ship is composed of many systems, and those systems critical to the ship’s ability to meet its operational commitments and achieve high readiness scores (such as the weapons systems) rarely have maintenance deferred. On the other hand, maintenance on the ship’s distributive systems (such as the ship’s pipes and hulls) is more likely to be deferred since this has little direct impact on the ship’s readiness indicators. Therefore, the question is whether needed maintenance not performed on the distributive systems should be disclosed as deferred maintenance since it has little impact on the ship’s readiness scores but could affect the ship’s long-term viability. Timing of deferred maintenance recognition - Each ship class has standard operating intervals between visits to the depot; however, changes to this plan may take place as the scheduled maintenance approaches (except for certain maintenance requirements for the submarines and aircraft carriers which have mandated maintenance intervals to meet safety requirements) due to operational considerations, funds available, and condition-based inspections. To ensure that meaningful, consistent data are provided, DOD and the military services will need to decide which one of the many possible alternatives will be used to determine when maintenance needed but not performed is considered deferred. 
The timing issue involves what needed maintenance should be recognized as deferred as of the end of the fiscal year—the date specified in the CNO Notice, the date the maintenance needs were identified, or the date the maintenance was scheduled. Applicability of the reporting requirements - DOD and the military services will need to determine whether deferred maintenance should be reported for assets that are not needed for current requirements. For example, should maintenance deferred on ships being considered for decommissioning or not scheduled for deployment for a significant period be recognized on DOD’s and the Navy’s financial statements? Reporting the maintenance not done as deferred would more accurately reflect how much it would cost to have all reported assets in an acceptable operating condition; however, it would also be reporting maintenance which is not really needed at this time and which may never be needed or done. Critical and noncritical deferred maintenance - If critical versus noncritical deferred maintenance is to be disclosed, such a disclosure must be consistent among the services, and critical must be defined. For example, different kinds of maintenance needed—from preventive to urgent for continued operation—may be used to differentiate between critical and noncritical. Also, if DOD chooses to disclose deferred maintenance for all reported assets, including maintenance on assets not needed for current requirements, identifying the types of assets included in the deferred maintenance disclosure may be another way to differentiate between critical and noncritical. Although our work focused on the depot level, the deferred maintenance standard applies to all maintenance that should have been done, regardless of where the maintenance should have taken place. Therefore, in addressing the issues in this report and others regarding deferred maintenance, all levels of maintenance must be considered. 
In comments on a draft of this report (see appendix I), the Department of Defense agreed that it must consider the key issues identified in this report as it implements deferred maintenance reporting requirements. We are sending copies of this letter to the Chairmen and Ranking Minority Members of the Senate Committee on Appropriations, the House Committee on Appropriations, the Senate Committee on Armed Services, the House Committee on National Security, the Senate Committee on Governmental Affairs, and the House Committee on Government Reform and Oversight. We are also sending copies to the Director of the Office of Management and Budget, the Secretary of Defense, the Assistant Secretaries for Financial Management of the Air Force and Army, and the Acting Director of the Defense Finance and Accounting Service. Copies will be made available to others upon request. Please contact me at (202) 512-9095 if you or your staffs have any questions concerning this letter. Cleggett Funkhouser, Merle Courtney, Chris Rice, Rebecca Beale, and John Wren were major contributors to this report.
GAO reviewed the Department of Defense's (DOD) implementation of the requirement to disclose information related to deferred maintenance on mission assets, focusing on Navy ships, including submarines. GAO noted that: (1) the development of DOD and Navy policy and implementing guidance for deferred maintenance is essential to ensure consistent reporting among the military services and to facilitate the preparation of accurate DOD-wide financial statements, particularly since the new accounting standard provides extensive management flexibility in implementing the disclosure requirement; (2) Navy officials stated that they were reluctant to develop procedures to implement the required accounting standard until DOD issues overall policy guidance; (3) DOD and Navy officials have expressed numerous views as to how to apply the deferred maintenance standard to ships; (4) this makes it even more important for clear guidance to be developed; (5) the opinions ranged from including only unfunded ship overhauls to including cost estimates of repairing all problems identified in each ship's maintenance log; (6) in formulating the DOD and Navy guidance, key issues need to be resolved to allow for meaningful and consistent reporting within the Navy and from year to year including: (a) what maintenance is required to keep the ships in an acceptable operating condition; and (b) when to recognize as deferred needed maintenance which has not been done on a ship; and (7) in addition, DOD needs to address in its implementing guidance whether the: (a) deferred maintenance standard should be applied to all or only certain groups of assets, such as ships being deactivated in the near future; and (b) reported deferred maintenance should differentiate between critical and noncritical and, if so, what constitutes critical maintenance.
Federal employees, by law, are entitled to receive fair and equitable treatment in employment without regard to their sex, among other things. In addition, any federal employee who has the authority to take, recommend, or approve any personnel action is prohibited from discriminating for or against any employees or applicants for employment on the basis of their sex. These rights are set forth in title VII of the Civil Rights Act of 1964, as amended, and the Civil Service Reform Act of 1978. In 1980, the Equal Employment Opportunity Commission (EEOC) issued regulations recognizing sexual harassment as an unlawful employment practice. Subsequent case law clarified that unlawful sexual harassment exists when unwelcome sexual advances, requests for sexual favors, or other verbal or physical conduct of a sexual nature are committed as a condition of employment or basis for employment action (“quid pro quo”), or when this conduct creates a hostile work environment. A key word is “unwelcome,” because unlawful sexual harassment may exist when the target perceives that he or she is being harassed, whether or not the perpetrator intended to create a hostile environment. EEOC has the authority to enforce federal sector antidiscrimination laws, issuing rules and regulations as it deems necessary to carry out its responsibilities. It issued revised guidelines for processing EEO complaints, including sexual harassment, that became effective in October 1992. NIH is one of several Public Health Service agencies within HHS and is the principal biomedical research agency of the federal government. It supports biomedical and behavioral research domestically and abroad, conducts research in its own laboratories and clinics, trains researchers, and promotes the acquisition and distribution of medical knowledge. NIH is made up of 26 ICDs, each of which has its own director and management staff.
Its 13,000 employees are primarily located in the Bethesda, Maryland, area. Our objective was to obtain information on the extent and nature of sexual harassment and sex discrimination at NIH, to provide a systematic overview of an issue that had received media attention based on individual allegations. To accomplish this, we reviewed sexual harassment and sex discrimination complaints filed by NIH employees and conducted a projectable survey of NIH employees. We also interviewed agency officials at NIH, the Public Health Service, and HHS involved with handling such situations in order to familiarize ourselves with EEO-related activities. We obtained statistics on formal sexual harassment and sex discrimination complaints that were filed between October 1, 1990, and May 31, 1994, and reviewed those complaints filed during this period and subsequently closed. We also reviewed 20 complaints that were handled as part of NIH’s expedited sexual harassment process between September 1, 1992, and May 31, 1994. Under this accelerated procedure, officials from the involved ICD were required to immediately advise OEO officials about any sexual harassment allegations that came to their attention. OEO was then required to complete its inquiry within 2 weeks. NIH’s EEO complaint process is outlined in greater detail in appendix I. We did not compare the number and type of complaints filed by NIH employees with those filed by employees at other governmental institutions. To obtain an agencywide perspective on the sexual harassment and sex discrimination environment at NIH, we sent questionnaires to a stratified random sample of 4,110 persons who were NIH employees as of the end of fiscal year 1993. We asked these employees for their insights, opinions, and observations (anonymously) about sexual harassment and sex discrimination at NIH as well as their opinions about NIH’s EEO system. 
The results of our survey, which can be projected to the universe from which it was selected, are shown in their entirety in appendix II. The overall usable response rate was 64.3 percent. The percentages presented in this report are based on the number of NIH employees who responded to the particular question being discussed. Because the survey results come from a sample of NIH employees, all results are subject to sampling errors. For example, the estimate that 32 percent of the employees have experienced sexual harassment is surrounded by a 95 percent confidence interval from 30 to 34 percent. All of the survey results in this report have 95 percent confidence intervals of less than ±5 percent unless otherwise noted. All reported comparisons of female and male responses are statistically significant unless otherwise noted. It should be noted that our questionnaire methodology, which is described in greater detail in appendix III, did not include comparing NIH with other governmental institutions. We also contacted agency officials at NIH, the Public Health Service, and HHS to obtain estimated costs associated with processing sexual harassment and sex discrimination complaints. Information regarding the limited data that were available is covered in appendix IV. Our work was done at NIH’s Bethesda, Maryland, location from May 1993 to May 1995, in accordance with generally accepted government auditing standards. We requested comments from the Secretary, HHS; the Assistant Secretary for Health, HHS; and the Director, NIH on a draft of this report. Their consolidated comments are discussed on p. 16 and presented in appendix V. Approximately 32 percent of NIH employees reported that they were the recipients of some type of uninvited, unwanted sexual attention in the past year, and employees filed 32 informal complaints and 20 formal complaints with NIH’s OEO between October 1990 and May 1994. These complaints were filed primarily by female employees.
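The confidence interval cited above (a 32 percent estimate bounded by roughly 30 and 34 percent at the 95 percent level) follows the standard formula for a sample proportion. Below is a minimal sketch that treats the sample as a simple random one and assumes an effective sample size of about 2,600 usable responses (64.3 percent of the 4,110 surveyed); the report's exact stratified weights are not given, so the function and figures here are illustrative only:

```python
import math

def proportion_ci(p, n, z=1.96):
    """Approximate 95% confidence interval for a sample proportion."""
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# Illustrative: p = 0.32 (share reporting harassment), n of about 2,600 usable responses
lo, hi = proportion_ci(0.32, 2600)
print(f"{lo:.3f} to {hi:.3f}")  # about 0.302 to 0.338, i.e. roughly 30 to 34 percent
```

The report's actual interval would differ slightly because the survey was stratified and weighted, which changes the effective sample size.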
Closed formal complaints we reviewed overwhelmingly identified immediate supervisors and/or management officials as the alleged harassers. However, employees in general did not consider these groups to be the only sources of sexual harassment at NIH. Coworkers and contractors were also identified as alleged harassers. Actions reportedly taken most often by sexually harassed employees to deal with their situations included ignoring the situation or doing nothing, avoiding the harasser, asking/telling the harasser to stop the offensive behavior, discussing the situation with a coworker and/or asking the coworker to help, or making a joke of the situation. Over 96 percent of NIH employees who said they were sexually harassed reported that they decided not to file complaints or take some other personnel action. Some of the more prevalent reasons employees gave for choosing not to file EEO complaints, grievances, or adverse action appeals were that (1) they did not consider the incident to be serious enough, (2) they wanted to deal with it themselves, and/or (3) they decided to ignore the incident. Also, some of the employees who chose not to file complaints believed the situation would not be kept confidential, the harasser would not be punished, filing a complaint would not be worth the time or cost, and/or that they would be retaliated against. Although it remains small as a proportion of the workforce, the number of EEO complaints filed by NIH employees alleging sexual harassment has increased in recent years. Of the 20 formal complaints filed between October 1, 1990, and May 31, 1994, none were filed in fiscal year 1991; 4 and 7 were filed in fiscal years 1992 and 1993, respectively; and 9 were filed during the first 8 months of fiscal year 1994. 
Although 53 percent of employees reported they thought NIH did a somewhat good to very good job taking action against employees who engage in sexual harassment, about 27 percent of employees reported they thought NIH did a somewhat poor to very poor job. (See app. II, p. 31.) Our review of sexual harassment complaint files and statistics showed that no determinations or findings of sexual harassment had been made on formal EEO complaints filed by NIH employees that were closed between October 1991 and May 1994. It should be noted, however, that actions could be and have been taken against alleged harassers without a formal admission that harassment actually occurred. For the most part, employees reported they believed NIH was doing a good job of informing them about the nature of sexual harassment, the policies and procedures prohibiting it, and the penalties for those who engage in sexual harassment. NIH also got good reviews from its employees for encouraging them to contact ICD EEO officers and/or OEO regarding any sexual harassment concerns. Only 5.5 percent of employees viewed sexual harassment to be more of a problem at NIH than it was a year earlier, and 34.5 percent of the employees did not perceive sexual harassment to be a problem at all at NIH. However, many employees perceived NIH as doing a poor job of counseling victims of sexual harassment (20.8 percent), preventing reprisal/retaliation for reporting sexual harassment (22.2 percent), and taking action against those who harassed others (26.9 percent). With regard to their respective ICDs, 2.3 percent of the employees believed the problem had become more serious while 52.2 percent of employees did not consider sexual harassment to be a problem at their ICDs. (See table 1.) Two-thirds of the employees—67.1 percent—believed enough was being done by NIH to eliminate sexual harassment. 
This sentiment was echoed by 72.3 percent of employees about their respective ICDs and 74.7 percent of employees about their immediate supervisors. (See app. II, p. 23.) Women reported being harassed more often than men (37.7 percent compared to 23.8 percent), and women employees at NIH perceived sexual harassment to be a more serious problem than did men (21.3 percent compared to 8.2 percent). Male and female employees who said they experienced sexual harassment indicated that most of the uninvited, unwanted sexual attention consisted of gossip regarding people’s sexual behavior; sexual jokes, remarks, and teasing; and negative sexual remarks about a group (e.g., women, men, homosexuals). For the most part, employees reported that it was instigated by coworkers, supervisors, and/or contractors who worked on the NIH campus. Very few employees said that the sexual harassment they experienced included receiving or being shown nude or sexy pictures (4.8 percent); being pressured for a date (4 percent); receiving requests or being pressured for sexual favors (1.5 percent); receiving letters, phone calls, or other material of a sexual nature (1.4 percent); and threatened, attempted, or actual rape or sexual assault (0.4 percent). The employees who made these claims also said these situations had not occurred repeatedly—once or twice during the last year. (See app. II, p. 25.) Thirteen percent of NIH employees indicated to us that they believed they had experienced sex discrimination over the last 2 years. Of the 13 percent, approximately half chose to take some type of action regarding their situation. Many of these employees said they came forward and discussed their experiences with an EEO official, their immediate supervisor, and/or some other non-EEO official. However, about 10 percent of employees who alleged discrimination reported that they took the next step and filed an EEO complaint, grievance, or adverse action appeal with the appropriate NIH office. 
Some of the more prevalent reasons why employees chose not to file actions were concerns that they would not be treated fairly, that filing a complaint would not be worth the time or cost, that they would be retaliated against, that the situation was not serious enough, and/or that the situation would not be kept confidential. Many employees also decided to ignore the situation or to try to deal with their situations themselves. Between October 1990 and May 1994, 209 informal and 111 formal sex discrimination complaints were filed by female and male employees at NIH. Formal complaints that were closed during this time period were filed for multiple reasons, the most common being nonselection for promotion, lack of promotion opportunity, and objection to job evaluation ratings. The alleged discriminators were people with authority over the complainants and could therefore alter the conditions under which the complainants worked. Within NIH, more than half of the women employees (58.4 percent) said they believed the current sex discrimination situation to be as much of a problem as it was 1 year earlier, and 37 percent of the men said the same. Although the percentages were small, a larger percentage of men (7.2 percent) than women (6.1 percent) considered the problem to be at least somewhat worse. Also, 30.6 percent of male employees did not perceive sex discrimination to be a problem at NIH, a belief echoed by only 17.6 percent of female employees. (See fig. 1.) Men and women were divided, even within their own gender groups, in their belief as to whether NIH was doing enough to eliminate sex discrimination in the workplace. While the majority of men believed NIH was doing enough (71 percent), a number of men disagreed (17 percent). Women’s views were also divided—about 48 percent of the women expressed the view that NIH was doing enough to eliminate sex discrimination, but 33 percent disagreed. 
Many NIH employees reported they believed women and men were not given comparable opportunities and rewards at their ICDs. Approximately one out of five employees (20.2 percent) did not believe that women and men at NIH were paid the same for similar work or that men and women were formally recognized for similar performance at the same rate (19.7 percent). Nearly one out of three employees (30.1 percent) reported they did not believe men and women were promoted at the same rate when they had similar qualifications. A number of employees also reported they observed that women and men at NIH did not have similar opportunities for visibility (15.5 percent) or similar success finding mentors (22.8 percent), nor did they get equally desirable assignments (19.0 percent). About 44 percent of the employees reported they believed family responsibilities kept women at NIH from being considered for advancement more than they did for men and about 50 percent expressed the view that an “old boy network” prevented women at NIH from advancing in their careers. For each of these topics, female employees responded more strongly than their male counterparts, and the differences in their responses are statistically significant at the 95 percent confidence level. About 35 percent of employees reported they thought NIH did a somewhat poor to very poor job taking action against employees who engaged in sex discrimination. Our review of sex discrimination complaint files and statistics showed that no determinations or findings of sex discrimination had been made on formal EEO complaints filed by NIH employees that were closed between October 1991 and May 1994. It should be noted, however, that actions could be and have been taken against alleged discriminators without a formal admission that discrimination actually occurred. 
Although the management of NIH is highly decentralized, with each ICD largely responsible for its own management, the controversies that emerged in 1991 and 1992 over sex discrimination, sexual harassment, and racial discrimination were directed at the NIH Director, who was expected to address them on an agencywide basis. Partly in response to these controversies, NIH management has, in recent years, taken actions aimed at improving the agency’s EEO climate. Beginning with the fiscal year 1993 rating period, EEO became a critical element on managerial performance ratings and can have an impact on overall ratings and determinations of pay increases. NIH management also issued policy statements to employees and managers expressing its commitment to a discrimination-free environment. Several employee task forces were also established at NIH, such as the Task Force on Intramural Women Scientists and the Task Force on Fair Employment Practices. These groups, respectively, addressed issues such as differences in pay and status between male and female scientists with comparable backgrounds and experiences and improvements for processing reprisal complaints (the latter has been incorporated into NIH EEO policy). NIH officials recently conceded that pay discrepancies exist between male and female scientists, and they are acting to bring female scientists’ salaries in line with those of their male peers within their respective ICDs. An EEO hotline was operational from June 1993 through April 1994 to permit employees to call in and informally report EEO situations they were uncomfortable about. ICD officials were responsible for preparing reports about these inquiries. NIH management’s actions to better its EEO climate appear to have been positive ones. 
However, in light of the history of controversy surrounding EEO issues at NIH and the public focus of those issues on the office of the NIH Director, our review suggested additional steps that could be taken to further improve the environment and to provide information to the NIH Director to assist him in ensuring that the EEO climate continues to improve and problems are addressed as they emerge. NIH and HHS have been unsuccessful at meeting time frame requirements for processing sexual harassment and sex discrimination complaints filed by NIH employees. Federal regulations generally require that an agency provide the complainant with a completed investigative report within 180 days of accepting a formal complaint. Of the 119 formal sexual harassment and sex discrimination complaints filed between October 1, 1990, and March 31, 1994, 63 were still open as of April 30, 1995. All of these cases had been open for more than 1 year. Of the 56 cases that were closed by the end of April 1995, only 19 were closed within 180 days of the date the complaint was filed. Twenty-five of them were open for more than 1 year before being closed. (See fig. 2.) Responses to our questionnaire indicated that although about 32 percent of NIH employees said they experienced sexual harassment and approximately 13 percent said they believed they were discriminated against because of their sex, substantially fewer employees reported to NIH that they had experienced such situations. The limited reliability of complaint data in assessing the overall climate of an agency, along with the independent nature of the ICDs, makes it difficult for NIH management to assess the sexual harassment and sex discrimination environment. Agencywide information on how employees view these issues would aid management in making such an assessment; however, such information currently is not being collected. 
Through EEO training, attempts were made by NIH to educate employees about what actions or behaviors constitute sexual harassment and sex discrimination, how to prevent such situations, and what recourse employees have to deal with them. Many of the issues surrounding sexual harassment involve dealing with people, such as being sensitive to others in the workplace, being able to confront someone tactfully, treating people fairly, and maintaining a professional atmosphere. Some employees may actually be unaware that their actions are perceived by others as sexual harassment. Some employees may not realize that the actions of others are in fact sexual harassment and/or sex discrimination and that they do not have to tolerate these actions. Within NIH, the ICDs have been delegated the authority to develop and provide their own EEO training programs relating to preventing sexual harassment and sex discrimination. OEO has not monitored the quality, consistency, or frequency of the training provided to individual employees, nor has it provided agencywide criteria regarding the content of the courses provided or which employees should be required to attend. We contacted 10 of NIH’s 26 ICDs about their EEO training efforts. These ICDs employed over 9,200 people, or about 71 percent of NIH’s full-time permanent staff, and varied in size from 150 to over 2,000 employees. All 10 ICDs offered some form of sexual harassment prevention training. Six ICDs required all of their employees to receive such training, three ICDs required this training only for managers and supervisors, and one ICD had no attendance requirements. Most of the ICDs chose either to conduct their own training sessions or to have OEO conduct the training. In a few cases, the training was developed and/or presented by contractors. Five of the ICDs offered sexual harassment prevention training as recently as fiscal year 1994. However, one ICD last offered training in fiscal year 1991. 
The training sessions generally ranged from 2 to 4 hours. None of the ICDs reported offering training that specifically dealt with preventing sex discrimination. Any such training was to have been included with other training. As with the sexual harassment prevention training, the EEO training varied in length, recency (from fiscal year 1991 to fiscal year 1994), source of design, and target audience. Three of the 10 ICDs we contacted required their managers and supervisors to attend. Even though OEO did not provide standardized, scheduled training for NIH employees or maintain any data on the training provided to them by their respective ICDs, many employees considered themselves to be well informed about sexual harassment and sex discrimination. Most employees reported they believed that NIH did a somewhat good to very good job informing them about current policies and procedures prohibiting sexual harassment (85.9 percent) and behaviors or actions that constitute sexual harassment (80.0 percent). Similarly, a majority of employees also reported they believed that NIH did a somewhat good to very good job informing them about the penalties for those who engage in sexual harassment (63.1 percent). A large majority of employees reported they believed that NIH did a somewhat good to very good job informing them about current policies and procedures prohibiting sex discrimination (72.7 percent) and behaviors or actions that constitute sex discrimination (67.3 percent). However, about one out of four employees (24.9 percent) stated that NIH did a somewhat poor to very poor job of informing them about the penalties for those who engage in sex discrimination. Overall, 65.2 percent of NIH employees reported they believed NIH did a somewhat good to very good job informing them about their rights and responsibilities under federal government EEO regulations. 
They were less positive in their beliefs about how well NIH informed them about the roles of EEO officials, counselors, and investigators (51.9 percent good, 26.7 percent poor) and about the various complaint channels open to them (53.6 percent good, 26.2 percent poor). Employees also believed NIH did a somewhat better job of helping managers/supervisors develop an awareness of and skills in handling EEO problems (63.0 percent good, 20.9 percent poor) than it did for employees (53.2 percent good, 25.2 percent poor). At NIH, we found no agencywide record maintenance or tracking of problem areas or trends for situations handled at the ICD level. NIH management empowered the ICDs with responsibility for resolving situations in the hopes that their early resolution would prevent barriers from being created that would hinder productivity and/or cause employees to remain in hostile work environments for unnecessarily long periods of time. Regarding alleged sex discrimination, employees had the option of contacting the EEO officer in their respective ICDs to try to resolve their situations before filing a complaint with OEO. We found that ICD officials were not required to notify OEO officials of any recurring problems, behavioral patterns, or trends they identified when dealing with employees’ concerns about sex discrimination, thus depriving OEO officials and NIH employees of an overview of NIH’s EEO environment. While most NIH employees do not perceive sexual harassment and sex discrimination to be serious problems at NIH, and the number of those who believe progress has been made outweighs those who do not, a significant minority of NIH employees are still clearly concerned about the continuing existence of sexual harassment and sex discrimination at their agency. 
In order for NIH efforts against sexual harassment and sex discrimination to be successful, employees need to trust that the processes established for dealing with their concerns about sexual harassment and sex discrimination will produce results in a timely manner. To date, NIH and HHS have not met time frames established by federal regulations in handling many of the formal complaints filed by NIH employees. Because of the number of independent organizations operating under the NIH structure and the absence of reliable indicators on the extent to which sexual harassment and sex discrimination are occurring, we believe that looking at the agency “as a whole” could enable NIH to better determine the overall state of its sexual harassment and sex discrimination situations. Such an overall assessment would also provide agencywide information for the NIH Director to permit him to identify the existence of emerging EEO problems and to resolve them more expeditiously. For example, periodically using an NIH employee attitude questionnaire, such as the one we developed, would assist NIH in identifying problems that have occurred or acknowledging any progress that has been made in dealing with such situations. NIH has attempted to deal with employee concerns about sexual harassment and sex discrimination by increasing awareness about workplace relationships and improving agencywide communication through training. However, we noted that NIH lacks minimum standards with regard to course content and has not communicated its expectations on which employees should receive such training and on how frequently it should be provided. Moreover, NIH has not monitored training to ensure that its expectations regarding such training are being fulfilled. We recommend that the Secretary of HHS and the Director of NIH take steps to decrease the time it takes to process and resolve sexual harassment and sex discrimination complaints at NIH. 
In addition, because the Director is responsible for ensuring an appropriate EEO climate throughout NIH despite the decentralized management structure and practices of the agency, we also recommend that he take further steps to provide guidance for and monitoring of the agency’s EEO program. In doing so, we recommend he consider such steps as periodically conducting an employee attitude survey, such as the one we developed, so that the existence of sexual harassment and sex discrimination trends and problems can be more easily identified and dealt with; and establishing minimum standards for sexual harassment and sex discrimination-related training offered to NIH employees as well as procedures for monitoring the implementation of the training to ensure that employees participate as intended. We requested comments from the Secretary, HHS; the Assistant Secretary for Health, HHS; and the Director, NIH on a draft of this report. The Department responded with consolidated comments, which are presented in appendix V. The Department concurred with each of our recommendations and indicated that steps are under way to implement them. We believe that the steps outlined in the Department’s letter, if successfully implemented, will achieve the objective of our recommendations. As agreed with you, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will provide copies to the Secretary, Department of Health and Human Services; the Director, National Institutes of Health; and the Chairman and Ranking Minority Member of the Subcommittee on Civil Service, House Committee on Government Reform and Oversight. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix VI. If you have any questions about the report, please call me on (202) 512-8676. Federal regulations (29 C.F.R. 
Part 1614) state that agencies should provide prompt, fair, and impartial processing of EEO complaints, including those related to sexual harassment and sex discrimination. The federal EEO complaint filing process consists of two phases, informal and formal. Figure I.1 details the process and the time frames stated in the regulations. Once an employee has exhausted all options available through this process, he/she can appeal to the EEOC and/or through the court system. An NIH employee who believes he/she has been sexually harassed or discriminated against because of his/her sex can seek advice or assistance from various sources before filing an informal complaint. A supervisor or other management official can initially become involved to assist in resolving the situation at an early stage, or the employee can go directly to the EEO officer at the ICD where he/she works. If the situation cannot be resolved, or if the employee chooses not to have ICD officials address the situation, an informal complaint can be filed with NIH’s OEO. An employee who believes he/she has been sexually harassed or discriminated against because of his/her sex has 45 days from the alleged event to file an informal complaint with the OEO. An OEO-appointed counselor is allotted 30 days to attempt to resolve the matter by contacting employees associated with the situation. If the situation is not resolved within 30 days from the start of counseling (and the involved parties have not agreed to an extension), the complainant is to be given a counselor’s inquiry report and notified of the right to file a formal complaint within 15 days with HHS’s Office of Human Relations. HHS has responsibility for deciding whether to accept a complaint, hiring investigators, determining whether sexual harassment or sex discrimination has occurred, and arranging settlements. An accepted formal complaint is investigated by an independent contractor. 
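The filing milestones just described (45 days from the alleged event to file an informal complaint, a 30-day counseling period, then 15 days to file a formal complaint) can be chained into latest-case deadlines. The sketch below assumes each period begins when the previous one ends and ignores extensions the parties may agree to; the function name and example date are ours, not NIH's:

```python
from datetime import date, timedelta

def eeo_deadlines(alleged_event: date) -> dict:
    """Latest-case filing deadlines under the 45/30/15-day periods of 29 C.F.R. Part 1614."""
    informal = alleged_event + timedelta(days=45)    # file informal complaint with OEO
    counseling_ends = informal + timedelta(days=30)  # counselor's 30-day inquiry
    formal = counseling_ends + timedelta(days=15)    # file formal complaint with HHS
    return {"informal": informal, "counseling_ends": counseling_ends, "formal": formal}

d = eeo_deadlines(date(1994, 1, 1))
print(d["formal"])  # 1994-04-01
```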
The agency has 180 days to complete the investigation and provide the complainant with a report. If the complainant is not satisfied with the results of the investigative report, he/she is given appeal rights and has 30 days (from receipt) to request a hearing from the EEOC or an agency decision from HHS. Congress has requested that the U.S. General Accounting Office (GAO), an independent agency of Congress, review the extent and type of sexual harassment and sex discrimination that may be happening at the National Institutes of Health (NIH). To do this, we are surveying a randomly selected sample of NIH employees. This questionnaire asks about your experiences at NIH and your opinions about NIH’s Equal Employment Opportunity (EEO) system, including the EEO complaint process. The responses of all NIH employees included in our sample are very important in order for us to accurately measure the occurrence of sexual harassment and sex discrimination at NIH. Because these are sensitive topics, the survey is anonymous. We cannot identify you from this questionnaire. If you have any questions, please call Ms. Jan Bogus at (202) 512-8557 or Ms. Annette Hartenstein at (202) 512-5724. With your help, we will be able to identify the problems that affect NIH employees and recommend solutions. The results will be presented in summary form. Any discussion of individual answers will not contain information that can identify you. Thank you for your help. To ensure your privacy, please return the postcard separately from the questionnaire. This will let us know that you completed your questionnaire. This section asks about sexual harassment. Sexual harassment involves uninvited, unwanted sexual advances, requests for sexual favors, and other comments, physical contacts, or gestures of a sexual nature. Such actions may negatively affect one’s career and may create an intimidating, hostile, or offensive environment. 1. 
As far as you are aware, is sexual harassment currently a problem at NIH and at your institute, center, or division? (Check one box in each row.)
a. At NIH as a whole (N=4,161)
b. At your institute, center, or division (N=1,477)
[The six-column response scale, (1) through (6), and the associated percentages are not reproduced here.]

Note 1: All “Ns” (number in the population) are estimates based on appropriately weighting the sample results.
Note 2: For questions in the matrix format, all percentages are based on those who chose a response other than “No basis to judge.”
Note 3: For questions in the matrix format, the “Ns” to the left of the first percentage represent the estimated size of the population who responded with a basis to judge. The “Ns” to the right of the last percentage represent the estimated size of the population who responded with “No basis to judge.”

The objective of our questionnaire survey was to obtain information on the extent and type of sexual harassment and sex discrimination that may be happening at the National Institutes of Health (NIH). Using mail questionnaires, we asked about the general climate at NIH regarding sexual harassment and sex discrimination and specifically about the occurrence of behaviors at NIH that respondents considered to be instances of sexual harassment and about the occurrence of situations at NIH that respondents considered to be instances of sex discrimination. For those who indicated that they believed sexual harassment was directed toward them, we inquired about what the respondent did to deal with the situation. We asked a set of similar questions to see how individuals dealt with sex discrimination when it affected them. We also asked for respondents’ views on NIH’s equal employment opportunity (EEO) system and asked some general questions about the respondents’ work setting and background. Due to the sensitive nature of the information we required, the questionnaire was anonymous. It did not contain any information that could identify an individual respondent.
A postcard containing an identification number was included in the package sent to NIH employees. The postcard was to be mailed back to GAO separately from the questionnaire. Receipt of the postcard allowed us to remove names from our mailing list. The questionnaire was first mailed in early January 1994. In late February, we sent out a follow-up mailing, which contained another questionnaire to those in our sample who did not respond to our first mailing. In mid-April, we sent a letter to those who still had not yet responded, urging them to take part in the survey. The questionnaire was designed by a social science survey specialist in conjunction with GAO evaluators who were knowledgeable about the subject matter. We pretested the questionnaire with 15 NIH employees from a number of occupational categories before mailing to help ensure that our questions were interpreted correctly and that the respondents were willing to provide the information required. After the questionnaires were received from survey respondents, they were edited and then sent to be keypunched. All data were double keyed and verified during data entry. The computer program used in the analysis also contained consistency checks. Our study population represents the approximately 13,000 white-collar employees at NIH and excludes staff fellows and contract employees. Since NIH is composed of 26 institutes, centers, and divisions (ICD), we wanted the results of our survey to provide specific estimates for the 5 largest ICDs and a general estimate for the remaining 21 ICDs. In addition, we wanted to look specifically at the experiences of male and female employees in the five largest ICDs and in the other ICDs as a whole. We asked NIH to provide us with a computer file containing the names and home addresses of all NIH employees. From this list, we deleted staff fellows and “blue collar” employees. 
We used standard statistical techniques to select a stratified random sample from this universe of names. The sample contained 4,110 employees of the universe of 13,473 employees. Table III.1 presents the universe and sample sizes for each stratum. Because this survey selected a portion of the universe for review, the results obtained are subject to some uncertainty or sampling error. The sampling error consists of two parts: confidence level and range. The confidence level indicates the degree of confidence that can be placed in the estimates derived from the sample. The range is the upper and lower limit between which the actual universe estimate may be found. For example, if all female employees of the Clinical Center had been surveyed, the chances are 19 out of 20 that the results obtained would not differ from our sample estimates by more than 5 percent. Not all NIH employees who were sent questionnaires returned them. Of the 4,110 NIH employees who were sent questionnaires, 2,642 returned usable ones to us, an overall usable response rate of 64.3 percent. Table III.2 summarizes the questionnaire returns for the 4,110 questionnaires mailed. The usable response rates for the individual stratum range from 49.5 to 77 percent. Table III.3 presents the response rates for each stratum. Given our overall response rate of 64.3 percent, we wanted to get some indication that the 35.7 percent of our sample that did not respond to our survey were generally similar in their experiences regarding sexual harassment and sex discrimination to those who did respond to the survey. To find this out, in June 1994 we conducted a small-scale, nonstatistical telephone survey of 41 NIH employees who were in our sample but did not respond to the questionnaire. We asked these individuals two questions that were included in the questionnaire. The first was the extent to which they believed sexual harassment was a problem at NIH as a whole and at their ICD. 
The second was a similar question regarding sex discrimination. Although these 41 employees perceived less sexual harassment and sex discrimination than did the 2,642 employees who responded earlier, the differences in their perceptions were not statistically significant. We decided not to modify the main survey results on the basis of the 41 telephone respondents’ views because the telephone respondents did not form a statistically representative sample and the observed differences were not statistically significant. The 2,642 usable returned questionnaires have been weighted to represent the study population of 13,473 white-collar employees at NIH (excluding staff fellows and contract employees). The weighted total population size for the sample was slightly different (13,460) due to rounding errors introduced in the sample weighting process. Because we sampled a portion of NIH employees, our survey results are estimates of all employees’ views and are subject to sampling error. For example, the estimate that 32 percent of the employees have experienced sexual harassment is surrounded by a 95 percent confidence interval of ±2 percent. This confidence interval thus indicates that there is about a 95-percent chance that the actual percentage falls between 30 and 34 percent. All of the survey results in this report have 95 percent confidence intervals of less than ±5 percent unless otherwise noted. In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, in the sources of information that are available to respondents, or in the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in the development of the questionnaire, the data collection, and the data analysis to minimize such nonsampling errors.
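As a rough illustration of the sampling-error arithmetic above, the sketch below computes a normal-approximation 95-percent confidence interval for the 32-percent estimate, treating the 2,642 usable responses as a simple random sample. This is a simplification: it ignores the stratified, weighted design the survey actually used, so it only approximates the reported interval.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion.

    Simplifying assumption: the sample is treated as a simple random
    sample, ignoring the survey's stratification and weighting.
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# 32 percent of the 2,642 usable responses reported sexual harassment.
low, high = proportion_ci(0.32, 2642)
print(f"95% CI: {low:.3f} to {high:.3f}")  # about 0.302 to 0.338
```

With these inputs the half-width comes out near 1.8 percentage points, consistent with the ±2 percent interval reported in the text.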
These steps have been mentioned in various sections of this appendix. There are many different levels at which an EEO situation can be handled before and during the actual EEO complaint process. Employees can involve supervisors and/or other management officials; institute, center, or division (ICD) EEO officers; and others in the pursuit of resolution before filing informal complaint paperwork with NIH’s Office of Equal Opportunity (OEO). Department of Health and Human Services (HHS) officials estimated the cost of processing an informal complaint in NIH’s OEO during fiscal year 1994 to be about $860. If the complaint is not resolved and the employee chooses to file a formal complaint with HHS, an additional $8,700 in costs could be borne by HHS’ Office of Human Relations and NIH’s OEO. This includes the cost of an investigation, which HHS contracts out to an investigative firm. The procedures for handling sexual harassment complaints differ from those established for handling other types of EEO complaints. In order to speed up the process, an investigation is contracted for when an informal complaint has been filed. This shifts the costs for the investigation from the formal to the informal stage. An HHS official said that under this process, total costs (informal and formal) can range from $10,225 to $11,825. Our work did not include an analysis of the difference in cost between the two approaches. It should be noted that these cost estimates cannot be applied to all cases. Each case is unique—a complaint can be resolved at any step in the process or it may involve others outside of the normal EEO process. Also, none of these estimates include costs accrued at the ICD level, lost work time, settlement costs, complaints pursued through processes other than EEO (i.e., grievances), and costs that go beyond the formal complaint stage. 
NIH attorneys can become involved if the employee chooses NIH’s alternative dispute resolution process before filing an informal complaint. However, the employee can later file an informal complaint if he/she is not satisfied with the outcome. NIH attorneys are also involved in EEO complaints that are appealed to the Equal Employment Opportunity Commission’s (EEOC) Office of Federal Operations if the complainant is not satisfied with the outcome of the formal complaint stage. HHS attorneys and Justice Department officials defend NIH if the complainant decides to appeal the case beyond the EEOC to the court system.

Norman A. Stubenhofer, Assistant Director, Federal Management and Workforce Issues
Jan E. Bogus, Evaluator-in-Charge
Annette A. Hartenstein, Evaluator
Michael H. Little, Communications Analyst
James A. Bell, Assistant Director, Design, Methodology, and Technical Assistance Group
James M. Fields, Senior Social Science Analyst
Stuart M. Kaufman, Senior Social Science Analyst
Gregory H. Wilmoth, Senior Social Science Analyst
George H. Quinn, Jr., Computer Programmer Analyst

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO examined the extent and nature of sexual harassment and sex discrimination at the National Institutes of Health (NIH). GAO found that: (1) 32 percent of NIH employees surveyed reported experiencing some form of sexual harassment in the past year, but 96 percent of these employees opted not to file an equal employment opportunity (EEO) complaint or take other personnel action; (2) NIH employees filed 32 informal and 20 formal sexual harassment complaints between October 1990 and May 1994; however, no determinations of sexual harassment were made in response to these complaints; (3) about 13 percent of NIH employees believed they had experienced sex discrimination over the last 2 years, but 90 percent of these employees chose not to file grievances or EEO complaints; (4) NIH employees filed 209 informal and 111 formal sex discrimination complaints between October 1990 and May 1994; however, no determinations of sex discrimination were made in response to the formal complaints; and (5) although NIH has recently acted to improve its EEO climate, more could be done in the areas of timeliness, information, and training.
Illegal immigration has long been an important issue in California, which historically has been estimated to be the state of residence for nearly half of this country’s illegal aliens. Illegal aliens are a concern not only because they are breaking immigration laws, but also because their presence affects a wide range of issues of public concern. These issues include the government costs of providing benefits and services to illegal aliens and the impact illegal aliens’ presence has on the employment of U.S. workers. In an effort to reduce the size of the nation’s illegal alien population, estimated at 3 million to 5 million in 1986, the Congress enacted the Immigration Reform and Control Act of 1986 (IRCA). IRCA reduced the size of the illegal alien population by granting legal status to certain aliens already in the country and attempted to deter the inflow of illegal aliens by prohibiting employers from hiring any alien not authorized to work. Despite a brief drop in illegal entries to the United States after IRCA was enacted, the size of the illegal alien population is now estimated to have exceeded the lower bound of the pre-IRCA estimate. INS and the Bureau of the Census estimated the population of illegal aliens ranged from 3.4 million to 3.8 million in 1992. At the same time, governments at all levels began experiencing fiscal crises that heightened public concerns about the costs of providing benefits and services to illegal aliens. Illegal aliens are not eligible for most federal benefit programs, including Supplemental Security Income, Aid to Families With Dependent Children (AFDC), food stamps, unemployment compensation, and financial assistance for higher education.
However, they may receive certain benefits that do not require legal immigration status as a condition of eligibility, such as Head Start and the Special Supplemental Food Program for Women, Infants, and Children. Furthermore, illegal aliens may apply for AFDC and food stamps on behalf of their U.S. citizen children. Though it is the child and not the parent in such cases who qualifies for the programs, benefits help support the child’s family. Education, health care, and criminal justice are the major areas in which state and local governments incur costs for illegal aliens. Regarding education, the U.S. Supreme Court has held that states are prohibited from denying equal access to public elementary and secondary schools to illegal alien children. State and local governments bear over 90 percent of the cost of elementary and secondary education. To provide for certain medical services, the Congress in 1986 revised the Social Security Act to stipulate that illegal aliens are eligible for emergency services, including childbirth, under the Medicaid program. The federal government and the state of California each pay 50 percent of the cost of these benefits for illegal aliens in California. In California and New York, illegal aliens are also eligible to receive Medicaid prenatal services. States also incur costs for incarcerating illegal alien felons in state prisons and supervising those released on parole. Section 501 of IRCA authorizes the Attorney General to reimburse states for the cost of incarcerating illegal aliens convicted of state felonies. Illegal aliens generate revenues as well as costs; these revenues offset some of the costs that governments incur. Research studies indicate that illegal aliens do pay taxes, including federal and state income taxes, Social Security taxes, and sales, gasoline, and property taxes. 
Researchers disagree on the amount of the revenues illegal aliens generate and the extent to which these revenues offset government costs for benefits and services. However, they agree that the fiscal burden for aliens overall, including illegal aliens, falls most heavily on state and, especially, on local governments and that the federal government receives a large share of the taxes paid by aliens. To examine the costs of elementary and secondary education, Medicaid, and adult incarceration associated with illegal aliens residing in California, we evaluated the reasonableness of the assumptions and methodologies underlying the cost estimates published by the state of California in its January and September 1994 studies and the Urban Institute in its Fiscal Impacts study. We also reviewed the revenue estimates for illegal aliens contained in California’s September study and the Fiscal Impacts study. (California’s January 1994 study did not include revenue estimates.) The California study included estimates for 13 types of federal, state, and local revenues; the Fiscal Impacts study’s estimates were limited to 3 types of revenues. With assistance from Urban Institute researchers, we used the Fiscal Impacts study and another study published by the Urban Institute to extrapolate estimates for the remaining 10 types of revenues. This enabled us to compare the revenue estimates in the California and Fiscal Impacts studies. (See app. I for a detailed discussion of the methodology we used to develop these additional revenue estimates.) We convened a panel of experts in May 1994 to obtain their opinions regarding the reasonableness of California’s January 1994 estimates and the underlying methodologies, and interviewed state officials and private researchers. (See app. II for a list of the researchers we consulted.) 
In conjunction with related work we have done for several congressional requesters on the national fiscal impact of illegal aliens, we also examined the relevant research on the costs and revenues—at all levels of government—associated with illegal aliens. Some of the issues raised in these studies were relevant to our review, and we have incorporated them in our analysis. Assessing California’s cost estimates was complicated by the fact that the state’s estimates are for California fiscal year 1994-95. That is, the estimates are projections of future costs and are only valid to the extent that the growth trends assumed in the projections hold true. We did not assess the validity of the growth trends. In addition, we did not independently verify California’s administrative data for Medicaid and incarceration because we had no reason to believe that the data on expenditures and number of recipients in these programs presented any special concerns about reliability. We did our work between April and September 1994 in accordance with generally accepted government auditing standards. As of September 1994, California estimated that it will spend $2.35 billion on elementary and secondary education, Medicaid, and adult incarceration for illegal aliens in fiscal year 1994-95. California officials believe that these three programs represent the state’s highest costs for illegal aliens. This estimate is $80 million lower than California’s January 1994 estimate primarily because the education estimate was reduced. In the September estimate, California reduced its projections of the numbers of illegal aliens who will receive education or Medicaid services, or be incarcerated in state prisons. At the same time, however, this new estimate added in administrative costs not previously included and for education and adult incarceration, added capital costs. The net effect of these adjustments is shown in table 1. 
The Urban Institute’s Fiscal Impacts study estimated costs lower than California’s estimates for all three programs (see table 1). This is in part because the Fiscal Impacts study estimated costs for earlier years—the education estimate was for the 1993-94 school year; Medicaid, for fiscal year 1992-93; and adult incarceration, for 1994. Other reasons for the lower estimates in the Fiscal Impacts study varied by program, as described in the following sections. The cost estimates in the California and Fiscal Impacts studies are questionable because of the limited direct data available on illegal aliens and certain assumptions made by the studies. For example, estimates of the cost of education—the single largest cost associated with illegal aliens—are based entirely on assumptions about the size and characteristics of the illegal alien population. However, by combining selected data and assumptions from both California’s September 1994 estimates and the Fiscal Impacts study, we developed adjusted estimates for education and adult incarceration that we believe are more reasonable than either study’s original estimates. We did not adjust the state’s Medicaid estimate because the necessary data are not currently available. It is important to note that none of the estimates of education or incarceration costs represents the amount that would actually be saved if California did not educate or incarcerate illegal aliens. This is because the estimates are based on mean costs: total cost divided by total number of users. Mean costs include both variable costs, which are affected by the number of individuals using the service, and fixed costs—such as certain administrative costs—which are not. The amount that would be saved if illegal aliens did not receive these services could either be less than the mean costs or greater (for example, if new schools would otherwise have to be built). 
The state of California now estimates that it will spend $1.5 billion to educate illegal alien children in fiscal year 1994-95. The Fiscal Impacts study estimated California’s education costs at $1.3 billion for school year 1993-94. The Fiscal Impacts estimate was lower not only because it covered an earlier year, but also because the study relied on a different data source to develop its per pupil cost figure. Selecting the components of each estimate that we believe are more reasonable, we adjusted California’s fiscal year 1994-95 estimate upward to $1.6 billion. The education cost estimates were derived by multiplying estimates of the following components: (1) the size of the state’s illegal alien population, (2) the percentage of this population that is of school age, (3) the percentage of school-aged illegal aliens enrolled in school, (4) the percentage of school days actually attended, and (5) the statewide average cost per pupil. The studies used an indirect method to estimate the number of illegal alien children in school because school districts do not collect information on the immigration status of students. According to California state officials, many school districts believe the U.S. Supreme Court decision, Plyler v. Doe, prohibits them from asking about immigration status. To develop each of the cost components, the state of California and Urban Institute researchers relied on research studies and published estimates. For their estimates of the illegal alien population, California’s September 1994 study and the Fiscal Impacts study used recently revised INS population estimates; the small difference between the two estimates can be explained by the different years being estimated (see table 2). For the adjusted estimate, we used California’s September estimate of 1.7 million illegal aliens because it is for the same time period (fiscal year 1994-95). 
The state had previously estimated its illegal alien population at 2.3 million—a figure that was probably too high. The basis of California’s January 1994 population figure was a 1993 Census Bureau estimate of 2.1 million illegal aliens in California; the state assumed this population would grow by 100,000 each year. This assumption was based on the Census Bureau estimate that the illegal alien population is growing nationally by 200,000 each year and that about 50 percent of illegal aliens live in California. However, researchers at the Census Bureau and INS have recently estimated that the percentage living in California may be lower, ranging from about 38 to 45 percent. Moreover, INS estimates that the size of the illegal alien population is smaller, but growing more rapidly. California’s September 1994 study and the Fiscal Impacts study both relied on an indirect method to estimate the percentage of the illegal alien population that is of school age and the percentage of school-aged illegal aliens enrolled in school. The method involves constructing a proxy population based on INS estimates of the breakdown of the illegal alien population by country of origin. The proxy population consists of people who entered the United States from countries that contribute most of the illegal alien population. The education cost estimates in the California and Fiscal Impacts studies are based on 1990 Census data on the age distribution and school enrollment of the studies’ proxy populations. However, the studies differed in their assumptions about the appropriate age range to include—the Fiscal Impacts study included illegal aliens aged 5 to 19, while California included those aged 5 to 17 in its estimate. This difference resulted in the Fiscal Impacts study estimating a higher percentage of school-aged illegal aliens, but a lower percentage enrolled in school, to adjust for the likelihood that fewer 18- and 19-year-olds attend high school (see table 2). 
For the adjusted estimate, we used the Fiscal Impacts study’s assumptions for these two components of the cost estimate because data indicate some 18- and 19-year-olds do attend high school. California’s September 1994 estimate included a component that adjusted its enrollment estimate, which was based on fall enrollment, for the percentage of school days actually attended (“average daily attendance”). This adjustment was necessary because California’s average cost per pupil is based on average daily attendance, not fall enrollment. This adjustment was not needed in the Fiscal Impacts study because its estimate of per pupil cost was based on fall enrollment. Our adjusted estimate used California’s figure for the percentage of school days actually attended (98.2) because it also used California’s figure for average cost per pupil with some adjustments (as explained in the following paragraphs). The per pupil cost figure California included in its September 1994 estimate was considerably higher than that used in its January 1994 estimate—$4,977 compared with $4,217—even though both estimates were for fiscal year 1994-95. Both figures were derived from a statewide average based on state and local public school expenditures. However, state officials told us that their September estimate included additional funding sources that are used to pay education costs, as well as some additional costs (for example, debt service costs on bonds for school facilities and certain administrative costs). The Fiscal Impacts study, in contrast, used state-specific data on current expenditures from the National Center for Education Statistics (NCES). The study used these data to develop standardized cost estimates for the seven states included in the study. 
However, while the NCES data are one possible source of education cost data, there is no agreed-upon standard on the expenditures that should be included in calculating per pupil costs, according to the authors of the Fiscal Impacts study and budget and education experts we spoke with. Using the NCES data produced a lower estimate of California’s per pupil costs ($4,199) because the data do not include the range of funding sources used in the state’s cost estimate, nor do they include capital costs such as debt service on bonds. For the adjusted estimate, we used California’s September 1994 per pupil cost figure but subtracted two questionable cost items to yield an adjusted figure of $4,830. The state had included $78 per pupil for adult education costs; state officials acknowledged that this amount should not have been included. In addition, we subtracted the interest portion of the debt service cost—$69 per pupil. Experts disagree about how to treat debt service in calculating per pupil expenditures; however, we identified OMB cost principles that may provide a standard for treating such capital costs. These cost principles establish standards for determining the allowable costs of federal grants, contracts, and other agreements administered by state and local governments. The OMB cost principles specify that depreciation is an allowable cost, but interest payments are not. Experts we spoke with suggested that statewide average cost data may not be the best measure of the costs of providing illegal alien children with a public education. They suggested that researchers should instead use estimates based on the costs incurred by districts where illegal aliens are believed to be most heavily concentrated, such as Los Angeles County. However, the Fiscal Impacts study reported, and state officials concurred, that the necessary data are not available. 
State officials said they did not believe more localized cost data would result in estimates significantly higher or lower than estimates based on the statewide average. On the basis of congressional action in 1986, illegal aliens are eligible for emergency Medicaid services only. In addition, some legal aliens are eligible for emergency services only. These include foreign students, temporary visitors, and aliens granted temporary protected status. California has estimated that it will spend $395 million for Medicaid benefits provided to illegal aliens during fiscal year 1994-95. The Fiscal Impacts study, while questioning the accuracy of California’s estimate, did not develop an alternative estimate because data were not available to do so. Instead, it developed a “benchmark” cost range for purposes of comparison. However, it is questionable whether this benchmark provides a good basis for comparison. We made no adjustments to the state of California’s Medicaid estimate because the data needed to correct for elements that lead to possible over- or understatement of costs are not currently available. The state’s estimate was based on administrative cost data for services provided to all individuals eligible for emergency Medicaid services only, not just illegal aliens. California’s estimate may thus include some legal aliens because, at the time this estimate was developed, agency officials were legally prevented from inquiring about the immigration status of people who applied for emergency Medicaid benefits. California state officials do not have data on the extent to which legal aliens may be receiving these limited benefits. California officials told us that their cost estimate does not include all the illegal aliens they are serving under the Medicaid program. 
They said it does not include costs for illegal aliens who (1) are tracked in other eligibility categories, such as those for pregnant women and children, or (2) provide fraudulent documents to get full Medicaid benefits. However, state officials noted that they do not have data on the costs of Medicaid services provided to these illegal aliens. The Fiscal Impacts study used Medicaid data on formerly illegal aliens who were granted legal status under IRCA as a “benchmark” against which to assess the estimates of the seven states included in the study. The legalized alien population has many of the same characteristics as the current illegal alien population and, therefore, provides a useful basis for comparison, according to this study. The estimated range that the Fiscal Impacts study used to assess California’s Medicaid estimate—$113 million to $167 million—was considerably lower than the state’s estimate for people receiving emergency services only (see table 3). Some of the difference between California’s Medicaid estimates and those in the Fiscal Impacts study may be due to California’s inclusion of certain legal aliens in its estimate. However, differences between legalized and illegal aliens’ use of Medicaid may also explain why California’s estimate was higher. For example, the Fiscal Impacts study acknowledged that illegal aliens may be more likely than legalized aliens to use emergency Medicaid services because they know their immigration status will not be questioned. In addition, California’s administrative data indicate that illegal aliens have somewhat higher average Medicaid expenditures than aliens who were granted legal status under IRCA. Furthermore, differences in demographic characteristics of the two populations suggest that they may differ in their ability to qualify for Medicaid. In sum, these considerations raise doubt about whether the Fiscal Impacts study’s benchmark cost range was based on a comparable population. 
California state officials’ inability to ask about immigration status has, they believe, hindered their ability to fully account for all illegal aliens receiving Medicaid. The state court injunction that prohibited officials from asking applicants for emergency Medicaid benefits about their immigration status was initially overturned by the California Court of Appeal. However, the injunction is currently in effect pending a decision from the California Supreme Court. State officials told us they believe that if the injunction is ultimately lifted, it would enable them to collect more accurate data on the number of illegal aliens receiving emergency Medicaid services. The state of California estimated that it will spend nearly $424 million in fiscal year 1994-95 to incarcerate illegal aliens in its prisons. In contrast, the Fiscal Impacts study estimated California’s adult incarceration costs for 1994 at about $368 million. The state’s estimate was higher primarily for two reasons—state officials estimated a higher illegal alien prison population and included debt service costs on bonds for prison facilities. We adjusted California’s estimate downward to $360 million based on what we believe are the more reasonable of the assumptions used to develop the estimates (see table 4). The Fiscal Impacts study’s estimate of the number of illegal aliens in California’s prisons is more reliable than the state’s because the study directly estimated the number of illegal aliens. INS officials assisted in this study by matching prison records against several INS databases to determine prisoners’ immigration status and by conducting follow-up interviews with a sample of prisoners whose status could not be determined through the INS database matches alone. These data on prisoners’ immigration status were developed specifically for the Fiscal Impacts study and were not available to the state of California as it prepared its estimate. 
The state’s estimate was overstated because it was based on the number of inmates with INS detainers. This category, which refers to inmates who are subject to an INS hearing and possible deportation at the completion of their prison sentences, also includes legal aliens who are deportable because of the nature of the crimes they committed. The Fiscal Impacts study concluded that the state’s estimate of California’s adult illegal alien prison population was overstated by about 10 percent. We therefore adjusted the state’s population estimate downward by 10 percent to reflect this new information. As with their education cost estimates, the state and the Fiscal Impacts study used different data sources to estimate the average cost per inmate. The Fiscal Impacts study relied on data from the 1990 Census of State Prisons and adjusted for inflation using the Consumer Price Index. The study used this data source because it provided a uniform basis for comparing the seven states’ estimates. However, the Census of State Prisons cost data, like the NCES education cost data the Fiscal Impacts study used, do not represent an agreed-upon standard for calculating the cost per inmate. Using the Census of State Prisons data and adjusting for inflation resulted in a higher estimate of per inmate cost than using the cost data from California’s Department of Corrections, as shown in table 4. For the adjusted estimate, we used the state’s September estimate of per inmate cost because it was based on more recent data than the Census of State Prisons. California’s revised adult incarceration cost estimate is nearly 13 percent higher than its previous estimate of about $376 million for fiscal year 1994-95 (see table 4). While the state slightly lowered its estimates of the illegal alien prison population and the per inmate cost, it added a new cost item—$51 million for debt service on bonds for prison facilities. 
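The adjustments just described, together with the exclusion of the $27 million interest portion of the new debt-service item under OMB cost principles, roughly reconcile California's revised estimate with the $360 million adjusted figure. The split below, which applies the 10 percent population reduction only to the non-debt portion of the estimate, is our illustrative assumption, not an allocation stated in the report:

```python
# Rough reconciliation of the adjusted adult incarceration estimate.
# Assumption (ours): the 10 percent population reduction applies only to the
# operating portion of the estimate, not to the fixed debt-service item.

state_estimate = 424_000_000  # California's revised fiscal year 1994-95 estimate
debt_service = 51_000_000     # new debt-service item on prison-facility bonds
interest = 27_000_000         # interest portion, excluded per OMB cost principles

operating = state_estimate - debt_service        # $373 million
adjusted = operating * 0.90 + (debt_service - interest)

print(f"${adjusted:,.0f}")  # $359,700,000 -- roughly the $360 million adjusted figure
```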
As with the state’s education estimate, we subtracted the interest portion of this amount—$27 million—based on OMB cost principles for treating capital costs (see p. 11). As with the cost estimates, estimating the tax revenues collected from illegal aliens is difficult because of the lack of direct data on this population. Researchers must rely on indirect estimation methods that make numerous assumptions about this population. These include assumptions about income, lifestyles, consumption patterns, tax compliance, and population size. Differences in assumptions about these variables can generate considerable variation in estimates of revenues from illegal aliens. The September 1994 study by the state of California and the Fiscal Impacts study each developed estimates of revenues from illegal aliens in California. However, variations in the years of the estimates and the types of revenues estimated complicate comparison of the studies. To facilitate comparison, we used the Fiscal Impacts study and another study by an Urban Institute researcher to extrapolate estimates of selected revenues not included in the Fiscal Impacts study. We found that although the extrapolated revenue estimates fell within the range estimated by California, the estimates still varied considerably. This variation reflects differences in the studies’ methodologies and assumptions. The California study based its estimates on projections from studies that estimated revenues from illegal aliens in various locations: (1) Los Angeles County, (2) California, (3) Texas, and (4) the United States. The Fiscal Impacts study used revenue estimates from a single study the researchers regarded as the best available (a study of Los Angeles County) and adjusted these estimates to project them to the state of California.
The limited data available to support the assumptions of the California study and the Fiscal Impacts study precluded us from drawing a conclusion about which, if either, of these studies provides a reasonable estimate of revenues from illegal aliens in California. The January 1994 cost estimates from California did not include estimates of any revenues from illegal aliens in California; hence, they provided an incomplete picture of the fiscal impact of this population. In contrast, the September 1994 California study included an estimate of eight types of state and local revenues for fiscal year 1994-95. The study provided an estimate ranging from a low of $528 million to a high of $1.4 billion, with a median estimate of $878 million. This estimate was based on projections by the state of several studies on the fiscal impact of illegal aliens in different geographical areas. The high estimate incorporated parameters from these studies that, according to the state, most magnify the contributions of illegal aliens; the low estimate incorporated parameters that most deflate their contributions. The Fiscal Impacts study estimated that illegal aliens in California paid $732 million in 1992 in three types of taxes: state income taxes, state sales taxes, and state and local property taxes. However, the Fiscal Impacts study did not develop estimates of the five other types of state and local revenues included in the state’s study. To compare the two sets of estimates, we developed estimates of these five types of revenues using the methodology from the Fiscal Impacts study and a national study by an Urban Institute researcher. (App. I describes our methodology.) Adding our extrapolated estimate for these five types of revenues to the $732 million estimate for the three types of revenues produced a total state and local tax revenue estimate of $1.1 billion for 1992. 
The California study and the Fiscal Impacts study reflect differing views about the magnitude of revenues generated by illegal aliens in California. If the estimate extrapolated from Urban Institute studies were updated to fiscal year 1994-95, it would probably be at the high end of the range estimated by California. In contrast, the California study maintained that its median estimate of state revenues probably overstated revenues and should be treated as an upper bound. (In California’s study, state revenues constituted over 75 percent of total estimated revenues from state and local sources.) The September 1994 California study included an estimate for fiscal year 1994-95 of five types of federal revenues from illegal aliens in California. The study provided an estimate ranging from a low of $542 million to a high of $2 billion, with a median estimate of $1.3 billion. The Fiscal Impacts study did not estimate any federal revenues from illegal aliens in California. However, we used the study’s revenue estimation assumptions for California, along with a national study by an Urban Institute researcher, to extrapolate estimates of the five types of federal revenues estimated by California. (App. I describes our methodology.) This produced a federal revenue estimate of $1.3 billion for 1992. If this estimate were updated to fiscal year 1994-95, it would probably be between the California study’s median and high estimates. However, the California study maintained that both the high and median estimates probably overstated the amount of federal revenues generated by illegal aliens in California. As a result, there is no agreement about the magnitude of federal revenues generated by this population. California’s September 1994 study estimated not only individual costs and revenues but also the state’s net cost (costs minus revenues) for illegal aliens.
In contrast, the Fiscal Impacts study did not estimate net costs for illegal aliens in California because it examined only selected costs and revenues. We identified one other study that attempted to provide a comprehensive accounting of the costs and revenues for illegal aliens in California. This study, by Donald Huddle, included an estimate of the net cost for this population in 1992. However, for several reasons, we were unable to draw any conclusion about California’s net cost for illegal aliens. In the case of the California study, we were unable to assess the reasonableness of its net cost estimate because data limitations precluded us from assessing California’s revenue estimates. With regard to the study by Huddle, we could not extract an estimate of the net cost to the state of California because the study’s cost estimates did not provide a breakdown of federal, state, and local costs. Consequently, we were unable to compare the study’s estimates with those in California’s study. Recognizing the problems associated with estimating the fiscal impact of illegal aliens, OMB and the Department of Justice requested the Fiscal Impacts study to help the federal government assess states’ requests for reimbursement of illegal alien costs. The study represents an initial effort to standardize and improve states’ methodologies for estimating selected costs and revenues. However, because the study was released recently, it is too early to know whether, and to what extent, California and the other six states in the study will agree with and accept the study’s efforts to standardize and improve the states’ methodologies. OMB officials have not yet indicated how they will use the study in assessing states’ requests for federal reimbursement of illegal alien costs. One other federal effort is under way to improve estimates of illegal aliens’ fiscal impact. The U.S. 
Commission on Immigration Reform is engaged in a long-term project that includes an effort to develop better estimates of the fiscal impact of legal and illegal aliens. This bipartisan congressional commission, created by the Immigration Act of 1990, is working on a report to the Congress on a wide range of immigration issues. The final report is due in 1997; the Commission provided an interim report to the Congress in September 1994. As part of its study, the Commission has convened a task force of independent experts to review some of the estimates of aliens’ fiscal impact and develop a better understanding of how to measure this impact. Our review of estimates of the fiscal impact of illegal aliens shows that the credibility of such estimates is likely to be a persistent issue, given the limited data available on this population and differences in key assumptions and methodologies used to develop the estimates. For example, the studies we examined differed in their treatment of capital costs, the age groups they used to estimate education costs, and their methodologies for estimating revenues. While it probably will be difficult to obtain better data on the illegal alien population, greater agreement about appropriate assumptions and methodologies could help narrow the range of estimated costs and revenues. We believe state and federal officials need to reach consensus on the approaches that should be used in developing estimates of illegal aliens’ net fiscal impact. This consensus would not necessarily produce estimates that are completely accurate, but at least it would produce estimates viewed as reasonable, given the limited data available. Instead of being confronted with an array of competing estimates, lawmakers would have information that would be more useful in assessing illegal aliens’ fiscal impact. 
We obtained written comments on a draft of this report from California state officials and the Urban Institute researchers who authored the Fiscal Impacts study. While California officials found no factual errors in the report, they argued that the report overstates data problems associated with estimates of costs for illegal aliens. They also maintained that the different studies’ cost estimates were essentially identical. However, we found that the estimates did vary; moreover, most were based on indirect methods whose reliability is unknown. As noted in this report, we identified a number of problems with the cost estimates for education, Medicaid, and incarceration. California officials also provided comments on the Medicaid section that we incorporated where appropriate. (See app. III.) Urban Institute researchers agreed with our assessment of the different estimates and their relative strengths and weaknesses. The researchers also provided technical comments that we incorporated where appropriate. (See app. IV.) As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from the date of this letter. At that time, we will send copies to interested parties and make copies available to others upon request. If you or your staff have any questions concerning this report, please call me on (202) 512-7215. Other GAO contacts and staff acknowledgments are listed in appendix V. This appendix describes the methodology we used to extrapolate estimates of selected tax revenues from illegal aliens in California from two studies by Urban Institute researchers. The most recent, Fiscal Impacts of Undocumented Aliens: Selected Estimates for Seven States (the Fiscal Impacts study), estimated three types of state and local revenues from illegal aliens in California and other states (state income tax, state sales tax, and state and local property tax) for 1992. 
An earlier study, Immigrants and Taxes: A Reappraisal of Huddle’s “The Cost of Immigrants” (the Immigrants and Taxes study), estimated 13 types of federal, state, and local revenues from illegal aliens in the United States for 1992. We used these studies to develop estimates of five types of state and local revenues (state excise tax, state lottery revenue, local sales tax, state vehicle license and registration fees, and state gasoline tax) and five types of federal revenues (income tax, excise tax, Federal Insurance Contributions Act [FICA] tax, unemployment insurance tax, and gasoline tax) from illegal aliens in California in 1992. The first section summarizes the methodology used by the two studies to estimate revenues from illegal aliens. The second section describes how we used this methodology to extrapolate estimates of state and local revenues. The third section describes how we extrapolated estimates of federal revenues. Both studies by Urban Institute researchers employed a methodology called “ratio generalization,” which takes detailed revenue estimates for illegal aliens in one locality and generalizes them to other areas. The studies used estimates of taxes paid per capita and per household by illegal aliens in Los Angeles County in 1992. They applied three adjustment ratios to account for differences between Los Angeles County and the geographic areas they were concerned with (California in the Fiscal Impacts study and the United States in the Immigrants and Taxes study). For each of the five types of state and local revenues we estimated, we began with the estimate of the per capita tax payment by illegal aliens in Los Angeles County in 1992. We then took the values used in the Fiscal Impacts study for ratios 1 and 3, as well as the size of California’s illegal alien population. We used several sources to obtain values for ratio 2, the ratio of per capita tax payments for legal residents in California to Los Angeles County.
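The ratio-generalization arithmetic described above can be sketched as follows. The function name and all numeric values are illustrative placeholders of ours, not figures from either study:

```python
# Sketch of "ratio generalization": take a per capita tax estimate for
# illegal aliens in Los Angeles County and project it to California.
# All values below are hypothetical; they are not from the studies.

def extrapolate_revenue(per_capita_la_illegal,   # taxes per illegal alien, L.A. County, 1992
                        ratio_1,                 # adjustment ratio 1 (from Fiscal Impacts study)
                        per_capita_ca_legal,     # per capita tax, legal residents, California
                        per_capita_la_legal,     # per capita tax, legal residents, L.A. County
                        ratio_3,                 # adjustment ratio 3 (from Fiscal Impacts study)
                        ca_illegal_population):  # estimated illegal alien population of California
    # Ratio 2: per capita payments of California legal residents relative
    # to Los Angeles County legal residents.
    ratio_2 = per_capita_ca_legal / per_capita_la_legal
    per_capita_ca_illegal = per_capita_la_illegal * ratio_1 * ratio_2 * ratio_3
    return per_capita_ca_illegal * ca_illegal_population

# Hypothetical example: $50 per capita in L.A. County, neutral adjustment
# ratios, and an illegal alien population of 1.5 million.
estimate = extrapolate_revenue(50.0, 1.0, 400.0, 400.0, 1.0, 1_500_000)
print(f"${estimate:,.0f}")  # $75,000,000
```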
We took the values cited in the Immigrants and Taxes study for the per capita tax payments for legal residents in Los Angeles County. To estimate per capita tax payments for legal residents in California, we used Census Bureau data on revenue collected from California residents for each of the five types of revenues and divided these amounts by the size of California’s population. For each of the five types of federal revenues we estimated, we began with the estimate of the per capita tax payment by illegal aliens in Los Angeles County in 1992. We then took the values used in the Fiscal Impacts study for ratios 1 and 3, as well as the size of California’s illegal alien population. In estimating ratio 2, the ratio of per capita tax payments for legal residents in California to Los Angeles County, we were able to obtain data on per capita taxes by state for only one of the five types of federal revenue—income tax. We used Census Bureau data on per capita federal income tax collected from California residents to estimate per capita income tax payments for legal residents in California. For our estimates of California per capita payments for the other four types of federal revenues, we used the United States average per capita tax payment figures cited in the Immigrants and Taxes study. As before, we took the values cited in the Immigrants and Taxes study for the per capita tax payments for legal residents in Los Angeles County. George J. Borjas, Professor of Economics, University of California, San Diego Rebecca L. Clark, Program for Research on Immigration Policy, The Urban Institute, Washington, D.C. Richard Fry,* Division of Immigration Policy and Research, Bureau of International Labor Affairs, U.S. Department of Labor, Washington, D.C. Briant Lindsay Lowell,* Division of Immigration Policy and Research, Bureau of International Labor Affairs, U.S. Department of Labor, Washington, D.C. 
Demetrios Papademetriou,* Carnegie Endowment for International Peace, Washington, D.C. Jeffrey S. Passel, Program for Research on Immigration Policy, The Urban Institute, Washington, D.C. *Expert panel participant. In addition to those named above, the following individuals made important contributions to this report: Linda F. Baker, Senior Evaluator; Alicia Puente Cackley, Senior Economist; Steven R. Machlin, Senior Social Science Analyst; and Stefanie G. Weldon, Senior Attorney. Clark, Rebecca L. The Costs of Providing Public Assistance and Education to Immigrants. Washington, D.C.: The Urban Institute, May 1994 (revised Aug. 1994). Clark, Rebecca L., and others. Fiscal Impacts of Undocumented Aliens: Selected Estimates for Seven States. Washington, D.C.: The Urban Institute, Sept. 1994. “Cost Principles for State and Local Governments.” Federal Register, Vol. 46, No. 18. Jan. 28, 1981. Fernandez, Edward W., and J. Gregory Robinson. “Illustrative Ranges of the Distribution of Undocumented Immigrants by State.” Unpublished report, U.S. Bureau of the Census, 1994. Huddle, Donald. The Net Costs of Immigration to California. Washington, D.C.: Carrying Capacity Network, Nov. 4, 1993. Los Angeles County Internal Services Department. Impact of Undocumented Persons and Other Immigrants on Costs, Revenues and Services in Los Angeles County. Nov. 6, 1992. Passel, Jeffrey S. Immigrants and Taxes: A Reappraisal of Huddle’s “The Cost of Immigrants.” Washington, D.C.: The Urban Institute, Jan. 1994. Romero, Phillip J., and others. Shifting the Costs of a Failed Federal Policy: The Net Fiscal Impact of Illegal Immigrants in California. Sacramento, Calif.: California Governor’s Office of Planning and Research, and California Department of Finance, Sept. 1994. U.S. Bureau of the Census. Government Finances: 1990-91. Washington, D.C.: U.S. Government Printing Office. U.S. Bureau of the Census. State Government Finances: 1992. Washington, D.C.: U.S. Government Printing Office. U.S. 
Bureau of the Census. Statistical Abstract of the United States: 1994 (114th ed.). Washington, D.C.: U.S. Government Printing Office. U.S. Department of Education. Digest of Education Statistics: 1993. Office of Educational Research and Improvement, National Center for Education Statistics, NCES-93-292. Washington, D.C.: 1993. Warren, Robert. “Estimates of the Unauthorized Immigrant Population Residing in the United States, by Country of Origin and State of Residence: October 1992.” Unpublished report, U.S. Immigration and Naturalization Service, Apr. 29, 1994. Benefits for Illegal Aliens: Some Program Costs Increasing, But Total Costs Unknown (GAO/T-HRD-93-33, Sept. 29, 1993). Illegal Aliens: Despite Data Limitations, Current Methods Provide Better Population Estimates (GAO/PEMD-93-25, Aug. 5, 1993). Trauma Care Reimbursement: Poor Understanding of Losses and Coverage for Undocumented Aliens (GAO/PEMD-93-1, Oct. 15, 1992). Undocumented Aliens: Estimating the Cost of Their Uncompensated Hospital Care (GAO/PEMD-87-24BR, Sept. 16, 1987).
Pursuant to a congressional request, GAO reviewed the fiscal impact of illegal aliens residing in California, focusing on: (1) the Governor of California's 1994 and 1995 budget estimates for elementary and secondary education, Medicaid benefits, and adult incarceration; (2) the estimates of revenues attributable to illegal aliens; and (3) federal efforts to improve estimates of the fiscal impact of illegal aliens residing in California. GAO found that: (1) there are limited data on California's illegal alien population's size, use of public services, and tax payments and a lack of consensus on the appropriate methodologies, assumptions, and data sources to use in estimating the costs and revenues for illegal aliens in California; (2) using the most reasonable assumptions, it adjusted California's revised estimates on the costs of elementary and secondary education and adult incarceration for illegal aliens; (3) while its overall adjusted cost estimate of $2.35 billion agreed with the state's revised estimate, the component estimates differed; (4) the estimates of revenues attributable to illegal aliens ranged from $500 million to $1.4 billion, but data limitations prevented it from judging the reasonableness of the revenue estimates; and (5) although the Urban Institute has attempted to standardize and improve states' methodologies for estimating illegal aliens' costs to the public, many differences still remain that will require further consensus.
Dietary supplements and other alternative medicine products are widely used by seniors. For example, as many as 4 out of 10 senior citizens have reported using herbal dietary supplements. In 2000, total U.S. sales for the herbal and specialty supplement industry reached $5.8 billion. Research suggests that some of these products show promise for mitigating symptoms associated with certain health conditions. FDA, FTC, and state agencies all have oversight responsibility for alternative medicine products. A number of surveys have been conducted to determine the proportion of the population that uses alternative medicine products. One national survey of more than 2,000 adults conducted in 1997 found that 42 percent of Americans of all ages used at least one type of alternative therapy in the prior year for conditions such as back problems, fatigue, arthritis, high blood pressure, insomnia, depression, and anxiety. The survey found that 12 percent used herbal remedies. Other studies have found that 16 to 18 percent of Americans used dietary supplements, including amino acids and over-the-counter hormones. When considering only senior citizens, surveys have generally found that as many as 40 percent of seniors used herbal and specialty supplements at some time in the previous year, with a smaller percentage reporting regular use. For example, a survey conducted in 1999 for Prevention Magazine found that 43 percent of seniors used herbal supplements and 23 percent used specialty supplements in the previous year. This study also found that one-quarter of older Americans often use herbal and specialty supplements in combination with prescription medications.
A recent unpublished Harris Poll survey conducted for the Dietary Supplement Education Alliance (June and July 2001) found that 12 percent of those aged 65 or older used herbal supplements and 9 percent used specialty supplements on a regular basis. Surveys have found that many older Americans use these supplements to maintain overall health, increase energy, improve memory, and prevent and treat serious illness, as well as to slow the aging process, among other purposes. Products frequently used by seniors to address aging concerns include herbal supplements such as evening primrose, ginkgo biloba, ginseng, kava kava, saw palmetto, St. John’s wort, and valerian, and specialty supplements such as chondroitin, coenzyme Q10, dehydroepiandrosterone (DHEA), glucosamine, melatonin, omega-3 fatty acids (fish oil), shark cartilage, and soy proteins (see app. II for details regarding these substances). NIH’s National Center for Complementary and Alternative Medicine (NCCAM) has noted that preliminary evidence-based reviews suggest that some alternative therapies may have beneficial effects. These include St. John’s wort for depression, ginkgo biloba for dementia, and glucosamine and chondroitin sulfate for osteoarthritis. For example, one source stated that increased memory performance and learning capacity have been established experimentally for ginkgo biloba. One controlled study has shown positive results for ginkgo biloba in tests of cognitive performance in dementia. Similarly, some reviews have suggested that studies of glucosamine in the treatment of osteoarthritis found positive results, as did studies of St. John’s wort for depression. A systematic review of studies of St. John’s wort for depression found evidence of effectiveness in the treatment of mild to moderately severe depression, although it has also been associated with potentially dangerous interactions with prescription drugs. 
FDA, FTC, and state government agencies all have oversight responsibility for products marketed as anti-aging therapies. In general, the law permits FDA to remove from the market products under its regulatory authority that are deemed dangerous or illegally marketed. FDA’s regulation of dietary supplements is governed by the Federal Food, Drug, and Cosmetic Act as amended by DSHEA in 1994. DSHEA does not require manufacturers of dietary supplements to demonstrate either safety or efficacy to FDA prior to marketing them. However, if FDA subsequently determines that a dietary supplement is unsafe, the agency can ask a court to halt its sale. For dietary supplements, the Health and Human Services Secretary may declare the existence of an imminent hazard from a dietary supplement, after which the Secretary must initiate an administrative hearing to determine the matter, which may then be reviewed in court. DSHEA does not require dietary supplement manufacturers to register with FDA, or to identify to FDA the products they manufacture, and dietary supplement manufacturers are not required to provide the adverse event reports they receive to FDA. However, FDA does regulate nutritional and health claims made in conjunction with dietary supplements. FTC has responsibility for ensuring that advertising for anti-aging health products and dietary supplements is truthful and can be substantiated. FTC can ask companies to remove misleading or unsubstantiated claims from their advertising and it can seek monetary redress for conduct injurious to consumers in appropriate cases. FTC published an advertising guide for the dietary supplements industry in November 1998 that reminded the industry that advertising must be truthful and that objective product claims must be substantiated. State agencies can take action against firms that fraudulently market anti-aging and other health products. Some dietary supplements can have potentially serious health consequences for seniors. 
Although precise estimates of the physical harm caused to senior citizens by questionable anti-aging and alternative products are not available, there is evidence in the medical literature that seniors are at risk for adverse effects, that dietary supplements are contraindicated for individuals with some underlying health problems, and that a variety of frequently used dietary supplements can have dangerous interactions with drugs that are being taken concurrently. Although documented adverse effects from most herbal and specialty supplements are generally mild, potential complications from supplements that might be contraindicated under certain circumstances and from interactions with certain prescription medications may be serious. In addition, there is evidence that 1 in 10 herbal products may be contaminated with pesticides and heavy metals, which can have serious health consequences. Adverse event reports received by FDA and others give an indication of some possible risks. FDA has issued warnings to consumers and industry about the health risks of several dietary supplement products. Recognizing these health risks, the American Medical Association has recommended that dietary supplements and herbal remedies include specific warnings on their labels, and several trade associations representing manufacturers, suppliers, and distributors of dietary supplements have instituted voluntary programs to reduce the risk of potentially harmful products. Our review of the medical literature identified several areas where individuals, particularly seniors, may be at risk of physical harm due to adverse effects--especially if dietary supplements are used when they are contraindicated--or interactions between these dietary supplement products and prescription or over-the-counter drugs. The literature suggests that among healthy adults, most supplements when taken alone have been associated with only rare and minor adverse effects. 
These include stomach distress, headache, breast tenderness, restlessness, skin reactions, and hypersensitivity to sunlight. However, other supplements are associated with more serious adverse effects. For example, the literature suggests that DHEA may increase the risk of breast, prostate, and endometrial cancer, and shark cartilage has been associated with thyroid hormone toxicity. Contraindications have been identified in the literature for several supplements. Ginseng is not recommended for persons with hypoglycemia. Kava kava may worsen symptoms of Parkinson’s disease. Saw palmetto is contraindicated for patients with breast cancer, and valerian should not be used by those with liver or kidney disease without first consulting a physician. A recent study also suggested that echinacea (promoted to help fight colds and flu), ephedra (promoted as an energy booster and diet aid), garlic, ginkgo biloba, ginseng, kava kava, St. John’s wort, and valerian may pose particular risks to people during surgery, with complications including bleeding, cardiovascular instability, and hypoglycemia. Other potential complications cited were an increase in the sedative effect of anesthetics and increased metabolism of many drugs. The literature also identifies a number of possible interactions with prescription medications. Since seniors take more prescription medicines on average than do younger adults, the risk of interactions among seniors may be higher. For example, evening primrose oil, garlic, ginkgo biloba, ginseng, glucosamine, and St. John’s wort magnify the effect of blood-thinning drugs such as warfarin (Coumadin). We also identified reports suggesting that ginkgo biloba may reduce the effects of seizure medications and glucosamine may have a harmful effect on insulin resistance. 
An additional concern is that individuals with potentially serious health conditions may seek alternative therapies, some of which are unproven, in lieu of conventional medical therapies, and may do so without consulting their physician. For example, the Prevention Magazine survey we described earlier found that 39 percent of the respondents aged 65 or older who used an herbal supplement to prevent or treat a disease used the herbal remedy instead of an over-the-counter medication and 34 percent had tried an herbal remedy instead of a prescription medication. Surveys have also found that individuals who use alternative therapies (either in conjunction with or instead of traditional therapies) often do not discuss this fact with their physicians. For example, one survey found that only 39 percent of adults who used an alternative therapy said they informed their doctor, and a Harris poll survey found that 49 percent of respondents who used a dietary supplement informed their doctor. In addition, many respondents in that survey were found to have misperceptions about the responsible use of supplements. For example, one-third said they did not think it was necessary to follow recommended dosage guidelines. Nearly 40 percent thought they would benefit from having more information about avoiding potential adverse reactions. Commercial and scientific studies of selected dietary supplements have repeatedly found that contaminants may be present and that the amount of active ingredient present does not always match that indicated on the product label. Contaminants can pose significant health risks to consumers. Some pesticides and heavy metals, for example, are probable carcinogens and can be toxic to the liver and kidney or impair oxygen transport in the blood. One commercial laboratory found contamination in samples from echinacea, ginseng, and St. John’s wort products. As much as 20 times the level of pesticides allowable by the U.S. 
Pharmacopeia was found in two samples of ginseng. Overall, 11 percent of the herbal products tested were contaminated in some way. Three percent of the specialty supplement products showed signs of contamination. Some scientific studies have found that there may be significantly more active ingredient in some herbal and specialty supplement products than is indicated on the label. Amounts of active ingredients that exceed what is indicated on a product label may increase the risk of overdose for some patients. For pharmaceuticals, the tolerable range of product content is between 90 and 110 percent of the amount of active ingredient stated on the label. For example, one study of DHEA found that only 44 percent of the products sampled were within this range and one brand contained 150 percent of the amount indicated on the label. In a study of ephedra, one product was shown to have as much as 154 percent of the active ingredient indicated on the label. A study of feverfew (promoted as a migraine prophylaxis) found that 22 percent of the products tested contained more than 110 percent of what the authors considered to be the therapeutic dose of its active ingredient, in two cases doubling that amount. Studies of ginseng have found that product concentrations varied nearly fivefold across different products and that 38 percent of the products tested had more than 110 percent of the amount of active ingredient on the label, four of them containing more than twice as much. Studies of SAM-e (promoted as an antidepressant and in the treatment of the joint pain, stiffness, and inflammation associated with osteoarthritis) and St. John’s wort also found that products frequently contained more of the active ingredient than indicated on the label. This was true for 42 percent and 20 percent of the products tested, respectively. 
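The 90-to-110-percent pharmacopeial tolerance described above amounts to a simple range check on the assayed content of each sample. The following Python sketch is our own illustration of that check; the brand names and assay values are hypothetical and are not data from the studies cited.

```python
# Illustrative check against the pharmacopeial tolerance discussed above:
# a sample "passes" if its assayed content is 90-110 percent of the amount
# of active ingredient stated on the label. All sample values below are
# hypothetical, not figures from the report.

def within_tolerance(percent_of_label, low=90.0, high=110.0):
    """Return True if assayed content falls within the tolerable range."""
    return low <= percent_of_label <= high

# Hypothetical assay results, expressed as percent of labeled content.
samples = {"brand A": 150.0, "brand B": 104.0, "brand C": 95.0, "brand D": 62.0}

out_of_range = sorted(name for name, pct in samples.items()
                      if not within_tolerance(pct))
print(out_of_range)  # brands outside the 90-110 percent window
```

Under this check, a product assayed at 150 percent of label (like the DHEA brand described above) would fail on the high side, just as a product with only trace amounts would fail on the low side.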
Although FDA does not determine causality in the adverse event reports it receives, it does use these reports to signal possible risks to consumers from dietary supplements. The agency also consults other sources, such as reports in the medical literature, to identify dietary supplements that may be hazardous to consumers. In 1993, FDA published a list of dietary supplements for which evidence of harm existed. In 1998, the agency also published a guide to dietary supplements, which included a list of supplements associated with illnesses and injuries. FDA has also issued warnings and alerts for dietary supplements and posted those to its Web site. The most recent alert reiterated the agency’s concern, first noted in 1993, that the herbal product comfrey represents a serious safety risk to consumers from liver toxicity. In addition, the agency has issued warnings for products including, among others, chaparral, which is promoted as an antioxidant and cancer cure and is associated with nonviral hepatitis; aristolochic acid, which is sold as “traditional medicine” and has been associated with permanent kidney damage and some cancers; and L-tryptophan, which is promoted for insomnia and depression but has been associated with an autoimmune disorder and deaths. CDC has also identified reports of adverse events associated with dietary supplements and reported them in Morbidity and Mortality Weekly Report. CDC’s report about L-tryptophan also noted that the substance has led to at least 27 deaths. Medical organizations and trade associations that represent manufacturers, suppliers, and distributors of dietary supplements recognize that some health risks are associated with these products and have made recommendations and adopted voluntary programs to address some of the concerns. 
For example, the American Medical Association has issued a policy statement recommending that dietary supplements and herbal remedies include the following information on the product label: “This product may have significant adverse side effects and/or interactions with medications and other dietary supplements; therefore it is important that you inform your doctor that you are using this product.” The policy statement also recommends that manufacturers be required to label products with data on adverse effects, contraindications, and possible drug interactions. Trade associations that represent various manufacturers, suppliers, and distributors of dietary supplements have adopted voluntary programs to reduce the risks of potentially harmful products. Thus, the Consumer Healthcare Products Association has established eight voluntary programs focusing on either product manufacturing or labeling of specific products. For example, the association urges manufacturers to put quality control procedures in place to ensure that ginseng is free of quintozene (a potentially carcinogenic pesticide) and related compounds. For kava kava products, member companies are asked to include specific dosage limits and cautionary statements, and for comfrey and St. John’s wort products, members are asked to include general label warnings about the advisability of consulting a physician. The American Herbal Products Association has incorporated labeling and warning recommendations in its code of ethics for members. These include, among others, labeling recommendations for ephedra (with both warnings and serving limits), warnings for chaparral and pyrrolizidine alkaloids (which are found in comfrey and can cause fatal liver failure), and dosage limits and warnings for kava kava. The association has also suggested warning labels for both saw palmetto and St. John’s wort. 
The National Nutritional Foods Association requires all members who manufacture dietary supplements and herbs under their own label to participate in a quality assurance program. The program was established, in part, to increase confidence that products are accurately labeled. Products are registered, with random testing for content every 2 to 3 years. Association officials reported that approximately 25,000 product labels are currently registered under this program, estimating that this accounts for more than half of the dietary supplements on the market. The association also sponsors its own good manufacturing practices program, and 23 manufacturers are currently certified. Senior citizens who buy anti-aging and alternative medicine products may spend millions of dollars on products that either make unsubstantiated claims or contain less of the active ingredient than is indicated on the label. There are no overall estimates of economic harm attributable to questionable anti-aging products; however, federal officials have identified a number of expensive products making unsubstantiated claims. In an analysis of 20 of its cases for products targeted to senior citizens, FTC estimated that consumers as a whole spent an average of nearly $1.8 million annually per company. In addition, because some dietary supplement products contain little or none of the active ingredient listed on the product label, consumers may be spending millions of dollars per year on products that are virtually worthless. FTC and FDA have identified a number of anti-aging and alternative medicine companies making unsubstantiated advertising or labeling claims for their products. FTC does not have an estimate of economic harm attributable to these products, but some of these unproven products can cost hundreds or thousands of dollars apiece. 
For example, rife machines, which are frequently advertised on the Internet, can cost up to $5,000, and some herbal product packages for cancer cures can cost nearly $1,000. PC-SPES, an herbal supplement being studied for prostate cancer, costs more than $400 per month. FTC provided us with a partial estimate of economic harm based on 20 cases involving companies that fraudulently marketed unproven health care products commonly used by seniors and for which national sales data were available. FTC estimated the average annual sales at $1,759,000 per company. Consumers may purchase anti-aging and alternative medicine products that contain much less active ingredient than is indicated on the product label, thereby wasting their money on worthless products. Results of commercial laboratory tests and scientific studies that analyzed product contents for active ingredient levels have shown that some dietary supplement products contain far less of that active ingredient than labeled. For some products, analyses have found no active ingredient. A series of commercial laboratory analyses of herbal products showed that 22 percent of herbal supplements, and 19 percent of specialty supplements, contained substantially less active ingredient than the amount indicated on the label. Tests on echinacea products found that two had no detectable levels, and for valerian, four products were found to have none of the active ingredient. Six SAM-e products tested had less than half of the labeled amount of active ingredient. Studies published in the medical literature have shown similar results. In an analysis of DHEA products, nearly one-fifth contained only trace amounts or no active ingredient. In analyses of garlic products, most were found to release less than 20 percent of their active ingredient. 
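The partial FTC estimate cited above (20 cases, average annual sales of $1,759,000 per company) implies a straightforward aggregate. The short sketch below shows that arithmetic; the multiplication is our own illustration of the implied scale, not a total reported by FTC.

```python
# Arithmetic behind FTC's partial estimate: 20 cases involving companies
# that fraudulently marketed unproven health products, with average annual
# sales of $1,759,000 per company. The combined figure is our illustration,
# not a number FTC itself published.

num_companies = 20
avg_annual_sales = 1_759_000  # dollars per company, per FTC's estimate

total_annual_sales = num_companies * avg_annual_sales
print(f"${total_annual_sales:,} in combined annual sales")
```

Because the 20 cases are only those for which national sales data were available, this combined figure understates total spending on such products.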
One study of ginseng found that 35 percent of the products tested contained no detectable levels of an active ingredient, and another found no detectable levels in 12 percent of the tested products. Studies of SAM-e and St. John’s wort products also found that tested samples often contained less active ingredient than indicated on the label. The potential for harm to senior citizens from health products making questionable claims has long been a concern for public health and law enforcement officials, and federal and state agencies have activities under way to protect consumers from these products. FDA and FTC sponsor programs and provide educational materials for senior citizens to help them avoid health fraud on the Internet and in other media. NIH is funding research and research centers to evaluate popular anti-aging and alternative therapies. FDA has taken various enforcement actions against firms that have violated legal requirements regarding the marketing and sales of anti-aging and alternative products, including dietary supplements, but it has not prohibited the marketing of any specific substances using its administrative rulemaking authority. FDA’s voluntary adverse event reporting system for dietary supplements has shortcomings, and proposed regulations to establish good manufacturing practices for dietary supplements are still under review by the Office of Management and Budget. Through “Operation Cure.All,” FTC is trying to stop companies from making unqualified health claims that are not supported by credible scientific evidence, and it has been joined in these efforts by FDA and other agencies. At the state level, agencies are working to protect consumers of health products by enforcing state consumer protection and public health laws, although anti-aging and alternative products have received limited attention. Both FDA and FTC sponsor education activities that focus on health fraud and seniors. 
For example, public affairs specialists in several FDA district offices had exhibits at senior health fairs and health conferences where they distributed educational materials on how to avoid health fraud, as well as cautionary guidance on purchasing medicines and medical products online. In addition, officials in some districts made drug safety presentations that highlighted ongoing FDA programs, including the MedWatch adverse event reporting system that consumers are encouraged to use and which encompasses drugs, biological products such as vaccines, medical devices, dietary supplements, and food products. To help consumers discriminate between legitimate and fraudulent claims, FTC publishes consumer education materials on certain frequently promoted products and services, including hearing aids and varicose vein treatments. The agency also publishes guidelines on how to spot false claims and how to differentiate television shows from “infomercials.” Federal support of research on alternative therapies is provided by NIH’s NCCAM, which has developed research programs to fund clinical trials to evaluate the safety and efficacy of some popular products and therapies. The trials are studying alternative products and therapies for conditions such as arthritis, cardiovascular disease, and neurological disorders. There are also studies, either ongoing or planned, to examine the effects of glucosamine/chondroitin, St. John’s wort, ginkgo biloba, and others. (A list of NCCAM studies on alternative therapies relevant to seniors is provided in app. III.) In addition, the agency funds specialized, multidisciplinary research centers on alternative medicine in such areas as cardiovascular disease, neurological disorders, aging, and arthritis. FDA has taken enforcement actions against firms selling anti-aging products alleged to be dangerous or illegally marketed. 
It has taken actions to remove from the market anti-aging products that the agency found were actually unapproved new drugs or medical devices and actions against firms that promoted their dietary supplements for the treatment or cure of a disease. Although DSHEA allows FDA to remove from the market dietary supplements that the agency can prove are dangerous, the agency has not prohibited the marketing of any specific substances using its administrative rulemaking authority. However, the agency has taken steps to identify for consumers and industry ingredients it deems to be unsafe and unlawful. The agency has then pursued cases against specific manufacturers and products when the ingredients continued to be marketed in dietary supplements despite the agency’s warnings. FDA’s efforts in this regard have not always been successful, and many of these products remain on the market and are still available to consumers. A description of some of FDA’s recent enforcement activities is provided in appendix IV. FDA enforcement actions taken against products that it judged to be unapproved drugs or medical devices include court cases filed to halt distribution of laetrile products that claimed to cure cancer and to halt the sale of “Cholestin,” a red yeast rice product with lovastatin that was marketed with cholesterol-lowering claims. FDA also took action to halt the marketing of the “Stimulator,” a device that the manufacturer claimed would relieve pain from sciatica, swollen joints, carpal tunnel syndrome, and other chronic conditions. The devices have been purchased by many senior citizens, according to FDA officials. An estimated 800,000 of these devices were sold between 1994 and 1997. FDA has notified some dietary supplement manufacturers that their promotional materials have illegally claimed that their products cure disease, but some of these products are still available. 
For example, some manufacturers of colloidal silver products have claimed efficacy in treating HIV and other diseases and conditions. Even though FDA banned colloidal silver products as a U.S. over-the-counter drug in September 1999, after concluding that it was not aware of any substantial scientific evidence that supported the disease claims used in marketing the products, colloidal silver products may still be marketed as dietary supplements as long as they are not promoted with claims that they treat or cure disease. FDA sent several dozen “cyber-letters” by electronic mail to Internet-based companies making such claims stating that their therapeutic claims may be illegal. Despite these oversight activities, colloidal silver products claiming “natural antibiotic” properties to address numerous health conditions remain available. FDA has not initiated any administrative rulemaking activities to remove from the market certain substances that its analysis suggests pose health risks, but has sought voluntary restrictions and attempted to warn consumers. For example, aristolochic acid, a known potent carcinogen and nephrotoxin, is believed to be present in certain traditional herbal remedies as well as a number of dietary supplement products. Following reports of aristolochic-acid-associated renal failure cases in Europe, FDA has recently taken several steps. In May 2000, FDA issued a “letter to industry” urging leading dietary supplement trade associations to alert member companies that aristolochic acid had been reported to cause “severe nephropathy in consumers consuming dietary supplements containing aristolochic acid.” This letter also advised the industry that FDA had concluded that any dietary supplement that contained aristolochic acids was adulterated under the law and that it was unlawful to market such a product. 
At the same time, the agency issued an import bulletin (later converted to an import alert) that prohibited the importation of bulk and finished products that may contain aristolochic acids until the importer could provide direct analytical evidence that the product was free of these substances. In April 2001, the agency issued a new industry letter and consumer warning after its analysis of marketed products found that many contained aristolochic acids. This letter reiterated the agency’s conclusion that the marketing of such products was unlawful and that manufacturers needed to take steps to ensure that aristolochic-acid-containing products do not find their way into the marketplace. FDA pointed to another safety risk for consumers using herbal medicines in July 2001, when the agency announced that herbal comfrey products containing pyrrolizidine alkaloids may cause liver damage. The agency’s letter to eight leading dietary supplement trade associations urged them to advise their members to stop distributing comfrey products containing pyrrolizidine alkaloids. However, even though FDA has told firms that market dietary supplements that products that contain comfrey are adulterated and unlawful, some firms continue to market them, and the agency is left to identify and take action to remove them on a case-by-case basis as it becomes aware of them. As we reported in 1999, FDA’s adverse event reporting system for dietary supplements receives reports for only a small proportion of all adverse events, and the reports it receives are often incomplete. FDA’s adverse event reporting system for dietary supplements is a voluntary postmarketing surveillance system. There is no statutory requirement that dietary supplement manufacturers provide adverse event reports they receive to FDA. 
For example, we found that documents disclosed in a recent court case showed that a manufacturer of a product containing ephedra had received more than 1,200 complaints of adverse events related to its product; FDA told us that it was aware of few, if any, of these reports before the lawsuit was filed. Similarly, a 2001 report by the HHS Office of Inspector General noted that FDA’s reporting system fails to capture sufficient data on medical information, product information, and manufacturer information. For example, FDA told us that 12 percent of the dietary supplement adverse event reports that included consumer age that it has received since 1994 were for senior citizens, but that many of the reports did not contain information about the age of the consumer. FDA inspects relatively few dietary supplement manufacturers and related facilities. FDA told us that the agency inspected 61 manufacturers and repackers of dietary supplements in 1999, and 53 in 2000. In 2001, 80 inspections are planned. The agency does not know precisely how many facilities are operating, because there is no registration requirement. However, FDA estimates that there are more than 1,500 facilities, suggesting that FDA inspects less than 5 percent of facilities annually. FDA officials told us that its inspectors look at sanitation, buildings and facilities, equipment, production, and process controls. In 1997, FDA published an advance notice of proposed rulemaking regarding good manufacturing practice (GMP) in manufacturing, packing, and holding of dietary supplements. In publishing the draft for comment, FDA noted that much of the dietary supplement industry believes GMP regulations are important in establishing standards to ensure that dietary supplements are “safe and properly labeled.” FDA officials have stated that a proposed GMP rule has now been developed and is still under review by the Office of Management and Budget. 
Publication of final GMP regulations will improve FDA’s enforcement capabilities, since DSHEA provides that dietary supplements not manufactured under conditions that meet GMPs would be considered adulterated and unlawful. As part of its consumer protection activities, FTC enforces federal statutes that prohibit misleading and unsubstantiated advertising. In recent years, FTC has joined with other organizations to focus attention on the fraudulent marketing of some anti-aging and other alternative medicine products. In 1997, FTC launched an effort to find companies making questionable claims for health products on the Internet, as well as in other media. This initiative, which later became known as “Operation Cure.All,” primarily involved conducting Internet-based searches to identify Internet sites making unsubstantiated claims that use of their products would prevent, treat, or cure serious diseases and conditions. The searches were conducted with the participation of FDA, CDC, and some state attorneys general and other organizations. Evaluations of “Operation Cure.All” have found that some companies have made changes in their Internet advertising as a result of receiving e-mail alerts from FTC about potentially unsupported advertising claims. In 1997, an estimated 13 percent of notified companies withdrew their claims or Web site, while 10 percent made some changes. In 1998, an estimated 28 percent of notified companies withdrew their claims or Web site, while 10 percent made some changes. By comparison, the percentage of companies that made no changes in both years exceeded 60 percent. In addition, FTC identified for us 15 “Operation Cure.All” cases brought by the FTC against companies and individuals making claims for products or services that were not backed by “competent and reliable” scientific evidence. 
In total, FTC has brought over 30 dietary supplement cases since the agency released guidelines on its approach to substantiation of advertised claims in 1998. A list of relevant cases from FTC enforcement efforts is provided in appendix V. A majority of “Operation Cure.All” cases have been settled administratively, with the companies agreeing to stop making unsupported disease treatment claims in advertising materials; some settlements have also called for consumer redress. For example, FTC sued Lane Labs-USA for representing that its shark cartilage products could cure cancer. The company agreed in June 2000 to stop making these claims and to pay a $550,000 fine to FTC and $450,000 to be used for purchasing shark cartilage and placebo products to be tested in a clinical trial sponsored by NIH. In an ongoing “Operation Cure.All” case involving a company that markets various herbal packages as well as a device known as the “Zapper Electrical Unit,” FTC’s complaint seeks consumer redress and a permanent injunction against false and misleading claims. The fourteen states we contacted varied in their efforts to protect consumers from fraudulent or harmful health products, but in general focused little attention on anti-aging and alternative medicine products. State agencies reported that they receive relatively few complaints regarding these products. However, many officials said that consumers are being harmed in ways that are unlikely to be reported to state agencies and that misleading advertising and questionable health products are serious problems. States have identified a number of questionable health care products, services, and advertising claims that may affect older consumers, and these are listed in appendix VI. States protect consumers from fraudulent or harmful health products through two approaches. The first is enforcement of state consumer protection laws against false or misleading advertising. 
The second is through their public health authority to ensure food, drug, and medical device safety. With some exceptions, the states we contacted take action only if there is a pattern of complaints or an acute health problem associated with a particular substance or device. Seven of the fourteen states we contacted were involved to some degree in monitoring or enforcement activity, and three have ongoing efforts to review advertising, labels, or products to enforce their health and consumer protection laws. In the states we contacted, oversight of anti-aging product advertising has not been a priority for state consumer protection agencies. Although many representatives of state consumer protection agencies we contacted said that misleading advertising of health products targeted to seniors is a serious issue, their agencies have devoted greater resources to larger scale types of fraud such as identity theft and sweepstakes fraud. State laws protecting consumers from false or misleading advertising may be applied to anti-aging and alternative remedies for which complaints have been filed. There is often a multi-tiered approach to resolving consumer complaints. Individual complaints may be filed with an office, usually within the Attorney General’s or Governor’s Office, that facilitates informal resolution between the consumer and the company. Consumers wishing to pursue legal remedies beyond this point are generally referred to a private attorney. In cases in which there are patterns or egregious cases of deceptive advertising, the Attorney General’s consumer complaint division may pursue administrative or legal remedies against the company. Although many state consumer protection agencies are monitoring cases to see if patterns of deceptive advertising are developing that could warrant full investigation in the future, such patterns may be difficult to determine for a number of reasons. 
None of the consumer protection officials we contacted receives a high volume of complaints about health-related products. State consumer protection Web sites often refer consumers to a variety of resources, including the FTC, FDA, and Better Business Bureaus, and thus the agencies are not likely to have a comprehensive picture of all consumer complaints. Most of the consumer protection agencies we contacted could not search their complaint databases for complaints about health-related products, and none was able to provide us with complaint counts. Some states consider the content of advertisements for health products to be a matter for federal authorities, whereas other states specifically regulate such advertising. For example, Ohio’s Consumer Sales Practices Act covers an advertisement’s claims about the sales transaction—such as price or quantity—not the content of statements about a product’s effectiveness. In contrast, Iowa has a special provision in its consumer protection law for additional penalties when false advertising is targeted at seniors. In all but one of the states we contacted, public health officials were either unable to obtain data on adverse health events resulting from anti-aging or alternative health products or have received few, if any, reports of relevant cases in the recent past. Some noted that the health department may learn of an acute event linked to a dietary supplement but that the more subtle forms of harm typically go unreported. With regard to seniors, officials are particularly concerned about supplements that make unsubstantiated claims to cure disease. Officials believe that when a product such as a dietary supplement does not achieve the promised effect, many people simply stop using it or return it to the retailer rather than notify state authorities. Public health laws allow state and local authorities to take action against adulterated, misbranded, or dangerous products. 
Some states have provisions in their food and drug safety laws that incorporate federal standards. Health authorities in the states we contacted are active to varying degrees in regulating questionable health products. In three of the states we contacted, consumer protection, law enforcement, or public health officials routinely review labels and advertising in a variety of media to determine if they are false or misleading. In several other states, authorities have ongoing investigations stemming from consumer complaints. Investigators may contact the company, review documentation that it submits in support of its claims, conduct inspections, and obtain expert analysis of products. Remedies can include restitution for consumers, fines, court orders to change or remove false claims or to prohibit the sale of misbranded products in the state, and seizure of harmful products. The risk of harm to seniors from anti-aging and alternative health products has not been specifically identified as a top public health priority or a leading enforcement target for federal and state regulators. However, evidence demonstrates that many senior citizens use anti-aging products and that consumers who suffer from aging-related health conditions may be at risk of physical and economic harm from some anti-aging and alternative health products, including dietary supplements, that make misleading advertising and labeling claims. The medical literature has identified products that are safe under most conditions but contraindicated for consumers with certain health conditions. Other products, such as St. John’s wort, hold promise as potential treatments for some conditions but are also associated with adverse interactions with some prescription medications. 
Senior citizens may have a higher risk of physical harm from the use of anti-aging alternative medicine products because they have a high prevalence of chronic health conditions and consume a disproportionate share of prescription medications compared to younger adults. FDA, FTC, and NIH gave us technical comments on the portions of a draft of this report that addressed their respective activities. We have incorporated their suggestions where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of Health and Human Services and others who are interested. We will also provide copies to others upon request. If you or your staff have any questions, please contact me at (202) 512-7119. Another contact and major contributors to this report are listed in appendix VII. We began our work by attempting to identify alternative medicine products marketed as anti-aging therapies that present health and economic risks to seniors. We asked experts in this area which products pose the greatest risk to seniors. Most of the responses we received concerned potential problems with dietary supplements (both herbal and specialty supplements), and occasionally some potentially harmful devices. We did not hear widespread concerns regarding alternative medical services. Therefore, our work focused principally on those herbal and specialty supplements and devices that address health conditions related to aging, such as heart disease, memory loss, fatigue, joint health, and cancer. We reviewed scientific literature and talked with medical and scientific experts, trade association representatives, consumer group representatives, individual practitioners, and researchers. 
Our investigation of adverse effects, contraindications, and interactions focused primarily on those supplements that were most commonly used by seniors to address issues of aging as identified in a recent survey by Prevention Magazine. We conducted analyses of the data from this survey to focus on the use of dietary supplements by people aged 65 years or older. We also interviewed officials and reviewed documents from the Food and Drug Administration (FDA), Federal Trade Commission (FTC), and National Institutes of Health (NIH). From FDA, we obtained all adverse event reports from 1994 through 2001 reported by people over 65 years old, as well as all reports for most of the dietary supplements mentioned in our report. We examined other FDA and FTC documents to identify warnings that the agencies have issued against certain products because of concerns about safety, labeling, or advertising. We obtained case information from FTC and FDA to determine estimates of economic harm, as well as to review the agencies’ enforcement efforts. We also interviewed state attorneys general and public health officials in 14 states to examine enforcement efforts at the state level. These states were selected because they were identified by experts as being the most active in their efforts to curb the marketing and sale of health products making questionable claims. (Table 1 lists the organizations we consulted.) We focused our review on those herbal and specialty supplements that a recent survey by Prevention Magazine found were most frequently used by senior citizens for conditions associated with aging. For each of those supplements, we have listed in table 2 the health claims frequently associated with the products, although we have not attempted to validate the merits of any of the claims. 
We also list adverse effects that have been associated with the supplements, conditions for which the supplements might be contraindicated, and prescription medications with which the supplements might have dangerous interactions. The National Center for Complementary and Alternative Medicine (NCCAM) supports research to test the safety and efficacy of a variety of complementary and alternative medicine modalities. Some of this research focuses on health issues that are relevant to senior citizens, such as arthritis and cancer (see table 3). In fiscal year 2000, appropriations for NCCAM totaled $68.3 million. Additional expenditures by other NIH Institutes and Centers brought the agency’s commitment to complementary and alternative medicine to $161 million for fiscal year 2000. In fiscal year 2001, NCCAM’s appropriations increased 29 percent to approximately $89 million. In addition, NCCAM funds a variety of specialized research centers that serve as focal points for initiating and maintaining state-of-the-art multidisciplinary research on complementary and alternative medicine. Some of these focus on issues specifically relevant to older Americans: The center for cardiovascular diseases at the University of Michigan is examining the effect of hawthorn (an herbal supplement) in the treatment of heart failure; the effect of Reiki (a natural energy therapy) on diabetes and cardiovascular autonomic function; and the effect of Qi gong (a Chinese practice that combines movement, meditation, and regulation to enhance the flow of energy) and spirituality and psychosocial factors on wound closure, pain, medication usage, and hospital stay in postoperative cardiac patients. 
The center for neurological disorders at Oregon Health Sciences University is examining the effectiveness of three antioxidant regimens in decreasing multiple sclerosis disease activity, ginkgo biloba in the prevention or delay of cognitive decline in elderly patients, hatha yoga on cognitive and behavioral changes associated with aging and neurological disorders in multiple sclerosis, and vitamin E and ginkgo biloba in reducing oxidative end-products. The center on arthritis at the University of Maryland is investigating the effectiveness of acupuncture for the treatment of osteoarthritis of the knee, the effectiveness of mind-body therapies for fibromyalgia, the effects of electroacupuncture on persistent pain and inflammation, and the mechanism of an herbal combination with immunomodulatory properties. The center on aging at Columbia University is investigating the influence of a macrobiotic diet on endocrine, biochemical, and cardiovascular parameters; whether phytoestrogens influence bone metabolism in postmenopausal women; whether black cohosh (an herbal supplement) reduces the frequency and intensity of menopausal hot flashes and other menopausal symptoms; and the biological activities and mechanisms of a Chinese herbal formula on breast cancer cells. The center for the study of minority aging and cardiovascular disease at the Maharishi University of Management focuses on a form of Ayurvedic Indian medicine that incorporates herbal formulations and meditation in older blacks. Specific studies focus on the basic mechanisms of meditation and cardiovascular disease in older blacks, the effect of transcendental meditation on reducing hypertension, and the effects of herbal antioxidants on cardiovascular disease in older blacks. FDA identified actions it has taken in response to products making illegal claims that were targeted at least in part toward senior citizens. These are listed in table 4. 
FTC identified examples of actions it has taken in response to products with illegal advertisements that were targeted at least in part to senior citizens. These are listed in table 5. State officials we talked with described a number of products used by seniors that were questionable or had questionable advertising and where state action was taken, including the following: Companies marketing therapeutic magnets with unsubstantiated claims that they can cure a variety of diseases, including diabetes and osteoporosis. One state health department official noted that some magnet companies are aware that they can only be sold for general well-being, but in other states they continue to be marketed with health claims. A national mail-order company selling a variety of health products with unsubstantiated promises to cure prostate disease, bladder problems, hair loss, skin discoloration, sexual impotence, and cancer. Authorities in one state have obtained restitution for consumers, civil fines, and court orders to keep this and similar companies from selling fraudulently advertised products, including anti-aging formulas, to its residents. However, the authorities have no means to prevent the company from publishing misleading material and selling questionable products in the rest of the country. A Web-based company advertising a breast-enhancement product to women who have had mastectomies with the claim that it can regenerate lost breast tissue. The company removed its claim after being contacted by a state attorney general’s office. Herbal remedies contaminated with toxic substances such as heavy metals. Authorities in one state have analyzed products and found cases of adulteration in imported ingredients used in traditional Chinese medicine. Herbal supplements “spiked” with prescription drugs or synthetic ingredients. In one case a diabetic had to seek medical treatment after taking an herbal product that contained a prescription diabetes drug. 
Another product marketed as an “all-herbal” weight-loss formula using ephedra was found to contain ephedrine, a synthetic substance not listed on the label. Concerns about the potential health hazards associated with ephedra have prompted several states to issue regulations restricting its dosage or sale. Testing blood and hair samples to diagnose nutritional deficiencies or illnesses to induce people to buy a particular dietary supplement as treatment. In one state, there is an ownership link between the out-of-state laboratories doing the analyses and the manufacturer of the dietary supplements. Carolyn Feis Korman, Anne Montgomery, Mark Patterson, Roseanne Price, and Suzanne Rubins also made major contributions to this report. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. 
To have GAO E-mail this list to you every afternoon, go to our home page and complete the easy-to-use electronic order form found under “To Order GAO Products.” Web site: www.gao.gov/fraudnet/fraudnet.htm, E-mail: [email protected], or 1-800-424-5454 (automated answering system).
Evidence from the medical literature shows that a variety of frequently used dietary supplements marketed as anti-aging therapies can have serious health consequences for senior citizens. Some seniors have underlying diseases or health conditions that make the use of the product medically inadvisable, and some supplements can interact with medications that are being taken concurrently. Furthermore, studies have found that products sometimes contain harmful contaminants or much more of an active ingredient than is indicated on the label. Unproven anti-aging and alternative medicine products also pose an economic risk to seniors. The Food and Drug Administration (FDA) and the Federal Trade Commission (FTC) have identified several products that make advertising or labeling claims with insufficient substantiation, some costing consumers hundreds or thousands of dollars apiece. Federal and state agencies have efforts under way to protect consumers of these products. FDA and FTC sponsor programs and provide educational materials for senior citizens to help them avoid health fraud. At the state level, agencies are working to protect consumers of health products by enforcing state consumer protection and public health laws, although anti-aging and alternative products are receiving limited attention. GAO summarized this report in testimony before Congress (GAO-01-1139T).
You are an expert at summarizing long articles. Proceed to summarize the following text: For each fiscal year, the District is required under P.L. 103-373 to develop and submit to Congress a statement of measurable and objective performance goals for all significant activities of the District government. After each fiscal year, the District is to report on its performance. The District’s performance report is to include: a statement of the actual level of performance achieved compared to each of the goals stated in the performance accountability plan for the year, the title of the District of Columbia management employee most directly responsible for the achievement of each goal and the title of the employee’s immediate supervisor or superior, and a statement of the status of any court orders applicable to the District of Columbia government and the steps taken by the government to comply with such orders. Last year, on two occasions, we highlighted the challenges faced and progress made by the District in implementing a sound performance management system. In April 2000, we reported that the District’s first performance report, covering fiscal year 1999, lacked some of the required information. Specifically, the performance report did not contain (1) performance data for most of its goals, (2) the titles of managers and their supervisors responsible for each of the goals, and (3) information on any of the court orders applicable to the District government during fiscal year 1999. Also, it did not cover all significant District activities. In October 2000, we testified before the Subcommittee on Oversight of Government Management, Restructuring and the District of Columbia, Senate Committee on Governmental Affairs, that the District had made progress in defining clear goals and desired outcomes through its strategic planning efforts. 
However, we also said that there were still opportunities to more fully integrate various aspects of its planning process and ensure that performance information was sufficiently credible for decision-making and accountability. Our objectives were to ascertain the extent to which the District’s fiscal year 2000 report was useful for understanding the District’s performance in fiscal year 2000 and the degree to which it complies with its statutory reporting requirements. To determine if the performance assessment itself could provide a useful characterization of the District’s fiscal year 2000 performance, we conducted a process evaluation. This included identifying the components of the process used to develop goals and measures, the agencies included, when the goals were revised, and whether the final goals were developed in a timely manner to allow valid performance assessment during fiscal year 2000. To determine if the report complied with reporting requirements, we compared the report contents to the legislatively mandated requirements. To acquire additional information and verify our findings, we interviewed a key District official responsible for coordinating the performance assessment. We conducted our work from March through May 2001 at the Office of the Mayor of the District of Columbia in accordance with generally accepted government auditing standards. We did not verify the accuracy or reliability of the performance data included in the District’s report. We provided a draft of this report to the Deputy Mayor/City Administrator of the District of Columbia for review and comment. Comments are reflected in the agency comments section of this report. In accordance with requirements established in P.L. 103-373, we consulted with a representative for the Director of the Office of Management and Budget concerning our review. 
The District’s performance report reflects a performance management process that led to goals continually changing throughout fiscal year 2000 as the District worked to improve the process. The performance plan (initial goals) for fiscal year 2000 was submitted to Congress in June 1999 along with the District’s budget. The District subsequently implemented what became an iterative approach for developing new goals and revising existing goals for about 20 “critical” agencies. That is, in addition to establishing initial performance goals, the District developed (1) agency strategic plans, (2) performance contracts, and (3) a Mayor’s Scorecard for each of the critical agencies. The performance goals generated as part of these efforts were developed during the period March 1999 through March 2000. These initiatives led to the development of the set of goals that the District considered as its final fiscal year 2000 goals for each of the critical agencies. For example, the Department of Health extensively revised its initial five goals. After going through various planning exercises, the department eliminated three of the initial goals, combined the remaining two goals under one broader final goal, and added seven completely new final goals. The initial goals of the noncritical agencies changed during the fiscal year, but without going through the same process as that for the critical agencies. The District official responsible for coordinating the fiscal year 2000 performance assessment estimated that between 30 and 40 percent of the noncritical agencies’ goals were revised over the fiscal year 2000 performance assessment period. Although some goals were finalized earlier, the set of final fiscal year 2000 goals for all agencies whose performance was assessed was submitted to Congress along with the District’s fiscal year 2001 budget in June 2000. One result of this process to redefine goals was that 54 percent of the initial goals were not used as final goals. 
For example, the Department of Motor Vehicles’ goal to seek out regular feedback on the level and quality of service was not used as a final fiscal year 2000 goal. Although the department developed several final goals related to improving customer service, such as wait times for vehicle registration, it did not continue the goal to obtain feedback directly from its customers. No explanation was provided in the report to explain why the goal was dropped or whether it had been achieved. Many of the remaining 46 percent of the original goals were significantly revised by the time the District issued its report, making it difficult to determine the degree to which the original goals were achieved. District officials have indicated they plan to use an approach similar to fiscal year 2000’s for determining performance goals and measures in succeeding years. That is, they plan to define each fiscal year’s goals and measures during the fiscal year in which performance is being assessed. They expect that performance goals and measures will not stabilize into a consistent set until fiscal year 2003. The District’s changing goals are reflected in its fiscal year 2000 Performance Accountability Report, which provides information for three sets of performance goals. It provides information regarding the disposition of initial fiscal year 2000 goals. That is, the report indicates which goals made it into the final set used to assess fiscal year 2000 performance and which of the remaining initial goals, which were not considered by the District to be part of its final fiscal year 2000 goals, were nevertheless achieved. The second set of performance goals addressed in the report consists of those developed for the Mayor’s Scorecard. The goals in the Mayor’s Scorecard were developed to address priorities set by residents at the District’s Citizen Summit and the Neighborhood Action Forum. 
The last set of goals addressed in the report are the District’s final goals, which were included with the District’s fiscal year 2001 budget submittal to Congress in June 2000. The lack of information on the extensive revisions that the District made to its performance goals, measures, and plans limits the usefulness of the subsequent performance report for purposes of oversight, transparency, accountability, and decision-making. Our review of federal agencies’ efforts to implement GPRA has shown that while it can be beneficial to periodically reassess and revise goals, it is also important that annual performance plans and reports provide clear information about the reasons for these changes when they occur. This information helps provide assurance that changes were intended to improve performance management rather than obfuscate weak performance; that is, that the changes were free from bias. Consistent with our findings, OMB’s guidance to federal agencies on the submission of GPRA plans and reports states that goals should be periodically modified as necessary to reflect (1) changes in programs, (2) agency capability to collect and report information, and (3) the importance and usefulness of any goal. All three of these factors are valid reasons to change goals. However, the District’s performance report does not indicate if any of these or other factors were a basis for the extensive revisions made to goals during fiscal year 2000. In addition, the report does not discuss steps taken to ensure that reported performance data were complete, that is, represented the entire fiscal year. For example, the Department of Parks and Recreation added a new goal to improve the safety, cleanliness, and accessibility of its facilities. However, it is not clear whether data on the District’s efforts to address safety findings (within 48 hours) were collected for the entire fiscal year. 
According to an official responsible for coordinating the performance assessment, the District cannot ensure that the reported data represented the entire year’s performance for any of the agencies; the official indicated one would have to go back and check with each individual agency to determine whether they were complete. The concerns we raise are consistent with problems identified by the District Office of the Inspector General in a report published in March 2001. The Inspector General conducted a review to, in part, verify the data supporting the reported achievements regarding the fiscal year 2000 performance contracts and the Mayor’s Scorecard goals. One of the Inspector General’s conclusions was that agencies did not maintain records and other supporting documentation for the accomplishments they reported and that the Office of the City Administrator did not provide sufficient guidance to address that problem. In response to the Inspector General’s finding, the Office of the City Administrator said it recognized the need for standard procedures, and it plans to issue performance review guidelines by the end of the summer 2001. Finally, regarding initial goals that were not carried over to the final set used to assess fiscal year 2000 performance, many are identified in the performance report as having been achieved. However, none of these goals had performance data provided for them. Therefore, the specific performance level at which these goals were met cannot be determined, that is, whether successful performance was marginal or otherwise. For example, the District had a goal of improving the response time for all legal services provided by the Office of the Corporation Counsel. The District’s report indicates that the goal was achieved, but because no data were provided, it is impossible to know precisely how and to what extent the agency improved its response time. 
The District’s performance report does not cover all significant District activities as required; thus, the performance report does not provide a comprehensive snapshot of the District government’s performance. For example, the report does not cover the performance of the District’s public schools, which account for more than 15 percent of the District’s budget. More important, the schools are responsible for a core local government function—providing primary education. The District’s performance report acknowledges this critical gap in coverage and says that subsequent reports beginning with the fiscal year 2001 report will more fully meet the statutory requirements. The District’s fiscal year 2000 Performance Accountability Report improved in two areas of compliance compared to last year’s report. First, the report provides the titles of program managers and their supervisors. The performance report is to include the title of the District of Columbia management employee most directly responsible for the achievement of each goal and the title of the employee’s immediate supervisor or superior. The District’s performance report provides the information for the final goals and goals contained in the Mayor’s Scorecard. This is an improvement over last year’s report, which contained no such information. Second, the performance report also includes information concerning court orders assigned to the government of the District of Columbia during the year and the steps taken by the government to comply with such orders. Specifically, the District’s performance report provides the status for each of the 12 court orders by describing and identifying whether or not they were in effect in fiscal year 2000 and fiscal year 2001. For example, in the case of Joy Evans v. DC, the court required the District to improve the habilitation, care, and treatment for mentally handicapped residents. 
The report indicates that this court order was in effect in fiscal year 2000 and will continue to be in effect in fiscal year 2001. The report also provides information on the actions taken to comply with the orders. For example, in the case of Twelve John Does v. DC, the report clearly identifies the actions taken to address issues at the District’s Central Detention Facility. The report states that cell doors are being repaired, ventilation systems are being replaced, environmental matters are being corrected, and additional staff are being added to address security needs. In addition, the report states that the facility is scheduled to close on or before December 31, 2001. The information provided by the District on court orders is an improvement over last year when, due to an oversight in compiling its fiscal year 1999 performance report, the District failed to report on any of the applicable court orders. The District’s fiscal year 2000 performance report is an improvement over the previous year’s in that it meets some of the statutory requirements that the previous report did not. However, the extensive changes that the District made to its fiscal year 2000 performance goals during the fiscal year undermine the usefulness of the resulting report because the District did not include critical information needed by Congress and other stakeholders. Such information, identifying how, when, and why specific goals were altered and the decision-making and accountability implications of those changes, is important to Congress and others so that they can have confidence in the validity and completeness of the reported performance data. In addition, the report does not cover all significant activities of the District government. Sustained progress is needed to address the critical performance and other management challenges that the District faces. 
The District recognizes the shortcomings with its performance management efforts and has stated a commitment to addressing them. The effective implementation of the various initiatives underway in the District is vital to the success of the District’s efforts to create a more focused, results-oriented approach to management and decision-making—an approach that is based on clear goals, sound performance and cost information, and a budget process that uses this information in allocating resources. To further strengthen the District’s performance management process and provide more useful information to its citizens and Congress, we recommend that the Mayor of the District of Columbia: Accelerate efforts to settle upon a set of results-oriented goals that are more consistently reflected in its various planning, reporting, and accountability efforts. Provide specific information in its performance reports for each goal that changed, including a description of how, when, and why the change occurred. In addition, the District should identify the impact of the change on the performance assessment itself, including data collection and measurement for the reporting period. Include in each year’s accountability report the performance of all significant activities of the District. On May 31, 2001, we received e-mail comments on our draft report on behalf of the Deputy Mayor/City Administrator. He stated that overall, he concurred with our findings, appreciated the context in which they were presented, and acknowledged that additional work is needed to make the District’s performance management system serve the needs of its citizens and Congress. The Deputy Mayor acknowledged that the extent of changes and the lack of discussion in the performance report about why specific goals were changed hinder comparison of the District’s performance against its initial goals. 
In addition, he said that using the goals that resulted from the development of agency strategic plans was more representative of the District’s performance during fiscal year 2000 than the initial goals. We agree with both of these points. Our central point, however, was that given the timing and extent of goal revision, and the absence of a discussion about those changes, the usefulness of the report for understanding performance, as measured against the final goals, is limited. The Deputy Mayor said that the information we reported on the timing of the final set of agency goals appears to exaggerate the amount of time that agency goals were in a state of flux—leading to the impression that all of the District’s goals were changing until June 2000. We report that goals for the critical agencies were finalized by March 2000 and that goals for other (noncritical) agencies were revised at other times; the District could not specify when these goals were finalized. It could only suggest that 30 to 40 percent of these agencies’ goals were revised. However, we revised our report to reflect that although some goals were finalized earlier, they were not submitted to Congress until June 2000. In response to our recommendation that the District accelerate efforts to settle upon a consistent set of goals, the Deputy Mayor said that the District anticipates consolidating its goals during the fiscal year 2003 planning, budgeting, and reporting cycle. He further stated that goals for fiscal years 2001 and 2002 are likely to change as the District updates its agency-specific and citywide strategic plans in the summer of 2001. As we note in this report, it can be beneficial to periodically reassess and revise goals. However, it is critical that the District make every effort to accelerate the process of settling upon its final goals early in a fiscal year to ensure that the performance assessment and report are meaningful. 
The Deputy Mayor concurs with our recommendation that specific information should be provided in the District’s performance reports for each goal that changed. The Deputy Mayor also concurs with our recommendation to include in each year’s accountability report the performance of all significant activities of the District. He said that the District will seek to expand the coverage of its fiscal year 2001 report to more fully comply with its mandated reporting requirements. He also stated that although the District cannot compel independent agencies not under the authority of the Mayor (including the D.C. Public Schools) to report on performance, it plans to work with them in developing performance information. We are sending copies of this report to the Mayor of the District of Columbia. Copies will be made available to others upon request. Key contributors to this report were Kathy Cunningham, Chad Holmes, Boris Kachura, and Bill Reinsberg. Please contact me or Mr. Kachura on (202) 512-6806 if you have any questions on the material in this report. The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Orders by visiting: Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders by phone: (202) 512-6000 fax: (202) 512-6061 TDD (202) 512-2537 Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. 
The District of Columbia's fiscal year 2000 performance report is an improvement in that it meets some of the statutory requirements that the previous year's report did not. However, the extensive changes that the District made to its fiscal year 2000 performance goals during the fiscal year undermine the report's usefulness because the District did not include critical information needed by Congress and other stakeholders. Such information, identifying how, when, and why specific goals were altered and the decision-making and accountability implications of those changes, is important to Congress and others so that they can have confidence in the validity and completeness of the reported performance data. Also, the report does not cover all significant activities of the District government. Sustained progress is needed to address the critical performance and other management challenges that the District faces. The District recognizes the shortcomings with its performance management efforts and has stated a commitment to addressing them. The effective implementation of the various initiatives underway in the District is vital to the success of the District's efforts to create a more focused, results-oriented approach to management and decision-making--an approach that is based on clear goals, sound performance and cost information, and a budget process that uses this information in allocating resources.
You are an expert at summarizing long articles. Proceed to summarize the following text: Since the early 1990s, the explosion in computer interconnectivity, most notably growth in the use of the Internet, has revolutionized the way organizations conduct business, making communications faster and access to data easier. However, this widespread interconnectivity has increased the risks to computer systems and, more importantly, to the critical operations and infrastructures that these systems support, such as telecommunications, power distribution, national defense, and essential government services. Malicious attacks, in particular, are a growing concern. The National Security Agency has determined that foreign governments already have or are developing computer attack capabilities, and that potential adversaries are developing a body of knowledge about U.S. systems and methods to attack them. In addition, reported incidents have increased dramatically in recent years. Accordingly, there is a growing risk that terrorists or hostile foreign states could severely damage or disrupt national defense or vital public operations through computer-based attacks on the nation’s critical infrastructures. Since 1997, in reports to the Congress, we have designated information security a governmentwide high-risk area. Our most recent report in this regard, issued in January, noted that, while efforts to address the problem have gained momentum, federal assets and operations continue to be highly vulnerable to computer-based attacks. To develop a strategy to reduce such risks, in 1996, the President established a Commission on Critical Infrastructure Protection. 
In October 1997, the commission issued its report, stating that a comprehensive effort was needed, including “a system of surveillance, assessment, early warning, and response mechanisms to mitigate the potential for cyber threats.” The report said that the Federal Bureau of Investigation (FBI) had already begun to develop warning and threat analysis capabilities and urged it to continue in these efforts. In addition, the report noted that the FBI could serve as the preliminary national warning center for infrastructure attacks and provide law enforcement, intelligence, and other information needed to ensure the highest quality analysis possible. In May 1998, PDD 63 was issued in response to the commission’s report. The directive called for a range of actions intended to improve federal agency security programs, establish a partnership between the government and the private sector, and improve the nation’s ability to detect and respond to serious computer-based attacks. The directive established a National Coordinator for Security, Infrastructure Protection, and Counter-Terrorism under the Assistant to the President for National Security Affairs. Further, the directive designated lead agencies to work with private-sector entities in each of eight industry sectors and five special functions. For example, the Department of the Treasury is responsible for working with the banking and finance sector, and the Department of Energy is responsible for working with the electric power industry. PDD 63 also authorized the FBI to expand its NIPC, which had been originally established in February 1998. 
The directive specifically assigned the NIPC, within the FBI, responsibility for providing comprehensive analyses on threats, vulnerabilities, and attacks; issuing timely warnings on threats and attacks; facilitating and coordinating the government’s response to cyber incidents; providing law enforcement investigation and response; monitoring reconstitution of minimum required capabilities after an infrastructure attack; and promoting outreach and information sharing. PDD 63 assigns the NIPC responsibility for developing analytical capabilities to provide comprehensive information on changes in threat conditions and newly identified system vulnerabilities as well as timely warnings of potential and actual attacks. This responsibility requires obtaining and analyzing intelligence, law enforcement, and other information to identify patterns that may signal that an attack is underway or imminent. Since its establishment in 1998, the NIPC has issued a variety of analytical products, most of which have been tactical analyses pertaining to individual incidents. These analyses have included (1) situation reports related to law enforcement investigations, including denial-of-service attacks that affected numerous Internet-based entities, such as eBay and Yahoo and (2) analytical support of a counterintelligence investigation. In addition, the NIPC has issued a variety of publications, most of which were compilations of information previously reported by others with some NIPC analysis. Strategic analysis to determine the potential broader implications of individual incidents has been limited. Such analysis looks beyond one specific incident to consider a broader set of incidents or implications that may indicate a potential threat of national importance. Identifying such threats assists in proactively managing risk, including evaluating the risks associated with possible future incidents and effectively mitigating the impact of such incidents. 
Three factors have hindered the NIPC’s ability to develop strategic analytical capabilities. First, there is no generally accepted methodology for analyzing strategic cyber-based threats. For example, there is no standard terminology, no standard set of factors to consider, and no established thresholds for determining the sophistication of attack techniques. According to officials in the intelligence and national security community, developing such a methodology would require an intense interagency effort and dedication of resources. Second, the NIPC has sustained prolonged leadership vacancies and does not have adequate staff expertise, in part because other federal agencies have not provided the originally anticipated number of detailees. For example, as of the close of our review in February, the position of Chief of the Analysis and Warning Section, which was to be filled by the Central Intelligence Agency, had been vacant for about half of the NIPC’s 3-year existence. In addition, the NIPC had been operating with only 13 of the 24 analysts that NIPC officials estimate are needed to develop analytical capabilities. Third, the NIPC did not have industry-specific data on factors such as critical system components, known vulnerabilities, and interdependencies. Under PDD 63, such information is to be developed for each of eight industry segments by industry representatives and the designated federal lead agencies. However, at the close of our work in February, only three industry assessments had been partially completed, and none had been provided to the NIPC. To provide a warning capability, the NIPC established a Watch and Warning Unit that monitors the Internet and other media 24 hours a day to identify reports of computer-based attacks. As of February, the unit had issued 81 warnings and related products since 1998, many of which were posted on the NIPC’s Internet web site. 
While some warnings were issued in time to avert damage, most of the warnings, especially those related to viruses, pertained to attacks underway. The NIPC’s ability to issue warnings promptly is impeded because of (1) a lack of a comprehensive governmentwide or nationwide framework for promptly obtaining and analyzing information on imminent attacks, (2) a shortage of skilled staff, (3) the need to ensure that the NIPC does not raise undue alarm for insignificant incidents, and (4) the need to ensure that sensitive information is protected, especially when such information pertains to law enforcement investigations underway. However, I want to emphasize a more fundamental impediment. Specifically, evaluating the NIPC’s progress in developing analysis and warning capabilities is difficult because the federal government’s strategy and related plans for protecting the nation’s critical infrastructures from computer-based attacks, including the NIPC’s role, are still evolving. The entities involved in the government’s critical infrastructure protection efforts have not shared a common interpretation of the NIPC’s roles and responsibilities. Further, the relationships between the NIPC, the FBI, and the National Coordinator for Security, Infrastructure Protection, and Counter-Terrorism at the National Security Council have been unclear regarding who has direct authority for setting NIPC priorities and procedures and providing NIPC oversight. In addition, the NIPC’s own plans for further developing its analytical and warning capabilities were fragmented and incomplete. As a result, there were no specific priorities, milestones, or program performance measures to guide NIPC actions or provide a basis for evaluating its progress. 
The administration is currently reviewing the federal strategy for critical infrastructure protection that was originally outlined in PDD 63, including provisions related to developing analytical and warning capabilities that are currently assigned to the NIPC. On May 9, the White House issued a statement saying that it was working with federal agencies and private industry to prepare a new version of a “national plan for cyberspace security and critical infrastructure protection” and reviewing how the government is organized to deal with information security issues. In our report, we recommend that, as the administration proceeds, the Assistant to the President for National Security Affairs, in coordination with pertinent executive agencies, establish a capability for strategic analysis of computer-based threats, including developing related methodology, acquiring staff expertise, and obtaining infrastructure data; require development of a comprehensive data collection and analysis framework and ensure that national watch and warning operations for computer-based attacks are supported by sufficient staff and resources; and clearly define the role of the NIPC in relation to other government and private-sector entities. PDD 63 directed the NIPC to provide the principal means of facilitating and coordinating the federal government’s response to computer-based incidents. In response the NIPC undertook efforts in two major areas: providing coordination and technical support to FBI investigations and establishing crisis management capabilities. First, the NIPC provided valuable coordination and technical support to FBI field offices, which established special squads and teams and one regional task force in its field offices to address the growing number of computer crime cases. 
The NIPC supported these investigative efforts by (1) coordinating investigations among FBI field offices, thereby bringing a national perspective to individual cases, (2) providing technical support in the form of analyses, expert assistance for interviews, and tools for analyzing and mitigating computer-based attacks, and (3) providing administrative support to NIPC field agents. For example, the NIPC produced over 250 written technical reports during 1999 and 2000, developed analytical tools to assist in investigating and mitigating computer-based attacks, and managed the procurement and installation of hardware and software tools for the NIPC field squads and teams. While these efforts benefited investigative efforts, FBI and NIPC officials told us that increased computer capacity and data transmission capabilities would improve their ability to promptly analyze the extremely large amounts of data that are associated with some cases. In addition, FBI field offices were not yet providing the NIPC with the comprehensive information that NIPC officials say is needed to facilitate prompt identification and response to cyber incidents. According to field office officials, some information on unusual or suspicious computer-based activity had not been reported because it did not merit opening a case and was deemed to be insignificant. To address this problem, the NIPC established new performance measures related to reporting. Second, the NIPC developed crisis management capabilities to support a multiagency response to the most serious incidents from the FBI’s Washington, D.C., Strategic Information Operations Center. From 1998 through early 2001, seven crisis action teams had been activated to address potentially serious incidents and events, such as the Melissa virus in 1999 and the days surrounding the transition to the year 2000, and related procedures have been formalized. 
In addition, the NIPC coordinated development of an emergency law enforcement plan to guide the response of federal, state, and local entities. To help ensure an adequate response to the growing number of computer crimes, we recommend in our report that the Attorney General, the FBI Director, and the NIPC Director take steps to (1) ensure that the NIPC has access to needed computer and communications resources and (2) monitor implementation of new performance measures to ensure that field offices fully report information on potential computer crimes to the NIPC. Information sharing and coordination among private-sector and government organizations are essential for thoroughly understanding cyber threats and quickly identifying and mitigating attacks. However, as we testified in July 2000, establishing the trusted relationships and information-sharing protocols necessary to support such coordination can be difficult. NIPC success in this area has been mixed. For example, the InfraGard Program, which provides the FBI and the NIPC with a means of securely sharing information with individual companies, had grown to about 500 member organizations as of January 2001 and was viewed by the NIPC as an important element in building trust relationships with the private sector. NIPC officials recently told us that InfraGard membership has continued to increase. However, of the four information sharing and analysis centers that had been established as focal points for infrastructure sectors, a two-way, information-sharing partnership with the NIPC had developed with only one—the electric power industry. The NIPC’s dealings with two of the other three centers primarily consisted of providing information to the centers without receiving any in return, and no procedures had been developed for more interactive information sharing. 
The NIPC’s information-sharing relationship with the fourth center was not covered by our review because the center was not established until mid-January 2001, shortly before the close of our work. Similarly, the NIPC and the FBI have made only limited progress in developing a database of the most important components of the nation’s critical infrastructures—an effort referred to as the Key Asset Initiative. While FBI field offices had identified over 5,000 key assets, at the time of our review, the entities that own or control the assets generally had not been involved in identifying them. As a result, the key assets recorded may not be the ones that infrastructure owners consider to be the most important. Further, the Key Asset Initiative was not being coordinated with other similar federal efforts at the Departments of Defense and Commerce. In addition, the NIPC and other government entities had not developed fully productive information-sharing and cooperative relationships. For example, federal agencies have not routinely reported incident information to the NIPC, at least in part because guidance provided by the federal Chief Information Officers Council, which is chaired by the Office of Management and Budget, directs agencies to report such information to the General Services Administration’s Federal Computer Incident Response Capability. Further, NIPC and Defense officials agreed that their information-sharing procedures needed improvement, noting that protocols for reciprocal exchanges of information had not been established. In addition, the expertise of the U.S. Secret Service regarding computer crime had not been integrated into NIPC efforts. The NIPC has been more successful in providing training on investigating computer crime to government entities, which is an effort that it considers an important component of its outreach efforts. 
From 1998 through 2000, the NIPC trained about 300 individuals from federal, state, local, and international entities other than the FBI. In addition, the NIPC has advised several foreign governments that are establishing centers similar to the NIPC. To improve information sharing, we recommend in our report that the Assistant to the President for National Security Affairs direct federal agencies and encourage the private sector to better define the types of information necessary and appropriate to exchange in order to combat computer-based attacks and to develop procedures for performing such exchanges, initiate development of a strategy for identifying assets of national significance that includes coordinating efforts already underway, and resolve discrepancies in requirements regarding computer incident reporting by federal agencies. In our report, we also recommend that the Attorney General task the FBI Director to formalize information-sharing relationships between the NIPC and other federal entities and industry sectors and ensure that the Key Asset Initiative is integrated with other similar federal activities.
The National Infrastructure Protection Center (NIPC) is an important element of the U.S. strategy to protect the nation's infrastructures from hostile attacks, especially computer-based attacks. This testimony discusses the key findings of a GAO report on NIPC's progress in developing national capabilities for analyzing cyber threats and vulnerability data and issuing warnings, enhancing its capabilities for responding to cyber attacks, and establishing information-sharing relationships with governments and private-sector entities. GAO found that progress in developing the analysis, warning, and information-sharing capabilities has been mixed. NIPC began various critical infrastructure protection efforts that have laid the foundation for future governmentwide efforts. NIPC has also provided valuable support and coordination related to investigating and otherwise responding to attacks on computers. However, the analytical and information-sharing capabilities that are needed to protect the nation's critical infrastructures have not yet been achieved, and NIPC has developed only limited warning capabilities. An underlying contributor to the slow progress is that the NIPC's roles and responsibilities have not been fully defined and are not consistently interpreted by other entities involved in the government's broader critical infrastructure protection strategy. This testimony summarized an April report (GAO-01-323).
You are an expert at summarizing long articles. Proceed to summarize the following text: In fiscal year 2009, the federal government spent over $4 billion specifically to improve the quality of our nation’s 3 million teachers through numerous programs across the government. Teacher quality can be enhanced through a variety of activities, including training, recruitment, and curriculum and assessment tools. In turn, these activities can influence student learning and ultimately improve the global competitiveness of the American workforce in a knowledge-based economy. Federal efforts to improve teacher quality have led to the creation and expansion of a variety of programs across the federal government. However, there is no governmentwide strategy to minimize fragmentation, overlap, or potential duplication among these programs. Specifically, GAO identified 82 distinct programs designed to help improve teacher quality, either as a primary purpose or as an allowable activity, administered across 10 federal agencies. Many of these programs share similar goals. For example, 9 of the 82 programs support improving the quality of teaching in science, technology, engineering, and mathematics (STEM subjects) and these programs alone are administered across the Departments of Education, Defense, and Energy; the National Aeronautics and Space Administration; and the National Science Foundation. Further, in fiscal year 2010, the majority (53) of the programs GAO identified supporting teacher quality improvements received $50 million or less in funding and many have their own separate administrative processes. The proliferation of programs has resulted in fragmentation that can frustrate agency efforts to administer programs in a comprehensive manner, limit the ability to determine which programs are most cost effective, and ultimately increase program costs. 
For example, eight different Education offices administer over 60 of the federal programs supporting teacher quality improvements, primarily in the form of competitive grants. Education officials believe that federal programs have failed to make significant progress in helping states close achievement gaps between schools serving students from different socioeconomic backgrounds, because, in part, federal programs that focus on teaching and learning of specific subjects are too fragmented to help state and district officials strengthen instruction and increase student achievement in a comprehensive manner. While Education officials noted, and GAO concurs, that a mixture of programs can target services to underserved populations and yield strategic innovations, the current programs are not structured in a way that enables educators and policymakers to identify the most effective practices to replicate. According to Education officials, it is typically not cost-effective to allocate the funds necessary to conduct rigorous evaluations of small programs; therefore, small programs are unlikely to be evaluated. Finally, it is more costly to administer multiple separate federal programs because each program has its own policies, applications, award competitions, reporting requirements, and, in some cases, federal evaluations. While all of the 82 federal programs GAO identified support teacher quality improvement efforts, several overlap in that they share more than one key program characteristic. For example, teacher quality programs may overlap if they share similar objectives, serve similar target groups, or fund similar activities. GAO previously reported that 23 of the programs administered by Education in fiscal year 2009 had improving teacher quality as a specific focus, which suggested that there may be overlap among these and other programs that have teacher quality improvements as an allowable activity. 
When looking across a broader set of criteria, GAO found that 14 of the programs administered by Education overlapped with another program with regard to allowable activities as well as shared objectives and target groups (see fig. 1). For example, the Transition to Teaching program and Teacher Quality Partnership Grant program can both be used to fund similar teacher preparation activities through institutions of higher education for the purpose of helping individuals from nonteaching fields become qualified to teach. Although there is overlap among these programs, several factors make it difficult to determine whether there is unnecessary duplication. First, when similar teacher quality activities are funded through different programs and delivered by different entities, some overlap can occur unintentionally, but is not necessarily wasteful. For example, a local school district could use funds from the Foreign Language Assistance program to pay for professional development for a teacher who will be implementing a new foreign language course, and this teacher could also attend a summer seminar on best practices for teaching the foreign language at a Language Resource Center. Second, by design, individual teachers may benefit from federally funded training or financial support at different points in their careers. Specifically, the teacher from this example could also receive teacher certification through a program funded by the Teachers for a Competitive Tomorrow program. Further, both broad and narrowly targeted programs exist simultaneously, meaning that the same teacher who receives professional development funded from any one or more of the above three programs might also receive professional development that is funded through Title I, Part A of ESEA. The actual content of these professional development activities may differ though, since the primary goal of each program is different. 
In this example, it would be difficult to know whether the absence of any one of these programs would make a difference in terms of the teacher’s ability to teach the new language effectively. In addition, our larger body of work on federal education programs has also found a wide array of programs with similar objectives, target populations, and services across multiple federal agencies. This includes a number of efforts to catalogue and determine how much is spent on a wide variety of federally funded education programs. For example: In 2010, we reported that the federal government provided an estimated $166.9 billion over the 3-year period during fiscal years 2006 to 2008 to administer 151 different federal K-12 and early childhood education programs. In 2005, we identified 207 federal education programs that support science, technology, engineering, and mathematics (STEM) administered by 13 federal civilian agencies. In past work, GAO and Education’s Inspector General have concluded that improved planning and coordination could help Education better leverage expertise and limited resources, and to anticipate and develop options for addressing potential problems among the multitude of programs it administers. Generally, GAO has reported that uncoordinated program efforts can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. 
GAO identified key practices that can help enhance and sustain collaboration among federal agencies which include establishing mutually reinforcing or joint strategies to achieve the outcome; identifying and addressing needs by leveraging resources; agreeing upon agency roles and responsibilities; establishing compatible policies, procedures, and other means to operate across agency boundaries; developing mechanisms to monitor, evaluate, and report on the results of collaborative efforts; reinforcing agency accountability for collaborative efforts through agency plans and reports; and reinforcing individual accountability for collaborative efforts through agency performance management systems. In 2009, GAO recommended that the Secretary of Education work with other agencies as appropriate to develop a coordinated approach for routinely and systematically sharing information that can assist federal programs, states, and local providers in achieving efficient service delivery. Education has established working groups to help develop more effective collaboration across Education offices, and has reached out to other agencies to develop a framework for sharing information on some teacher quality activities, but it has noted that coordination efforts do not always prove useful and cannot fully eliminate barriers to program alignment, such as programs with differing definitions for similar populations of grantees, which create an impediment to coordination. However, given the large number of teacher quality programs and the extent of overlap, it is unlikely that improved coordination alone can fully mitigate the effects of the fragmented and overlapping federal effort. In our work we have identified multiple barriers to collaboration, including the conflicting missions of agencies; challenges reaching consensus on priorities; and incompatible procedures, processes, data, and computer systems. 
As this Subcommittee considers its annual spending priorities, it may be an opportune time to consider options for addressing fragmentation and overlap among federal teacher quality programs and what is known about how well these programs are achieving their objectives. As you consider options for how to address fragmentation, overlap, and potential duplication, I would like to highlight three approaches for you to consider: 1. enhancing program evaluations and performance information; 2. fostering coordination and strategic planning for program areas that span multiple federal agencies; and 3. consolidating existing programs. Information about the effectiveness of programs can help guide policymakers and program managers in making tough decisions about how to prioritize the use of scarce resources and improve the efficiency of existing programs. However, there can be many challenges to obtaining this information. For example, it may not be cost-effective to allocate the funds necessary to conduct rigorous evaluations of the many small programs and, as a result, these programs are unlikely to be evaluated. As we have reported, many programs, especially smaller programs, have not been evaluated, which can limit the ability of Congress to make informed decisions about which programs to continue, expand, modify, consolidate, or eliminate. For example: In 2009, we also reported that while evaluations have been conducted, or are under way, for about two-fifths of the 23 teacher quality programs we identified, little is known about the extent to which most programs are achieving their desired results. In 2010, GAO reported that there were 151 different federal K-12 and early childhood education programs but that more than half of these programs have not been evaluated, including 8 of the 20 largest programs, which together account for about 90 percent of total funding for these programs. 
Recognizing the importance of program evaluations, as part of its high priority performance goals in its 2011 budget and performance plan, Education has proposed implementation of a comprehensive approach to inform its policies and major initiatives. Specifically, it has proposed to (1) increase by two-thirds the number of its discretionary programs that use evaluation, performance measures, and other program data, (2) implement rigorous evaluations of its highest priority programs and initiatives, and (3) ensure that newly authorized discretionary programs include a rigorous evaluation component. However, Education has noted that linking performance of specific outcomes to federal education programs is complicated. For example, federal education funds often support state or local efforts, making it difficult to assess the federal contribution to performance of specific outcomes, and it can be difficult to isolate the effect of a single program given the multitude of programs that could potentially affect outcomes. There are also governmentwide strategies that may play an important role. Specifically, in January 2011, the President signed the GPRA Modernization Act of 2010 (GPRAMA), updating the almost two-decades-old Government Performance and Results Act (GPRA). Implementing provisions of the new act—such as its emphasis on establishing outcome-oriented goals covering a limited number of crosscutting policy areas—could play an important role in clarifying desired outcomes and addressing program performance spanning multiple organizations. Specifically, GPRAMA requires (1) disclosure of information about the accuracy and reliability of performance data, (2) identification of crosscutting management challenges, and (3) quarterly reporting on priority goals on a publicly available Web site. Additionally, GPRAMA significantly enhances requirements for agencies to consult with Congress when establishing or adjusting governmentwide and agency goals. 
The Office of Management and Budget (OMB) and agencies are to consult with relevant committees, obtaining majority and minority views, about proposed goals at least once every 2 years. This information can inform deliberations on spending priorities and help re-examine the fundamental structure, operation, funding, and performance of a number of federal education programs. However, to be successful, it will be important for agencies to build the analytical capacity to both use the performance information, and to ensure its quality—both in terms of staff trained to do the analysis and availability of research and evaluation resources. Where programs cross federal agencies, Congress can establish requirements to ensure federal agencies are working together on common goals. For example, Congress mandated—through the America COMPETES Reauthorization Act of 2010—that the Office of Science and Technology Policy develop and maintain an inventory of STEM education programs, including documentation of the effectiveness of these programs, assess the potential overlap and duplication of these programs, determine the extent of evaluations, and develop a 5-year strategic plan for STEM education, among other things. In doing so, Congress put in place a set of requirements designed to provide information to inform its decisions about strategic priorities. Consolidating existing programs is another option for Congress to address fragmentation, overlap, and duplication. In the education area, Congress consolidated several bilingual education programs into the English Language Acquisition State Grant Program as part of the 2001 ESEA reauthorization. As we reported prior to the consolidation, existing bilingual programs shared the same goals, targeted the same types of children, and provided similar services.
In consolidating these programs, Congress gave state and local educational agencies greater flexibility in the design and administration of language instructional programs. Congress has another opportunity to address these issues through the pending reauthorization of the ESEA. Specifically, to minimize any wasteful fragmentation and overlap among teacher quality programs, Congress may choose either to eliminate programs that are too small to evaluate cost effectively or to combine programs serving similar target groups into a larger program. Education has already proposed combining 38 programs into 11 programs in its reauthorization proposal, which could allow the agency to dedicate a higher portion of its administrative resources to monitoring programs for results and providing technical assistance. Congress might also include legislative provisions to help Education reduce fragmentation, such as by giving broader discretion to the agency to move resources away from certain programs. Congress could provide Education guidelines for selecting these programs. For example, Congress could allow Education discretion to consolidate programs with administrative costs exceeding a certain threshold or programs that fail to meet performance goals into larger or more successful programs. Finally, to the extent that overlapping programs continue to be authorized, they could be better aligned with each other in a way that allows for comparison and evaluation to ensure they are complementary rather than duplicative. In conclusion, removing and preventing unnecessary duplication, overlap, and fragmentation among federal teacher quality programs is clearly challenging. These are difficult issues to address because they may require agencies and Congress to re-examine within and across various mission areas the fundamental structure, operation, funding, and performance of a number of long-standing federal programs or activities.
Implementing provisions of GPRAMA—such as its emphasis on establishing priority outcome-oriented goals, including those covering crosscutting policy areas—could play an important role in clarifying desired outcomes, addressing program performance spanning multiple agencies, and facilitating future actions to reduce unnecessary duplication, overlap, and fragmentation. Further, by ensuring that Education conducts rigorous evaluations of key programs, Congress could obtain additional information on program performance to better inform its decisions on spending priorities. Sustained attention and oversight by Congress will also be critical. Thank you, Chairman Rehberg, Ranking Member DeLauro, and Members of the Subcommittee. This concludes my prepared statement. I would be pleased to answer any questions you may have.

For further information on this testimony please contact George A. Scott, Director, Education, Workforce, and Income Security, who may be reached at (202) 512-7215, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. This statement will be available at no charge on the GAO Web site at http://www.gao.gov.

Opportunities to Reduce Fragmentation, Overlap, and Potential Duplication in Federal Teacher Quality and Employment and Training Programs. GAO-11-509T. Washington, D.C.: April 6, 2011.
List of Selected Federal Programs That Have Similar or Overlapping Objectives, Provide Similar Services, or Are Fragmented Across Government Missions. GAO-11-474R. Washington, D.C.: March 18, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-441T. Washington, D.C.: March 3, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
Department of Education: Improved Oversight and Controls Could Help Education Better Respond to Evolving Priorities. GAO-11-194. Washington, D.C.: February 10, 2011.
Federal Education Funding: Overview of K-12 and Early Childhood Education Programs. GAO-10-51. Washington, D.C.: January 27, 2010.
English Language Learning: Diverse Federal and State Efforts to Support Adult English Language Learning Could Benefit from More Coordination. GAO-09-575. Washington, D.C.: July 29, 2009.
Teacher Preparation: Multiple Federal Education Offices Support Teacher Preparation for Instructing Students with Disabilities and English Language Learners, but Systematic Departmentwide Coordination Could Enhance This Assistance. GAO-09-573. Washington, D.C.: July 20, 2009.
Teacher Quality: Sustained Coordination among Key Federal Education Programs Could Enhance State Efforts to Improve Teacher Quality. GAO-09-593. Washington, D.C.: July 6, 2009.
Teacher Quality: Approaches, Implementation, and Evaluation of Key Federal Efforts. GAO-07-861T. Washington, D.C.: May 17, 2007.
Higher Education: Science, Technology, Engineering, and Mathematics Trends and the Role of Federal Programs. GAO-06-702T. Washington, D.C.: May 3, 2006.
Higher Education: Federal Science, Technology, Engineering, and Mathematics Programs and Related Trends. GAO-06-114. Washington, D.C.: October 12, 2005.
Special Education: Additional Assistance and Better Coordination Needed among Education Offices to Help States Meet the NCLBA Teacher Requirements. GAO-04-659. Washington, D.C.: July 15, 2004.
Special Education: Grant Programs Designed to Serve Children Ages 0-5. GAO-02-394. Washington, D.C.: April 25, 2002.
Head Start and Even Start: Greater Collaboration Needed on Measures of Adult Education and Literacy. GAO-02-348. Washington, D.C.: March 29, 2002.
Bilingual Education: Four Overlapping Programs Could Be Consolidated. GAO-01-657. Washington, D.C.: May 14, 2001.
Early Education and Care: Overlap Indicates Need to Assess Crosscutting Programs. GAO/HEHS-00-78. Washington, D.C.: April 28, 2000.
Education and Care: Early Childhood Programs and Services for Low-Income Families. GAO/HEHS-00-11. Washington, D.C.: November 15, 1999.
Federal Education Funding: Multiple Programs and Lack of Data Raise Efficiency and Effectiveness Concerns. GAO/T-HEHS-98-46. Washington, D.C.: November 6, 1997.
Multiple Teacher Training Programs: Information on Budgets, Services, and Target Groups. GAO/HEHS-95-71FS. Washington, D.C.: February 22, 1995.
Early Childhood Programs: Multiple Programs and Overlapping Target Groups. GAO/HEHS-95-4FS. Washington, D.C.: October 31, 1994.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the findings from our recent work on fragmentation, overlap, and potential duplication in federally funded programs that support teacher quality. We recently issued a report outlining opportunities to reduce potential duplication across a wide range of federal programs, including teacher quality programs. Our recent work on teacher quality programs builds on a long history of work where we identified a number of education programs with similar goals, beneficiaries, and allowable activities that are administered by multiple federal agencies. This work may help inform congressional deliberations over how to prioritize spending given the rapidly building fiscal pressures facing our nation's government. In recent years, the Department of Education (Education) has faced expanded responsibilities that have challenged the department to strategically allocate resources to balance new duties with ongoing ones. For example, we reported that the number of grants Education awarded increased from about 14,000 in 2000 to about 21,000 just 2 years later and has since remained around 18,000, even as the number of full-time equivalent staff decreased by 13 percent from fiscal years 2000 to 2009. New programs often increase Education's workload, requiring staff to develop new guidance and provide technical assistance to program participants. Our work examining fragmentation, overlap, and potential duplication can help inform decisions on how to prioritize spending, which could also help Education address these challenges and better allocate scarce resources. In particular, our recent work identified 82 programs supporting teacher quality, which are characterized by fragmentation and overlap. Fragmentation of programs exists when programs serve the same broad area of national need but are administered across different federal agencies or offices.
Program overlap exists when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. Overlap and fragmentation among government programs or activities can be harbingers of unnecessary duplication. Given the challenges associated with fragmentation, overlap, and potential duplication, careful, thoughtful actions will be needed to address these issues. This testimony draws upon the results of our recently issued report and our past work and addresses (1) what is known about fragmentation, overlap, and potential duplication among teacher quality programs; and (2) additional ways that Congress could minimize fragmentation, overlap, and duplication among these programs. We identified 82 distinct programs designed to help improve teacher quality administered across 10 federal agencies, many of which share similar goals. However, there is no governmentwide strategy to minimize fragmentation, overlap, or potential duplication among these programs. The fragmentation and overlap of teacher quality programs can frustrate agency efforts to administer programs in a comprehensive manner, limit the ability to determine which programs are most cost effective, and ultimately increase program costs. In addition, our larger body of work on federal education programs has also found a wide array of programs with similar objectives, target populations, and services across multiple federal agencies. In past work, GAO and Education's Inspector General have concluded that improved planning and coordination could help Education better leverage expertise and limited resources; however, given the large number of teacher quality programs and the extent of overlap, it is unlikely that improved coordination alone can fully mitigate the effects of the fragmented and overlapping federal effort. Sustained congressional oversight can also play a key role in addressing these issues.
Congress could address these issues through legislation, particularly through the pending reauthorization of the Elementary and Secondary Education Act of 1965 (ESEA), and Education has already proposed combining 38 programs into 11 programs in its reauthorization and fiscal year 2012 budget proposals. Further, actions taken by Congress in the past demonstrate ways this Subcommittee can address these issues. However, effective oversight may be challenging as many of the programs we identified, especially smaller programs, have not been evaluated.
Antimicrobial drugs are a broad class of drugs that combat many pathogens, including bacteria, viruses, fungi, or parasites. Antibiotics are a subset of these drugs that work against bacteria. Antibiotics work by killing the bacteria directly or halting their growth. According to WHO, the evolution of strains of bacteria that are resistant to antibiotics is a natural phenomenon that occurs when microorganisms exchange resistant traits; however, WHO also states that the use and misuse of antimicrobial drugs, including antibiotics, accelerates the emergence of resistant strains. Antibiotic resistance began to be recognized soon after penicillin, one of the first antibiotics, came into use over 70 years ago. Antibiotic-resistant bacteria can spread from animals and cause disease in humans through a number of pathways (see fig. 1). The use of antibiotics in animals is an integral part of food animal production. To improve efficiencies, modern industrial farms raise animals in high concentrations, but this practice has the potential to spread disease because animals live in close confinement. Long-term, low-dose treatments with antibiotics may help prevent diseases, particularly where animals are housed in large groups in close confinement facilities, such as concentrated animal feed operations. The concentrated nature of such agricultural operations means that a disease, if it occurs, can spread rapidly and become quickly devastating—increasing the need to rely on antibiotics as a preventive measure.
The purposes for which FDA approves the use of antibiotics can be divided into four categories: to treat animals that exhibit clinical signs of disease; to control a disease in a group of animals when a proportion of them exhibit clinical signs of disease; to prevent disease in a group of animals when none are exhibiting clinical signs of disease, but disease is likely to occur in the absence of an antibiotic; or to promote faster weight gain (growth promotion) or weight gain with less feed (feed efficiency). Antibiotics for food animals are administered either by mixing them into feed or water, or by injection and other routes. For example, according to representatives from the poultry industry, the majority of antibiotics used in poultry production are administered through feed and water. In lactating dairy cattle, mastitis—an inflammation of the udder—is the most common reason for antibiotic use and antibiotics are given by injection either to treat or prevent disease, according to representatives from the National Milk Producers Federation. Antibiotics for food animals may be sold or dispensed in several ways, with varying levels of restriction. Some antibiotics may be purchased over-the-counter and used by producers without veterinarian consultation or oversight. Certain antibiotics added to feed must be accompanied by a veterinary feed directive, a type of order for this use. The directive authorizes the producer to obtain and use animal feed containing a certain drug or drug combination to treat the producer’s animals in accordance with the conditions for use approved by FDA. Some antibiotics may require a prescription from a licensed veterinarian. Although veterinarians may prescribe most approved drugs “extra label” (for a species or indication other than those on the drug label), restrictions on the extra-label use of antibiotics in food animals exist. 
For example, no extra-label use of approved drugs, including antibiotics, is legally permissible in or on animal feed, according to FDA officials. Certain types of drugs, including some types of antibiotics, are prohibited from extra-label use in food animals under any circumstances because the use of these drugs may lead to antibiotic resistance in humans (e.g., fluoroquinolones—broad-spectrum antibiotics that play an important role in treatment of serious bacterial infections, such as hospital-acquired infections). Antibiotics used for food animals can be the same, or belong to the same drug classes, as those used in human medicine. FDA and WHO have sought to identify antibiotics that are used in both animals and humans and that are important to treat human infections—such antibiotics are known as medically important antibiotics. In 2003, FDA issued guidance to industry on the use of antibiotics in food animals, which included a list of antibiotics that it considers important to human medicine. In this guidance, FDA ranked each antibiotic according to its importance in human medicine, as “critically important” (the highest ranking), “highly important,” or “important” based on criteria that focused on antimicrobials, including antibiotics, used to treat foodborne illness in humans. Similarly, WHO developed criteria for ranking antimicrobials, including antibiotics, according to their importance in human medicine and first ranked them in 2005. Two federal departments are primarily responsible for ensuring the safety of the U.S. food supply, including the safe use of antibiotics in food animals—HHS and USDA. Each department contains multiple agencies that contribute to the national effort to control, monitor, and educate others on antibiotic use and resistance. For example, HHS’s CDC and FDA as well as USDA’s APHIS and FSIS have responsibilities related to the White House’s 2015 National Action Plan for Combating Antibiotic-Resistant Bacteria.
The plan identifies several goals, including a goal to slow the development of resistant bacteria and prevent the spread of resistant infections as well as a goal to strengthen national “one-health” surveillance efforts to combat resistance, which include collecting data on antibiotic use and resistance. The “one-health” concept recognizes that the health of humans, animals, and the environment are interconnected. Table 1 provides information on selected agencies’ efforts related to antibiotic resistance. To help ensure public health and the safety of the food supply, HHS’s CDC leads investigations of multi-state foodborne illness outbreaks, including those involving antibiotic-resistant pathogens, and collaborates with USDA, FDA, and state public health partners in this effort. To identify an outbreak, CDC monitors data voluntarily reported from state health departments on cases of laboratory-confirmed illness and analyzes these data to identify elevated rates of disease that may indicate an outbreak, according to CDC officials. According to CDC’s website, determining the food source of human illness is an important part of improving food safety. In general, foods often associated with foodborne illnesses include raw foods of animal origin—meat, poultry, eggs, and shellfish, and also unpasteurized (raw) milk—that can cause infections if undercooked or through cross-contamination. Since 2011, HHS has increased veterinary oversight of antibiotics in food animals and, along with USDA, collected additional data on antibiotic use and resistance, but gaps exist in oversight and data collection, and the impact of the agencies’ efforts is unknown. For medically important antibiotics administered in animal feed and water, HHS’s FDA increased veterinary oversight and prohibited certain uses through a combination of guidance and regulation. 
In addition, agencies in HHS and USDA made several improvements in collecting and reporting data on antibiotic sales, resistance, and use. However, the agencies’ actions do not address oversight gaps such as long-term and open-ended use of medically important antibiotics for disease prevention or collection of farm-specific data, and FDA and APHIS do not have measures to assess the impact of their actions. To promote the judicious use of antibiotics in food animals, FDA increased veterinary oversight of medically important antibiotics in feed and water through voluntary guidance to industry and by revising the veterinary feed directive regulation. As a result, as of January 2017, medically important antimicrobials, including antibiotics, in the feed and water of food animals may only be used under the supervision of licensed veterinarians, according to FDA officials (see app. II for a list of these drugs).

Voluntary Guidance to Industry. In 2012, FDA finalized guidance that lays out a strategy for phasing out the use of medically important antibiotics for growth promotion or feed efficiency, and for bringing other uses under veterinary oversight. Specifically, in Guidance for Industry #209, FDA outlined and recommended adoption of two principles for judicious use of antibiotics in food animals: (1) limit medically important antibiotics to uses that are considered necessary for assuring animal health, such as to prevent, control, and treat diseases, and (2) limit antibiotic uses to those that include veterinary oversight. In 2013, to help ensure implementation of its strategy, FDA issued Guidance for Industry #213, which asked animal drug companies to voluntarily stop labeling antibiotics for growth promotion or feed efficiency within 3 years. The guidance also recommended more veterinary oversight.
Specifically, FDA (1) asked drug companies to voluntarily revise labels of medically important antibiotics to remove the use for growth promotion and feed efficiency; (2) outlined procedures for adding, where appropriate, scientifically supported uses for disease treatment, control, or prevention; and (3) recommended that companies change the means of sale or dispensation from over-the-counter to require veterinary oversight—either through a veterinary feed directive for antimicrobials administered through feed or through a prescription for antimicrobials administered through water—by December 31, 2016. According to FDA, as of January 3, 2017, all applications for medically important antimicrobials, including antibiotics, for use in the feed or water for food animals have been aligned with the judicious use principles as recommended in Guidance for Industry #213, or their approvals have been voluntarily withdrawn. As a result of these actions, these products cannot be used for production purposes (e.g., growth promotion) and may only be used under the authorization of a licensed veterinarian, according to FDA.

Agencies Respond to Colistin Resistance

In May 2016, the U.S. Department of Defense identified the first person in the United States to be carrying E. coli bacteria with a gene that makes bacteria resistant to colistin. The U.S. Department of Agriculture (USDA) also found colistin-resistant E. coli in samples collected from the intestines of two pigs. According to the U.S. Department of Health and Human Services (HHS), these discoveries are of concern because colistin is used as a last-resort drug to treat patients with multidrug-resistant infections. Finding colistin-resistant bacteria in the United States is important because in 2015 scientists in China first reported that colistin resistance can be transferred across bacteria via a specific gene.
HHS and USDA are continuing to search for evidence of colistin-resistant bacteria in the United States through the National Antimicrobial Resistance Monitoring System, according to the HHS website. According to officials from HHS’s Centers for Disease Control and Prevention, the agency is also expanding the capability of public health laboratories to conduct surveillance.

Guidance for Industry #213 further defined medically important antimicrobials, including antibiotics, as those listed in FDA’s ranking of drug classes and class-specific products based on importance to human medicine. According to FDA officials, the agency plans to update this list in the near future, and the update will address whether to add or remove drug classes and class-specific products, as well as the need to update the relative rankings of these drug classes and class-specific products. Colistin—an antibiotic used as the last line of medical treatment for certain infections—is not listed in the ranking of drugs and drug classes. However, according to FDA officials, the ranking of a closely related drug (polymyxin B) covers colistin’s relative importance to human medicine, and colistin has never been marketed for use in animals in the United States.

Veterinary Feed Directive Final Rule. In light of the 2013 guidance asking animal drug companies to change the labels of medically important antibiotics to bring them under veterinary oversight (Guidance for Industry #213), in June 2015, FDA issued a final rule revising its existing veterinary feed directive regulation to define minimum requirements for a valid veterinarian-client-patient relationship, among other things. The final rule requires a licensed veterinarian to issue the directive in the context of a valid veterinarian-client-patient relationship as defined by the state where the veterinarian practices medicine or by the federal standard in the absence of an appropriate state standard that applies to veterinary feed directive drugs.
There are three key elements of the veterinarian-client-patient relationship: (1) the veterinarian engages with the client (e.g., animal producer) to assume responsibility for making clinical judgments about animal health, (2) the veterinarian has sufficient knowledge of the animal by virtue of an examination and visits to the facility (e.g., farm) where the animal is managed, and (3) the veterinarian provides for any necessary follow-up evaluation or care. The veterinarian is also responsible for ensuring the directive is complete and accurate. For example, the directive must include the approximate number of animals to be fed the medicated feed. The final rule also (1) established a 6-month expiration date for directives unless an expiration date shorter than 6 months is specified in the drug’s approved labeling; (2) limited refills to those listed on the product’s label; and (3) established a 2-year recordkeeping requirement for producers, veterinarians, and feed distributors. Since 2011, agencies within HHS and USDA have made several improvements in collecting and reporting data on antibiotic sales, resistance, and use. In 2014, FDA enhanced its annual summary report on antimicrobials sold or distributed for use in food animals. The enhanced annual report includes additional data tables on the importance of each drug class in human medicine; the approved routes of administration for antibiotics; whether antibiotics are available over-the-counter or require veterinary oversight; and whether the drug products are approved for therapeutic (disease prevention, control, or treatment) purposes, production purposes (e.g., growth promotion), or both therapeutic and production purposes. 
In 2016, FDA finalized a rule requiring drug companies to report sales and distribution of antimicrobials, including medically important antibiotics approved for use in specific food animals (cattle, swine, and poultry—chickens and turkeys) based on an estimated percentage of total annual sales. According to FDA documents, the additional data will improve FDA’s understanding of how antibiotics are sold or distributed for use in food animals and help the agency further target its efforts to ensure judicious use of medically important antibiotics. Before the rule was finalized, however, some organizations cautioned that the proposed requirement for drug companies to submit species-specific estimates of antibiotic product sales and distribution for use in food animal species would not result in useful data, in part, because sales are not a proxy for antibiotic use. FDA’s action partially addressed our 2011 recommendation to provide sales data by food animal group and indication for use. Federal agencies have made several improvements to the National Antimicrobial Resistance Monitoring System—the national public health surveillance system that tracks changes in the antibiotic susceptibility of bacteria found in ill people, retail meats, and food animals. Specifically, beginning in 2013, FSIS collected random samples from animal intestines at slaughter plants, including chickens, turkeys, swine, and cattle, in addition to non-random sampling under its regulatory program. In 2013, FDA also expanded its retail meat sampling to collect data from laboratories in three new states: Louisiana, Missouri, and Washington. This increased the number of states from 11 to 14. In addition, FDA increased retail meat samples from 6,700 in 2015 to 13,400 in 2016 by requiring the 14 participating laboratories to double the amount of food samples purchased and tested.
In 2017, FDA plans to add another five states (Iowa, Kansas, South Carolina, South Dakota, and Texas) to retail meat testing, which will raise the total retail meat samples to more than 17,000 annually, according to FDA officials. FSIS and FDA actions addressed our recommendation from 2011 to modify slaughter and retail meat sampling to make the data more representative of antibiotic resistance in bacteria in food animals and retail meat throughout the United States. Figure 2 summarizes the data collected through the National Antimicrobial Resistance Monitoring System. Since 2011, FDA in collaboration with USDA’s Agricultural Research Service has also initiated pilot projects to explore antibiotic-resistant bacteria on the farm and at slaughter for each major food animal group (swine, beef and dairy cattle, chickens, and turkeys). The purpose of the pilot projects was (1) to begin assessing similarities and differences between bacteria and antibiotic resistance on the farm and at the slaughter plant and (2) to determine the feasibility and value of surveillance on farms as a possible new element of the National Antimicrobial Resistance Monitoring System, including the collection of antibiotic use information from farms in a confidential manner. To collect data from farms, federal agencies collaborated with academia to obtain data from producers. According to FDA officials, USDA can use information from the pilot projects to determine options for examining antibiotic resistance in a group of food animals over time (e.g., longitudinal on-farm studies). In 2016, for the first time, CDC, FDA, and USDA published the National Antimicrobial Resistance Monitoring System report with data from whole genome sequencing—cutting-edge technology which characterizes an organism’s (individual bacterium) complete set of genes. 
According to FDA officials, this represents a very significant advancement in surveillance that will provide definitive information about the genes causing resistance, including resistance compounds not currently fingerprinted, along with details on other important features of a bacterium. In addition, new web-based reporting tools are being deployed to foster timely data sharing and to allow stakeholders to explore isolate-level antibiotic-resistance data in new ways. For example, in August 2015, FDA made available on its website 18 years of National Antimicrobial Resistance Monitoring System isolate-level data on bacteria. Since 2011, USDA agencies have collected additional antibiotic use data through national surveys of producers and engaged in efforts to leverage industry data. In particular, APHIS, through the National Animal Health Monitoring System, collected additional antibiotic use data through its national survey of producers of dairy cattle (2011 and 2014), beef cattle (2011), laying hens (2013), and swine (2012). Through these surveys, APHIS generally collects information on the amount and duration of antibiotic use; reason for use; antibiotic name; and the route of administration, such as feed, water, and injection; among other things. APHIS also may collect biological samples from animals and test these samples for antibiotic resistance of foodborne pathogens; producers receive results of biological sample testing. According to APHIS officials, the agency is planning to collect data annually on antibiotic use on swine farms and beef cattle feedlots using similar surveys, with additional questions on stewardship and judicious use of antibiotics. USDA’s Economic Research Service and National Agricultural Statistics Service also conducted national surveys of producers of swine (2015) and chicken (2011) to collect data on farm finances and production practices, including antibiotic use.
The surveys were components of the annual Agricultural Resource Management Survey, which is primarily focused on farm finances, commodity costs of production, and farm production practices. The surveys captured quantitative information on the extent of antibiotic use and the types of farms that use antibiotics for growth promotion and prevention. USDA has used these data to estimate the impact of antibiotic use on production outcomes. Furthermore, APHIS provided input on a survey that the poultry industry began developing in 2015 to collect farm-specific data. Representatives from the poultry industry told us that they plan to share aggregated survey data with APHIS and FDA when the data collection and report are finalized.

Despite agencies' enhanced oversight and data collection efforts, several gaps exist in the oversight of medically important antibiotics in food animals—specifically, antibiotics with no defined duration of use on their labels and antibiotics administered by routes other than feed and water (e.g., injection). Moreover, gaps that we identified in 2011 in farm-specific data on antibiotic use and resistance in bacteria persist.

FDA's guidance to industry has improved oversight of some antibiotics, but it does not address long-term and open-ended use of medically important antibiotics for disease prevention because some antibiotics do not have defined durations of use on their labels. For example, some currently approved labels do not have a defined duration of use, such as "feed continuously for 5 days"; instead, labels may read "feed continuously," according to FDA officials. In September 2016, FDA issued a notice in the Federal Register seeking public comment on how to establish appropriately targeted durations of use for medically important antimicrobial drugs, including the approximately 32 percent of therapeutic antibiotic products affected by Guidance for Industry #213 with no defined duration of use.
FDA officials told us the agency will consider public comments as it develops a process for animal drug companies to establish appropriate durations of use for labels already in use. However, FDA has yet to develop this process, including time frames for implementation. In an October 2016 report, one stakeholder organization recommended that FDA announce a plan and timeline for making all label revision changes regarding duration limits and other aspects of appropriate use as quickly as possible to ensure labels follow the judicious use of antibiotics in food animals. Under federal standards for internal control, management should define objectives clearly to enable the identification of risk and define risk tolerances; for example, in defining objectives, management may clearly define what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement. Without developing a process, which may include time frames, to establish appropriate durations of use on labels of all medically important antibiotics, FDA will not know whether it is achieving its objective of ensuring judicious use of medically important antibiotics in food animals. FDA’s Guidance for Industry #213 also does not recommend veterinary oversight of over-the-counter medically important antibiotics administered in injections or through other routes besides feed and water (e.g., tablets). According to FDA officials, the agency focused first on antibiotics administered in feed and water because officials believed these antibiotics represent the majority of antibiotics sold and distributed and therefore they posed a higher risk to human health. According to FDA’s 2014 sales data report on antimicrobials, approximately 5 percent of medically important antibiotics are sold for use in other routes. 
Representatives of two veterinary organizations we interviewed support veterinary oversight of medically important antibiotics administered by other routes, such as injections. In October 2016, FDA officials told us the agency is developing a plan that outlines its key activities over the next 5 years to further support antimicrobial stewardship in veterinary settings, including addressing veterinary oversight of other dosage forms of medically important antibiotics. According to FDA officials, the agency intended to publish the plan by the end of 2016 and to initiate steps by the end of fiscal year 2019. However, FDA was unable to provide us with this plan or specifics about the steps outlined in the plan because it was still under development. In the interim, on January 3, 2017, FDA broadly outlined on its website its key initiatives to support antimicrobial stewardship in veterinary settings, but the outline does not provide enough detail to know if steps will be established to increase veterinary oversight of medically important antibiotics administered in routes other than feed and water. As previously discussed, under federal standards for internal control, management should define objectives clearly to enable the identification of risk and define risk tolerances; for example, in defining objectives, management may clearly define what is to be achieved and the time frames for achievement, among other things. Without a published plan documenting the steps to increase veterinary oversight of medically important antibiotics administered through routes other than feed and water, such as injections and tablets, FDA will not know whether it is making progress in achieving its objective of ensuring judicious use of medically important antibiotics in food animals.
Stakeholders we spoke with also identified and reported other potential gaps in FDA’s actions to increase veterinary oversight, such as (1) gaps in oversight of antibiotics used for disease prevention and (2) gaps in some producers’ knowledge of FDA’s actions and in their access to veterinarians. Representatives of consumer advocacy organizations told us the use of antibiotics for disease prevention in food animals is a problem because it promotes the routine use of antibiotics in healthy food animals. According to FDA documents, the agency believes that the use of antibiotics for disease prevention is necessary to assure the health of food animals and that such use should be appropriately targeted to animals at risk for a specific disease. Some producers and companies have already taken steps to eliminate the use of medically important antibiotics in food animals, including uses for disease prevention. For example, we interviewed representatives from companies (restaurant and producers) that sell meat and poultry products with “no antibiotic use” label claims, denoting products from animals raised without the use of any antibiotics or medically important antibiotics, even for disease prevention (see app. III for more information on companies’ efforts). In 2016, the Farm Foundation summarized findings from 12 workshops on FDA’s actions and one of the findings was that small- and medium-sized producers did not have sufficient knowledge about FDA’s actions to increase veterinary oversight of medically important antibiotics. In addition, some producers may lack access to veterinarians. In 2015, FDA announced the availability of a guidance document in the form of answers to questions about veterinary feed directive final rule implementation to help small businesses, including producers, comply with the revised regulation. 
According to FDA officials, the agency continues to respond to questions from stakeholders regarding the use of medically important antimicrobials, including antibiotics, in food animals and has planned numerous outreach activities in 2017.

Gaps in farm-specific data on antibiotic use and resistance in food animals have persisted since we last reported on them in 2011. Agencies are making efforts to address these gaps, but they are doing so without a joint plan, which we previously recommended. A joint plan is necessary to further assess the relationship between antibiotic use and resistance in bacteria, and it could help ensure efficient use of resources in a time of budget constraints. In 2004 and 2011, we found numerous gaps in farm-specific data stemming from limitations in the data collected by the agencies. In this review, we found that the limitations we identified in 2011 remain and that data gaps have not been fully addressed. For example, according to CDC officials, there are still critical gaps in antibiotic use data, including the amount and specific types of antibiotics used across the various food animals and the indications for their use; these data are needed to further assess the relationship between antibiotic use and resistance in bacteria. Moreover, these data are important for assessing the impact of actions being implemented by FDA to foster the judicious use of medically important antimicrobial drugs, including the use of antibiotics in food animals, according to FDA officials. Table 2 shows limitations in federal efforts to collect farm-specific data on antibiotic use and resistance in bacteria in food animals. HHS and USDA are making individual efforts to gather additional data on antibiotic use and resistance at the farm level, but officials stated that they face funding constraints.
For example, in 2014, APHIS proposed initiatives as part of USDA's plan to improve collection of antibiotic use and resistance data on farms, including enhancements to two on-farm surveys and the initiation of longitudinal on-farm studies to collect data across time on antibiotic use, antibiotic resistance in bacteria, and management practices. According to USDA's fiscal year 2016 budget summary and annual performance plan, the President's budget included a $10 million increase for APHIS' contribution to the government-wide initiative to address antimicrobial resistance. APHIS would have used the increased funding to implement the farm-specific data collection initiatives, according to APHIS officials. However, according to USDA's Office of Inspector General, the funding was not approved. As noted above, in 2016 APHIS developed study designs for the two proposed on-farm surveys for antibiotic use on cattle feedlots and at swine operations, but the agency has not collected data because, according to USDA, additional funding has not been secured. In March 2016, USDA's Office of Inspector General found inadequate collaboration in USDA's budget process to request funds for antibiotic resistance efforts and recommended that the Agricultural Research Service, FSIS, and APHIS work together to establish antibiotic resistance priorities related to budget requirements that also communicate agency interdependency. Subsequently, APHIS collaborated with FSIS and the Agricultural Research Service in developing its fiscal year 2017 budget request to increase the likelihood of receiving funding. Similarly, according to HHS's fiscal year 2016 FDA justification of estimates for appropriations committees, the President requested a funding increase of $7.1 million for FDA to achieve its antibiotic stewardship goals, including collection of data related to the use of antibiotics in food animals.
According to the Presidential Advisory Council on Combating Antibiotic-Resistant Bacteria, however, FDA did not receive those funds. According to FDA, using existing fiscal year 2016 funds, in March 2016, the agency made some progress in data collection and issued a request for proposals to collect antibiotic use and resistance data on farms. In August 2016, FDA entered into two cooperative agreements with researchers for antibiotic use and resistance data collection; the awardees will develop a methodology to collect detailed information on antibiotic drug use in one or more of the major food animal groups (cattle, swine, chickens, and turkeys), according to FDA officials. The data collection efforts are expected to provide important information on data collection methodologies to help optimize long-term strategies for collecting and reporting such data, according to FDA officials. Moreover, FDA, CDC, and USDA formed a working group and proposed an analytic framework to associate foodborne bacteria resistance with antibiotic use in food animals. However, the agencies are conducting these efforts without a joint data collection plan, thus risking inefficient use of their limited resources. In 2004, we recommended that HHS and USDA jointly develop and implement a plan for collecting data on antibiotic use in food animals. In addition, in 2011, we recommended that HHS and USDA identify potential approaches for collecting detailed data on antibiotic use in food animals, collaborate with industry to select the best approach, seek any resources necessary to implement the approach, and use the data to assess the effectiveness of policies to curb antibiotic resistance. HHS and USDA generally agreed with our recommendations but have still not developed a joint plan or selected the best approach for collecting these data. 
HHS and USDA officials told us they are continuing to make progress towards developing a joint data collection plan but that funding has been an impediment. In September 2015, FDA, CDC, and USDA agencies, including APHIS, held a jointly sponsored public meeting to present current data collection efforts and obtain public input on possible approaches for collecting additional farm-specific antibiotic use and resistance data. In June 2016, FDA stated that it is collaborating with USDA and CDC to develop the data collection plan and is still reviewing September 2015 public comments on data collection; however, the continued lack of funding will significantly impact the ability to move forward with a plan, according to FDA, APHIS, and CDC officials.

The White House's 2015 National Action Plan for Combating Antibiotic-Resistant Bacteria calls for agencies to strengthen one-health surveillance through enhanced monitoring of antibiotic-resistance patterns, as well as antibiotic sales, usage, and management practices, at multiple points in the production chain for food animals and retail meat. Moreover, in the 1-year update on the National Action Plan, the President's task force recommended that federal agencies coordinate with each other to ensure maximum synergy, avoidance of duplication, and coverage of key issues. It is unclear whether FDA, CDC, and APHIS will develop a joint plan to collect antibiotic use and resistance data at the farm level and whether agencies' individual current data collection efforts are coordinated to ensure the best use of resources. We continue to believe that developing a joint plan for collecting data to further assess the relationship between antibiotic use and resistance in bacteria at the farm level is essential and will help maximize resources and reduce the risk of duplicating efforts at a time when resources are constrained.
FSIS has developed a performance measure to assess the impact of its actions to manage the use of antibiotics in food animals, but FDA and APHIS have not done so. The GPRA Modernization Act of 2010 requires federal agencies such as HHS and USDA to develop and report performance information—specifically, performance goals, measures, milestones, and planned actions. We have previously found that these requirements can also serve as leading practices for planning at lower levels (e.g., FDA and APHIS) within agencies; moreover, developing goals and performance measures can help an organization balance competing priorities, particularly if resources are constrained, and help an agency assess progress toward intended results. Numerical targets are a key attribute of performance measures because they allow managers to compare planned performance with actual results. In this context, FSIS’s performance measure, included in its fiscal year 2017-2021 strategic plan, relates to sampling of antibiotic-resistant bacteria. Specifically, the performance measure is the percentage of FSIS slaughter meat and poultry samples that will undergo whole genome sequencing, including antibiotic-resistance testing, to assess the impact of the agency’s surveillance of antibiotic-resistant bacteria in slaughtered food animals. FDA and APHIS officials agree that performance measures are needed to assess the impact of their actions to manage the use of antibiotics in food animals. According to the White House’s 2015 National Action Plan for Combating Antibiotic-Resistant Bacteria, metrics should be established and implemented to foster stewardship of antibiotics in food animals within 3 years. FDA has a goal to enhance the safety and effectiveness of antibiotics and an objective to reduce risks in antibiotics by supporting efforts to foster the judicious use of medically important antibiotics in food animals. 
FDA’s actions to achieve this objective include developing voluntary guidance to industry and revising its veterinary feed directive regulation, as noted above. However, FDA does not yet have performance measures to assess the impact of these actions in achieving its goal and objective even though its revised regulation has already been implemented and actions recommended in its guidance were implemented as of January 2017. FDA officials told us the agency is taking steps to develop performance measures. In July 2016, FDA began reaching out to APHIS and producer groups to collaboratively develop metrics, according to FDA and APHIS officials. Furthermore, according to agency officials, FDA is collecting data in a pilot program for the veterinary feed directive to establish a baseline for compliance, which is needed to develop a measure. FDA officials told us developing measures is a challenge without funding to support farm-specific data to assess changes in antibiotic use practices and adherence to its guidance documents. It is unclear when FDA’s efforts to develop performance measures will be completed. Without developing performance measures and targets for its actions, FDA cannot assess the impact of its guidance to industry and its revised regulation in meeting the goal of enhancing the safety and effectiveness of antibiotics by fostering the judicious use of medically important antibiotics in food animals. Similar to FDA, APHIS does not have performance measures to assess the impact of its antibiotic use and resistance data collection efforts. In March 2016, APHIS agreed to develop goals and identify measures for its antibiotic resistance efforts by March 2017 as recommended by the USDA Office of Inspector General. However, little progress has been made. According to APHIS officials, if the agency does not receive new funding in fiscal year 2017 for antibiotic use and resistance activities, development of related goals and measures will be delayed. 
According to USDA’s 2012 report on antibiotic resistance, few useful metrics (i.e., performance measures) exist for gauging progress toward stated data collection goals. The report also stated that having defined metrics available would allow more appropriately focused efforts for monitoring antibiotic use and resistance and allow greater “buy in” among stakeholder groups for the monitoring efforts and their resulting actions. APHIS officials told us that performance measures and targets are needed, and in July 2016, the agency began discussions with FDA and others about developing metrics, as noted above. Without developing performance measures and targets for its actions, APHIS cannot assess the impact of collecting farm-specific data on antibiotic use and resistance in meeting its goal to protect agricultural resources through surveillance for antibiotic-resistant bacteria.

To manage the use of antibiotics in food animals and combat the emergence and spread of antibiotic-resistant bacteria, the Netherlands, Canada, Denmark, and the EU have taken actions to strengthen the oversight of veterinarians’ and producers’ use of antibiotics and to collect farm-specific data. In addition, the Netherlands and Denmark have set targets for reducing the use of antibiotics, and the EU has called for measurable goals and indicators for antimicrobial use and resistance. To strengthen oversight and collect farm-specific data on antibiotic use in food animals, the Netherlands primarily relied on a public-private partnership, whereas Canada, Denmark, and the EU relied on government policies and regulations. After taking these actions, the use or sales (depending on how the data were reported) of antibiotics for food animals decreased in Denmark, the Netherlands, and the EU, and data collection on antibiotic use improved in all three countries and the EU.
Beginning in 2008, the Netherlands’ food animal (cattle, veal, chicken, and swine) industries, national veterinary association, and government developed a public-private partnership to strengthen oversight of veterinarians’ prescriptions and producers’ use of antibiotics. This partnership was also used to collect farm-specific data. Government officials we interviewed from the Ministries of Health and Economic Affairs told us that in the past the Netherlands was one of the highest users of antibiotics in food animals in Europe. As a result of the partnership’s actions, from 2009 through 2015, antibiotic sales fell by over 50 percent, according to government documents. As part of the partnership, industry strengthened oversight of producers’ use of antibiotics through quality assurance programs—producer education and certification programs that set standards for animal production, including the use of antibiotics—and the national veterinary association established additional guidelines and policies for veterinarians. According to the Ministry of Economic Affairs, building on these actions, the government adopted new statutes and regulations that incorporated some of the oversight activities that industry and veterinary organizations had established, such as restricting the use of antibiotics that are important to human health, implementing herd health plans, and developing prudent use guidelines. Similar to the Netherlands, U.S. producers and veterinarians participate in quality assurance programs and take action to promote judicious use of antibiotics, according to documents we reviewed from U.S. industry and veterinarian organizations. For example, some producers in the United States stopped the use of antibiotics for growth promotion prior to U.S. government action. The public-private partnership in the Netherlands also established a process for the continuous collection of farm-specific antibiotic use data.
Specifically, in 2011, the different food animal industries and veterinary organizations leveraged their existing processes and infrastructure to create one centralized database for veterinarians and producers to report antibiotic prescriptions and use. In contrast, the United States relies primarily on an on-farm survey to collect antibiotic use data on a specific food animal every 5 to 7 years, as noted above. In 2010, the Netherlands’ government, food animal industries, and national veterinary association jointly financed an independent entity, the Netherlands Veterinary Medicines Authority, to analyze antibiotic use data and veterinary prescription patterns to produce annual antibiotic use reports, according to Dutch government documents. Representatives from the independent entity told us that the Netherlands’ government funds 50 percent of the cost and the food animal industries and veterinarians fund the remaining 50 percent. The Netherlands Veterinary Medicines Authority uses the data submitted by producers and veterinarians to define annual benchmarks regarding both the quantity and the types of antibiotics used within each sector. The industries use this information to monitor producers’ antibiotic use and veterinarians’ prescriptions, and they work with individuals who exceed the benchmarks to reduce use. According to Dutch government documents and officials, anonymized and aggregated data—including the amounts of antibiotics given, types of antibiotics, and number of animals that each veterinarian oversees—are shared with the government for a variety of purposes, such as annual reports and other studies. Additionally, in 2016 the Netherlands Veterinary Medicines Authority published a report finding that reductions in antimicrobial usage, including antibiotics, were associated with reductions in the prevalence of antimicrobial-resistant E. coli in fecal samples from veal calves, pigs, and young chickens.
Dutch government officials told us that moving forward a variety of issues must be addressed, including overuse of antibiotics by veterinarians and producers—for example, in the veal and cattle sectors, which are challenged in decreasing antibiotics while keeping animals healthy. Similarly, a representative from a veterinary organization told us that under the new policies, veterinarians are challenged with greater administrative and record-keeping burdens. The Netherlands’ collaboration with industry is similar to some actions taken in the United States, such as the U.S. poultry industry’s effort to develop an on-farm antibiotic use survey and its plan to share aggregate survey data with APHIS and FDA, as discussed above. Additionally, FDA is actively engaging stakeholders to leverage public-private partnerships and collaboration to collect farm-specific data, according to FDA officials. However, the United States has no practice comparable to benchmarking. According to APHIS officials, benchmarking and measuring producers’ use and veterinarians’ prescriptions of antibiotics would require major infrastructure and technological investments for data capture, analysis, and reporting, and for educating producers and veterinarians regarding use of the data. According to representatives from an animal health company, it may not be feasible for the United States to adopt practices from the Netherlands because it would require similar or equal veterinary practice laws across all states. The Canadian government is working toward integrating federal and province-level policies on antibiotic use and collects farm-specific antibiotic use and resistance data on some species. The 2015 Canadian national action plan on antibiotic use and resistance calls for integration of federal-level and province-level policies and lists specific activities along with completion dates. 
Officials we interviewed from a Canadian food safety agency told us that Canada is developing a framework to align national and province-level veterinary oversight efforts and increase collaboration between these levels of government. Additionally, officials from a Canadian agency that regulates medical products told us that the federal government is working on a policy initiative to increase veterinary oversight over all medically important antimicrobials used in food animal production and that, as part of this initiative, they are working with provinces to ensure the streamlined transition of over-the-counter medically important antibiotics to prescription status. The national action plan also identifies the need for continued government support of industry-led quality assurance programs that address judicious use of antibiotics in food animals. For example, the Chicken Farmers of Canada’s On-Farm Food Safety Assurance program requires producers to keep records, called flock sheets, on each chicken flock. These sheets capture information related to animal health, including any antibiotics given to the bird during production, and must be presented prior to slaughtering. This differs from the United States where the poultry industry is vertically integrated—meaning that individual poultry companies own or contract for all phases of production and processing. Because of this integration, flock health information and production practices in the United States, including antibiotics used in feed or administered by a veterinarian, are maintained by the poultry company and not individual farmers. The national action plan also states that Canada is working toward removing growth promotion claims on antibiotics labels, similar to the U.S. approach, and that the pharmaceutical industry has voluntarily committed to comply by December 2016. 
According to one Canadian government official, data on antibiotic use in food animals have improved in recent years as a result of refinements to antibiotic sales data as well as farm-specific monitoring of antibiotic use in chickens, which has allowed officials to observe a relationship between changes in antibiotic use and resistance. For example, current data from the Canadian Integrated Program for Antimicrobial Resistance Surveillance show changes in resistant bacteria, isolated from chickens, associated with an intervention led by the poultry industry that focused on reducing the preventative use of a type of antibiotic called cephalosporin, according to Canadian government documents. According to an official from the Canadian Integrated Program for Antimicrobial Resistance Surveillance whom we interviewed, the Canadian system is similar to the National Antimicrobial Resistance Monitoring System in the United States; however, unlike the U.S. system, the Canadian system has a farm surveillance component that captures information on antibiotic use, antibiotic resistance, and farm characteristics. The 2013 annual report from the Canadian Integrated Program for Antimicrobial Resistance Surveillance states that Canada initiated this surveillance component in a sample of farms in five major pork-producing provinces and in four major poultry-producing provinces in 2006 and 2013, respectively. In 2014, a total of 95 swine farms and 143 chicken farms participated in this voluntary program, according to the most recent (2014) annual report. The Canadian government compensates veterinarians to collect samples and gather data from each participating farm, according to a Canadian government official. Representatives from a veterinary organization we interviewed told us that surveillance data are good for looking at trends but that such data are limited and not appropriate to determine whether a producer is misusing antibiotics.
One representative of the swine industry similarly told us that data collected from sample pig farms are limited and, to be more statistically representative of the industry, should be broadened to be more geographically representative and cover all types of pig production. While the Canadian farm surveillance program does not currently monitor antibiotic use and resistance in beef cattle on farms, the Canadian beef industry has funded research to develop an on-farm data collection framework and would welcome the addition of farm-specific antibiotic use and resistance surveillance to the program, according to representatives from a Canadian beef industry group we interviewed. Similar to the Canadian farm surveillance program, U.S. producers voluntarily participate in periodic surveys to provide antibiotic use data at the farm level as part of the National Animal Health Monitoring System; however, no U.S. program conducts longitudinal studies to collect data across time on antibiotic use, as noted above.

Since we reported on Denmark’s actions to regulate antibiotic use in 2011, Denmark has developed a variety of policies focused on both producers’ and veterinarians’ use of antibiotics and has continued to monitor levels of antibiotic use, according to Danish government documents we reviewed and officials we interviewed. For example, officials from the Danish Veterinary and Food Administration explained that in 2013, they implemented a tax on the sale of antimicrobials, including antibiotics, and other drugs used in veterinary medicine. They told us that the initiative aims to strengthen veterinarians’ and producers’ incentive to choose alternatives to antimicrobial, including antibiotic, treatment or to choose the most responsible antimicrobial or antibiotic treatment—using antibiotics judiciously.
One Danish industry representative told us that it is yet to be determined whether the tax will be effective in reducing use, and that a high tax may lead to the illegal import of antibiotics. Officials from the Danish Veterinary and Food Administration also explained that other actions since 2011 include the introduction of legislation in 2014 on the treatment of swine herds. They stated that when veterinarians prescribe antibiotics to be administered through feed or water for respiratory or gastrointestinal infections, veterinarians must take samples from the herd for laboratory testing to verify the clinical diagnosis. Officials from the Danish Veterinary and Food Administration also indicated that Denmark has leveraged voluntary industry initiatives to manage the use of antibiotics, such as the cattle industry’s ban on the use of an antibiotic deemed critically important to human medicine. Denmark continues to collect farm-specific antibiotic use data through veterinary prescriptions and reports results along with resistance data annually via the Danish Integrated Antimicrobial Resistance Monitoring and Research Program, according to Danish government documents and officials. The most recent report states that antibiotic consumption was 47 percent lower in 2015 than in 1994 and decreased slightly from 2014 through 2015. As we previously reported, the lower levels of antibiotic use beginning after 1994 coincide with changes to government policies on growth promotion and veterinarians’ sales profits. Representatives of U.S. industry and veterinary organizations we interviewed questioned whether the actions taken by Denmark were successful. They said that while antibiotic use decreased, Denmark experienced issues with animal welfare, such as greater levels of disease, and increased the use of antibiotics for disease treatment.
Danish officials acknowledged the concerns for animal welfare associated with reductions in antibiotic use, but documents they provided stated that they have not seen any evidence of decreased animal welfare or increases in infection prevalence. Representatives from a U.S. food industry organization and a veterinary organization told us that actions taken by Denmark are not feasible in the United States because of differences between the countries. For example, the food production industries in Denmark are different in size and production volume when compared with those in the United States, according to representatives from the U.S. poultry industry. Since 2011, when we last reported on the EU’s efforts, the EU has developed an antibiotic-resistance action plan, reported reductions in sales of antibiotics, and made associations between antibiotic use and resistance in a new report. The EU action plan calls for various actions to strengthen judicious use, oversight, and surveillance of antibiotics. According to EU documents, steps taken to implement the plan include publishing guidelines for prudent use of antibiotics in veterinary medicine in 2015, enacting an animal health law in March 2016 that emphasizes prevention of disease rather than cure, and revising legislation for veterinary medicinal products and for medicated feed. In 2011, we reported on EU efforts to collect sales data; at that time, only nine European countries had submitted data. For the 2016 report on EU sales, 29 European countries had submitted data, and the data show that from 2011 to 2014 sales of antibiotics for use in animals fell by approximately 2 percent in 25 European countries. One difference between the United States and the EU is the classification of certain antimicrobials, including antibiotics, in sales reports; for example, in the EU, a group of medications called ionophores is not included in antimicrobial sales reports, but in the United States ionophores are included.
According to EU documents we reviewed, other actions since 2011 include activities to promote the collection of on-farm data, mainly through developing guidance and a pilot project. For example, a report from the European Medicines Agency, an agency within the EU, describes a trial conducted in 2014 to test a protocol and template for data collection on antimicrobial use in pigs. The report states that, based on results from the trial, the agency is preparing guidance, including a protocol and template, for member states on antibiotic use data collection. Additionally, the EU agency began a pilot study to collect antibiotic use data from 20 pig farms per country, but there was insufficient support among member states to continue the study, according to EU documents. Officials from the European Medicines Agency told us that the pilot project underscored the challenges in collecting farm-specific data, which include producer confidentiality and resource constraints. However, these officials also told us that they have limited access to farm-specific data from certain countries, including Denmark, the Netherlands, and Norway. The EU also took steps to compare surveillance data on antibiotic use and resistance in pathogens in humans, food, animals, and the environment. Specifically, in 2015 three EU agencies published the first integrated analysis report, which found a positive association between the use of certain antibiotics in food animals and resistance in humans. For example, the report cited that a positive association was observed between fluoroquinolone resistance in E. coli from humans and the total consumption of fluoroquinolones in animals. The report also explains that the agencies analyzed existing data from five separate monitoring systems, including sales data, to create the integrated report. In the United States, no such comparisons in surveillance reports have been made, in part because antibiotic use data are limited, as previously discussed.
The Netherlands and Denmark set antibiotic use reduction targets to help manage the use of antibiotics in food animals. According to government officials in both countries, the targets were a critical component of their strategies to reduce antibiotic use. The Netherlands and Denmark used reduction targets to measure the progress and impact of actions taken, and as existing targets are reached, these countries continue to set new targets. Similarly, the EU outlined its next steps for combating antibiotic resistance in a June 2016 document that calls for measurable goals that lead to reductions in infections in humans and animals and reductions in antibiotic use and resistance, among other things. U.S. federal officials and representatives of industry and veterinary organizations whom we interviewed questioned the usefulness of setting antibiotic use reduction targets in the United States, in part because targets may reduce animal welfare. The Netherlands’ policy on reducing antibiotic use, implemented through the public-private partnership discussed above, set the following reduction targets on antibiotics used in food animals: a 20-percent reduction in the sales of all antibiotics used in food animal production by 2011, a 50-percent reduction by 2013, and a 70-percent reduction by 2015. According to Dutch government officials, the first two targets were met and exceeded, but the 70-percent reduction by 2015 was not met; a 58-percent reduction was achieved from 2009 through 2015, according to government documents. Indicators used to measure the policy’s impact included antibiotic use and resistance levels in swine, mortality of swine, and veterinary cost per swine. According to a Dutch industry representative, to reduce the use of antibiotics, food animal industries optimized feed, housing, vaccines, and hygiene (see fig. 3).
In a June 2015 letter to parliament, government officials proposed the Netherlands’ approach to antibiotic resistance for 2015 through 2019, which includes taking additional action to achieve the 70-percent reduction goal and developing species-specific measures and reduction targets. Representatives from veterinary and industry organizations in the Netherlands told us that setting targets has proven to be effective but that there is concern that further reductions may pose some risk to animal health and welfare. For example, piglets may be at risk of premature death if certain antibiotics are prohibited or fewer antibiotics are used, according to Dutch veterinary and industry representatives. Representatives of veterinary and producer organizations we spoke with in the United States expressed similar concerns that reductions in antibiotic use may compromise animal health and welfare. In 2011, we reported on Denmark’s Yellow Card initiative, which set regulatory limits on antibiotic use and subjected pig producers exceeding limits to increased monitoring by government officials. The goal of the Yellow Card initiative was to achieve a 10-percent reduction in antibiotic use by 2013 from 2009 levels. According to government officials, the goal was met and exceeded. In 2016, Denmark expanded the Yellow Card initiative in pigs to focus more on antibiotics that are important for human health. It also developed an action plan to address methicillin-resistant Staphylococcus aureus (MRSA). Included in this plan is a new target of a 15-percent reduction in antibiotic use in swine by 2018. According to a representative from a Danish industry organization that represents producers across many food animal production sectors, producers who used antibiotics below the permitted levels began increasing their use to the maximum amounts allowed, and the new reduction target is a response to these increases.
The representative also told us that reduction targets are critical because they place the responsibility for reduction on the producer or farmer—the person who determines what farm practices are implemented—and that reducing antibiotic use and setting reduction targets must be done with the involvement of producers and veterinarians because the need for antibiotics varies across animals. For example, dairy cattle in different age groups use varying amounts of antibiotics, and setting one target may put the more susceptible age group at greater risk of infection or death, according to industry officials. In addition to the government targets, industry set its own targets to reduce the use of antibiotics. For example, the dairy and beef cattle industries set a target in 2014 to reduce use by 20 percent by 2018. Some U.S. officials and stakeholders question the benefits of antibiotic use targets and reductions in Denmark because, while antibiotic use was reduced, changes in resistance are less clear. Representatives from the U.S. swine industry told us that targets based on volume of antibiotics used do not take into account the potency of the antibiotics and that a mandatory reduction target could take antibiotic use in an unfavorable direction, such as a shift from veterinarians and producers using older drugs that are less potent to using drugs that are more potent, newer, or important to human health. In 2016, the EU Council published a statement of its conclusions on the next steps for its member states to combat antimicrobial resistance, including setting goals and targets. The statement calls for EU member states to have a one-health action plan by 2017 with measurable goals, qualitative or quantitative, that lead to reductions in infections in humans and animals, reductions in antimicrobial use and resistance, and prudent antimicrobial use.
The statement also calls for EU officials and member states to jointly develop a new EU action plan on antimicrobial resistance, indicators to assess the progress made on addressing antibiotic resistance, and indicators to assess progress in implementing the new action plan. EU officials told us that the EU is seeking to develop indicators that are easy to measure, are not too costly, and can be applied across its member states. Representatives of U.S. industry and veterinary organizations we interviewed stated that they would support measures and targets that focus on compliance with judicious use policies but not on reductions. CDC, APHIS, and FSIS officials told us they have not conducted on-farm investigations during outbreaks of foodborne illness, including those from antibiotic-resistant pathogens in animal products. Moreover, there is no consensus about when an on-farm investigation is needed. In 2014, recognizing the importance of the one-health concept (that the health of humans, animals, and the environment is interconnected), FSIS and APHIS created a memorandum of understanding and standard operating procedures for APHIS to investigate the root cause of foodborne illness outbreaks, given APHIS’s regular interactions with producers on farms and expertise in veterinary epidemiology. Under the memorandum of understanding, APHIS will conduct epidemiological investigations—which include examining the spread of disease by time, place, and animal as well as the mode of transmission and source of entry of disease—to determine the root cause of foodborne illness, which may be related to factors at the farm level, according to FSIS officials. Such investigations can be used to identify on-farm risk factors for disease occurrence or spread that might be controlled or mitigated by some intervention in current or future situations.
For multistate foodborne illness outbreaks, CDC is to identify the outbreak and lead the investigation by determining the DNA fingerprint of the bacteria that cause the outbreak as well as whether the bacteria are resistant to any antibiotics. According to CDC officials, with increasing use of whole genome sequencing—an advanced technique to fingerprint bacteria—federal agencies may prioritize foodborne outbreak investigations from antibiotic-resistant bacteria because they can identify these outbreaks sooner. CDC is to coordinate with state health departments and FSIS if a meat or poultry product is implicated (see fig. 4 for more information on the investigation process for multistate foodborne illness outbreaks). However, APHIS and FSIS did not conduct on-farm investigations in response to a multistate foodborne illness outbreak in 2015 involving an antibiotic-resistant strain of Salmonella in roaster pigs, the first attempt to use the 2014 memorandum of understanding. We determined that this is because stakeholders—industry, state agencies, and federal agencies—did not agree on whether on-farm investigations were needed as part of the 2015 outbreak investigation. Specifically, FSIS, the pork industry, and a state agriculture agency agreed that the slaughter plant was the source of the outbreak, negating the need for an on-farm investigation in their view, while state public health agencies wanted on-farm investigations to determine whether the pigs from the five farms supplying the slaughter plant were carriers of the outbreak strain and to identify the slaughter plants that received the pigs. CDC and APHIS deferred to FSIS on whether an on-farm investigation was needed. According to FSIS officials, the outbreak was attributed to conditions and practices at the slaughter plant, and the company implemented extensive corrective actions at the plant in response to the 2015 outbreak.
However, in July 2016, FSIS issued a public health alert because of concerns about illnesses from another outbreak linked to the Salmonella strain from the 2015 outbreak involving whole roaster pigs; the same slaughter plant was implicated in the 2016 outbreak. CDC officials told us that resistance for this specific strain of Salmonella has increased for a variety of drugs and that an on-farm investigation would have been useful in the original outbreak to explore whether the outbreak strain was present in pigs while they were still on the farm. FSIS and the Washington State Department of Health investigated the 2016 outbreak, but no on-farm investigations were conducted. The implicated slaughter plant recalled products and the outbreak ended, according to Washington state officials. As of October 2016, FSIS and APHIS were continuing discussions and making plans on how best to address the need to enhance understanding of this Salmonella strain in live pigs, especially how to identify on-farm interventions that may prevent future illness, according to FSIS officials. APHIS and FSIS officials told us that deciding when to conduct investigations on the farm is complex. First, the memorandum of understanding requires a producer’s consent to conduct an on-farm investigation. The memorandum of understanding outlines the need for a producer’s consent, in part, because neither APHIS nor FSIS has authority to access farms during foodborne illness outbreaks without the cooperation of the producer. APHIS will contact the producer or company involved to discuss the specifics of an investigation and to gain voluntary participation in any investigation. CDC has authority to take actions to prevent the interstate spread of communicable diseases, which, according to CDC legal officials, would include diseases originating on farms that may relate to foodborne illness from antibiotic-resistant pathogens.
Specifically, CDC has authority to take measures in the event of inadequate state or local control to prevent interstate communicable disease spread. To the extent that CDC would use this authority, CDC would generally work with APHIS and FSIS on issues relevant to their expertise, according to CDC officials. Second, deciding whether an outbreak is likely due to on-farm risk factors versus ones that are largely the result of in-plant problems is difficult because every outbreak is unique, according to FSIS officials. FSIS is less likely to request APHIS assistance if there is evidence of insanitary conditions—a condition in which edible meat and poultry products may become contaminated or unsafe—at the slaughter plant. However, the APHIS and FSIS memorandum of understanding does not include a decision-making framework to determine the need for an on-farm investigation; instead, it focuses on the procedures for and division of responsibilities in assessing the root cause of an outbreak. In contrast, APHIS uses a decision matrix when determining whether it will pursue epidemiological assessments on the farm during other types of investigations, such as investigations of animal disease outbreaks. According to FSIS Directive 8080.3, the objectives of foodborne illness investigations include identifying contributing factors to the foodborne illness, including outbreaks, and recommending actions or new policies to prevent future occurrences. The White House’s 2015 National Action Plan for Combating Antibiotic-Resistant Bacteria includes a 3-year milestone for USDA to begin coordinated investigations of emerging antibiotic-resistant pathogens on the farm and at slaughter plants under the one-health surveillance goal. The objective for this milestone emphasizes coordination among federal agencies, producers, and other stakeholders.
Coordination with the stakeholders who have the authority and who control access to the farm could help APHIS and FSIS fully investigate an outbreak. Specifically, CDC has authority to cooperate with and assist state and local governments with epidemiologic investigations and to take actions to prevent the spread of communicable diseases in the event of inadequate local control, including diseases originating on farms. In addition, involving stakeholders from industry and state departments of agriculture could increase the likelihood of obtaining producers’ consent to on-farm investigations. Developing a framework for deciding when on-farm investigations are warranted during outbreaks, in coordination with CDC and other stakeholders, would help APHIS and FSIS identify factors that contribute to or cause foodborne illness outbreaks, including those from antibiotic-resistant pathogens in animal products. Ensuring the continued effectiveness of antibiotics, particularly those used in human medicine, is critical because the rise of antibiotic-resistant bacteria poses a global threat to public health. Since 2011, HHS and USDA agencies have taken actions to increase veterinary oversight of medically important antibiotics used in the feed and water of food animals and to collect more detailed antibiotic sales, use, and resistance data. However, these actions do not address long-term and open-ended use of medically important antibiotics because some antibiotics do not have defined durations of use on their labels. Without developing a process to establish appropriate durations of use on labels of all medically important antibiotics, FDA will not know whether it is ensuring judicious use of medically important antibiotics in food animals. 
In addition, FDA officials told us the agency is developing a plan that outlines its key activities over the next 5 years to further support antimicrobial stewardship in veterinary settings, including steps to bring the use of medically important antibiotics administered in other dosage forms (not feed or water) under veterinary oversight. However, FDA was unable to provide us with this plan or provide specifics about the steps outlined in the plan because it was still under development. A published plan with steps is critical to guide FDA’s efforts in ensuring the judicious use of medically important antibiotics in food animals. HHS and USDA agencies continue to move forward with data collection activities including new initiatives, but data gaps remain. For more than a decade, we have reported on the need for HHS and USDA to work together to obtain more detailed farm-specific data on antibiotic use and resistance to address the risk of antibiotic resistance. In 2004, we recommended that HHS and USDA jointly develop and implement a plan for collecting data on antibiotic use in food animals that would support understanding the relationship between use and resistance, among other things. In 2011, we again recommended that HHS and USDA identify approaches for collecting detailed data on antibiotic use to assess the effectiveness of policies to curb antibiotic resistance, among other things. Although HHS and USDA agreed with these recommendations, they have not developed a joint plan to collect such data. We continue to believe that developing a joint plan for collecting data to further assess the relationship between antibiotic use and resistance at the farm level is essential and will help maximize resources and reduce the risk of duplicating efforts at a time when resources are constrained. 
To assess the impact of agency actions to manage the use of antibiotics in food animals, FSIS finalized a performance measure, but FDA and APHIS have not developed any such measures or related targets, which is not consistent with leading practices for federal strategic planning and performance measurement. Without developing performance measures and targets for their actions, FDA and APHIS cannot assess the impacts of their efforts to manage the use of antibiotics in food animals. In addition, although APHIS and FSIS established a memorandum of understanding in 2014 to assess the root cause of foodborne illness outbreaks, the memorandum does not include a decision-making framework for determining when on-farm investigations are needed. In the first use of the memorandum in a 2015 outbreak, there was no consensus among stakeholders on when such investigations were needed. Developing a framework for deciding when on-farm investigations are warranted during outbreaks, in coordination with CDC and other stakeholders, would help APHIS and FSIS identify factors that contribute to or cause foodborne illness outbreaks, including those from antibiotic-resistant pathogens in animal products. The Secretary of Health and Human Services should direct the Commissioner of FDA to take the following three actions:

Develop a process, which may include time frames, to establish appropriate durations of use on labels of all medically important antibiotics used in food animals.

Establish steps to increase veterinary oversight of medically important antibiotics administered in routes other than feed and water, such as injections and tablets.

Develop performance measures and targets for actions to manage the use of antibiotics, such as revising the veterinary feed directive and developing guidance documents on judicious use.
The Secretary of Agriculture should take the following two actions:

Direct the Administrator of APHIS to develop performance measures and targets for collecting farm-specific data on antibiotic use in food animals and antibiotic-resistant bacteria in food animals.

Direct the Administrator of APHIS and the Administrator of FSIS to work with the Director of CDC to develop a framework for deciding when on-farm investigations are warranted during outbreaks.

We provided a draft of this report to the Secretaries of Agriculture and Health and Human Services for review and comment. USDA and HHS provided written comments, reproduced in appendixes IV and V, respectively. USDA agreed with our recommendations. The department stated that it will develop performance measures and targets for collecting farm-specific data on antibiotic use in farm animals and antibiotic-resistant bacteria. USDA also agreed that a decision matrix to support multi-agency cooperation and to determine when on-farm investigations are warranted could be a useful addition, and noted that it has similar matrices that can serve as a model for antimicrobial resistance investigations. HHS neither agreed nor disagreed with our recommendations. USDA and HHS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix VI. This report (1) examines actions the U.S. Department of Health and Human Services (HHS) and U.S. Department of Agriculture (USDA) have taken since 2011 to manage the use of antibiotics in food animals and to assess the impact of their actions, (2) identifies actions that selected countries and the European Union (EU) have taken to manage the use of antibiotics in food animals, and (3) examines the extent to which HHS and USDA have conducted on-farm investigations of outbreaks of foodborne illness from antibiotic-resistant pathogens in animal products. To examine actions HHS and USDA have taken since 2011 to manage the use of antibiotics in food animals and to assess the impact of their actions, we reviewed relevant statutes and regulations, agencies’ plans and guidance, and stakeholders’ reports related to managing the use of antibiotics in food animals. We also reviewed USDA’s Office of Inspector General report on USDA’s actions to manage the use of antibiotics in food animals. We reviewed federal data reports on antibiotic sales, use, and resistance and asked officials about the quality of these data. Based on these steps, we determined that the data were sufficiently reliable for our purpose of illustrating actions taken to improve data collection. We compared information from federal agencies about actions taken to manage the use of antibiotics with federal standards for internal controls. We also reviewed public comments submitted to HHS regarding data collection on farms and changes to the Animal Drug User Fee Act. We interviewed federal officials and representatives of stakeholder organizations about federal actions taken to manage the use of antibiotics since 2011. These stakeholder organizations represented national food animal industries (National Chicken Council, National Turkey Federation, U.S.
Poultry and Egg Association, National Pork Producers Council, National Pork Board, and National Milk Producers Federation); veterinarians (American Association of Avian Pathologists, American Association of Bovine Practitioners, American Association of Swine Veterinarians, and American Veterinary Medical Association); the pharmaceutical industry (Animal Health Institute and Zoetis); consumer advocates (Keep Antibiotics Working, Natural Resources Defense Council, and Center for Science in the Public Interest); and others (Cattle Empire, American Feed Industry Association, Farm Foundation, and Pew Charitable Trusts). In addition, we interviewed representatives of several companies (producers and restaurants) that provide food products from animals raised without antibiotics to obtain a better understanding of production practices; the types of antibiotic use data available at the farm level; and perspectives on federal efforts to educate producers about antibiotics. The views of representatives we spoke with are not generalizable to other companies. In addition, we compared federal agencies’ actions with relevant goals outlined in the 2015 National Action Plan for Combating Antibiotic-Resistant Bacteria and interviewed representatives of stakeholder organizations to obtain views on agencies’ efforts taken to date. To examine agencies’ efforts to assess the impact of their actions, we reviewed HHS and USDA agencies’ strategic plans and identified any relevant goals, measures, and targets developed by federal agencies. We compared the measures and targets with agencies’ goals, National Action Plan goals and milestones, and leading practices for improving agency performance—specifically, practices identified in the GPRA Modernization Act of 2010 and our prior work on performance management.
To identify actions that selected countries and the EU have taken to manage the use of antibiotics in food animals since 2011, we reviewed documents, statutes, regulations, published studies, and surveillance reports regarding animal antibiotic use and resistance in Canada, Denmark, the Netherlands, and the EU. We selected these countries and this region because they have taken actions to mitigate antibiotic resistance by managing the use of antibiotics in food animals. Additionally, each country and region met at least one of the following criteria: (1) have food animal production practices similar to those of the United States (Canada); (2) have taken actions over the last 10 years to manage the use of antibiotics in food animals (the EU and Denmark); and (3) have novel practices to manage the use of antibiotics in food animals (the Netherlands). Moreover, Denmark and the Netherlands are EU members that have made changes beyond EU directives to manage the use of antibiotics in food animals. We interviewed government officials either in person or by phone from Health Canada, the Public Health Agency of Canada, Agriculture and Agri-Food Canada, the Canadian Food Inspection Agency, and the Office of the Auditor General of Canada; the Danish Veterinary and Food Administration; the Netherlands Ministry of Health, Welfare and Sport, the Netherlands Ministry of Economic Affairs, and the Netherlands Food and Consumer Product Safety Authority; and the European Union Directorate General for Health and Food Safety and the European Medicines Agency. Additionally, we visited a swine facility in the Netherlands to learn about production practices. We also interviewed representatives of the Netherlands Veterinary Medicines Authority, an independent agency that monitors the use of antibiotics in food animals, defines antibiotic use benchmarks, and reports on antibiotic use trends, among other things.
Finally, we interviewed representatives from veterinary and food animal industry organizations in the United States, Canada, Denmark, and the Netherlands; a U.S. organization that represents pharmaceutical companies that manufacture animal health products; as well as researchers in the field. We did not independently verify statements made about the EU practices or about the selected countries’ statutes and regulations. We reviewed the methodologies of the studies provided to us and found them reasonable for presenting examples of the selected countries’ and the EU’s efforts. To examine the extent to which HHS and USDA conducted on-farm investigations of outbreaks of foodborne illness from antibiotic-resistant pathogens in animal products, we reviewed HHS’s Centers for Disease Control and Prevention and USDA’s Animal and Plant Health Inspection Service (APHIS) and Food Safety and Inspection Service (FSIS) documentation, including directives, relevant to investigations of foodborne illness outbreaks, as well as the 2014 APHIS-FSIS memorandum of understanding and corresponding standard operating procedures to access farms for investigations during such outbreaks. We also reviewed documentation on a 2015 Salmonella outbreak that we identified as the only outbreak in which APHIS and FSIS used their memorandum of understanding. We interviewed federal and state officials (Washington and Montana) who investigated the 2015 outbreak. We also interviewed federal officials about the agencies’ authority to conduct on-farm investigations during foodborne illness outbreaks, including those involving antibiotic-resistant pathogens. We conducted this performance audit from August 2015 to March 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As of January 2017, medically important antimicrobials, including antibiotics, identified by the U.S. Department of Health and Human Services’ Food and Drug Administration (FDA) may only be used in the feed and water of food animals under the supervision of licensed veterinarians, according to FDA officials. Table 3 shows the antibiotics which changed dispensing status to require veterinary oversight. Some companies that sell meat and poultry products have taken steps to eliminate or reduce the use of antibiotics in food animals and label products coming from these animals with claims related to “no antibiotic use.” We interviewed representatives of six such companies—specifically, three producers and three restaurants. Representatives of four of the six companies—three producers and one restaurant—told us that consumer demand was one of the main reasons why their companies took action to reduce or eliminate the use of antibiotics in food animals, and representatives of the two other companies—both restaurants—stated that their companies took action for reasons related to human and animal health. As part of their efforts, companies implemented various on-farm practices, such as changing animal housing and using alternatives to antibiotics. For example, according to one company representative, the company provided larger housing to reduce crowding and promoted the use of probiotics to improve animal health. Representatives told us that their companies seek to ensure animal welfare and will use antibiotics to treat sick animals; however, these animals are removed from the product line and sold as conventional products. Representatives of these companies also shared challenges they face in raising animals and selling food animal products without antibiotics. 
For example, one producer told us there is a lack of antibiotic alternatives, and that drug companies do not always produce alternatives for all species of food animals. Restaurant representatives with whom we spoke said that a challenge in providing meat and poultry products from animals raised without antibiotics is that supply is limited; for example, companies only buy certain parts of the animal, but the supplier needs to sell all parts, which may limit the availability of suppliers willing to specialize in animals raised without antibiotics. Additionally, company representatives told us that it is more difficult for pork and beef producers than poultry producers to raise animals without antibiotics because the supply chain for poultry is vertically integrated—meaning that the same company generally owns the animal from birth through processing—but the supply chains for pork and beef are not. The companies we interviewed use various terms for their label claim related to no antibiotic use, such as “no antibiotics ever,” “no human antibiotics,” “raised without antibiotics,” and “raised without antibiotics important to human health.” To include these or similar claims on their product labels, companies must submit to the U.S. Department of Agriculture’s (USDA) Food Safety and Inspection Service (FSIS) detailed records from the production process that support the accuracy of the claim. All company representatives we interviewed told us their companies collect and report data related to the production practices for their products. For example, one company requires its suppliers to report quarterly on antimicrobials used and the reason for use. Another representative told us that the company collects numerous data points throughout the year, including all medicines used on the farm and feed history, to validate antibiotic use compliance by its suppliers with company policies. 
Company representatives we spoke with agreed that there is some confusion among consumers regarding products sold and marketed as being from animals raised without antibiotics. One company representative told us that consumers are unaware that antibiotic use claims refer to animal raising practices rather than the presence of antibiotics in food products and that all meat and poultry products are tested when presented for slaughter to ensure antibiotic residues are below allowable government limits. Under its National Residue Program, FSIS monitors meat, poultry, and processed egg products for chemical residues, including antibiotics. Additionally, the Food and Drug Administration requires, as a condition of use on the product label, withdrawal periods for antibiotics—that is, periods of time prior to slaughter when antibiotics cannot be used. Another company representative told us that there is confusion about the various marketing claims used by companies, such as “no hormones” and “no antibiotics.” FSIS officials told us that the agency is aware of the concerns industry and consumers may have regarding the various claims on products currently in the marketplace. In September 2016, FSIS released labeling guidance that provides information about claims frequently used on products, what they mean, and how they are evaluated for accuracy. In regard to label claims related to antibiotic use, the guidance describes the requirements needed to make a claim, provides examples of terms that may be used, and lists the documentation needed for approval of the claim. FSIS is also considering rulemaking to define and clarify the varied language used in the “raised without antibiotics” claim, according to officials. Companies may choose to further differentiate their products in the marketplace through participating in certification, audit, or other programs, such as USDA’s National Organic Program or Process Verified Program. 
Products may carry the USDA organic seal if companies and their products are certified by a USDA certifying agent to be in accordance with USDA organic regulations, which include not treating animals with antibiotics. Similarly, a company may use the process verified seal on their products if one or more of their agricultural processes, such as raising animals without antibiotics, is verified through an audit by USDA. Unlike the National Organic Program, under the Process Verified Program companies establish their own processes and standards. As a result, processes and standards may vary across the companies. In addition, the constraints on antibiotic use do not need to meet statutory or regulatory requirements, leading to differing standards. For example, one company may have a process verified program for no antibiotics ever, and another may have a program for no antibiotics important to human health. Representatives from five of the six companies we spoke with told us that for some products they participate in USDA’s Process Verified Program to verify antibiotic use claims. In addition to the contact named above, Mary Denigan-Macauley (Assistant Director), Nkenge Gibson, Cynthia Norris, Benjamin Sclafani, and Bryant Torres made significant contributions to the report. Also contributing to the report in their areas of expertise were Kevin Bray, Gary Brown, Robert Copeland, Michele Fejfar, Benjamin Licht, Sushil Sharma, and Sara Sullivan.
According to the World Health Organization, antibiotic resistance is one of the biggest threats to global health. CDC estimates antibiotic-resistant bacteria cause at least 2 million human illnesses in the United States each year, and there is strong evidence that some resistance in bacteria is caused by antibiotic use in food animals (cattle, poultry, and swine). HHS and USDA are primarily responsible for ensuring food safety, including safe use of antibiotics in food animals. In 2011, GAO reported on antibiotic use and recommended addressing gaps in data collection. GAO was asked to update this information. This report (1) examines actions HHS and USDA have taken to manage use of antibiotics in food animals and assess the impact of their actions, (2) identifies actions selected countries and the EU have taken to manage use of antibiotics in food animals, and (3) examines the extent to which HHS and USDA conducted on-farm investigations of foodborne illness outbreaks from antibiotic-resistant bacteria in animal products. GAO reviewed documents and interviewed officials and stakeholders. GAO selected three countries and the EU for review because they have taken actions to mitigate antibiotic resistance. Since 2011, when GAO last reported on this issue, the Department of Health and Human Services (HHS) has increased veterinary oversight of antibiotics and, with the Department of Agriculture (USDA), has made several improvements in collecting data on antibiotic use in food animals and resistance in bacteria. For example, HHS's Food and Drug Administration (FDA) issued a regulation and guidance for industry recommending changes to drug labels. However, oversight gaps still exist. For example, changes to drug labels do not address long-term and open-ended use of antibiotics for disease prevention because some antibiotics do not define duration of use on their labels. 
FDA officials told GAO they are seeking public comments on establishing durations of use on labels, but FDA has not clearly defined objectives for closing this gap, which is inconsistent with federal internal control standards. Without doing so, FDA will not know whether it is ensuring judicious use of antibiotics. Moreover, gaps in farm-specific data on antibiotic use and resistance that GAO found in 2011 remain. GAO continues to believe HHS and USDA need to implement a joint on-farm data collection plan as previously recommended. In addition, FDA and USDA's Animal and Plant Health Inspection Service (APHIS) do not have metrics to assess the impact of actions they have taken, which is inconsistent with leading practices for performance measurement. Without metrics, FDA and APHIS cannot assess the effects of actions taken to manage the use of antibiotics. Three selected countries and the European Union (EU), which GAO reviewed, have taken various actions to manage use of antibiotics in food animals, including strengthening oversight of veterinarians' and producers' use of antibiotics, collecting farm-specific data, and setting targets to reduce antibiotic use. The Netherlands has primarily relied on a public-private partnership, whereas Canada, Denmark, and the EU have relied on government policies and regulations to strengthen oversight and collect farm-specific data. Since taking these actions, the use or sales of antibiotics in food animals decreased and data collection improved, according to foreign officials and data reports GAO reviewed. Still, some U.S. federal officials and stakeholders believe that similar U.S. actions are not feasible because of production differences and other factors. HHS and USDA officials said they have not conducted on-farm investigations during foodborne illness outbreaks including those from antibiotic-resistant bacteria in animal products. 
In 2014, USDA agencies established a memorandum of understanding to assess the root cause of foodborne illness outbreaks. However, in 2015 in the agencies' first use of the memorandum, there was no consensus among stakeholders on whether to conduct foodborne illness investigations on farms and the memorandum does not include a framework to make this determination, similar to a decision matrix used in other investigations. According to a directive issued by USDA's Food Safety and Inspection Service, foodborne illness investigations shall include identifying contributing factors and recommending actions or new policies to prevent future occurrences. Developing a framework, in coordination with HHS's Centers for Disease Control and Prevention (CDC) and other stakeholders, would help USDA identify factors that contribute to or cause foodborne illness outbreaks, including those from antibiotic-resistant bacteria in animal products. GAO is making six recommendations, including that HHS address oversight gaps, HHS and USDA develop metrics for assessing progress in achieving goals, and USDA develop a framework with HHS to decide when to conduct on-farm investigations. USDA agreed and HHS neither agreed nor disagreed with GAO's recommendations.
You are an expert at summarizing long articles. Proceed to summarize the following text: According to FPS officials, the agency has required its guards to receive training on how to respond to an active-shooter scenario since 2010. However, as our 2013 report shows, FPS faces challenges providing active-shooter response training to all of its guards. We were unable to determine the extent to which FPS’s guards have received active-shooter response training, in part, because FPS lacks a comprehensive and reliable system for guard oversight (as discussed below). When we asked officials from 16 of the 31 contract guard companies we contacted if their guards had received training on how to respond during active-shooter incidents, responses varied. For example, of the 16 contract guard companies we interviewed about this topic: officials from eight guard companies stated that their guards had received active-shooter scenario training during FPS orientation; officials from five guard companies stated that FPS had not provided active-shooter scenario training to their guards during the FPS-provided orientation training; and officials from three guard companies stated that FPS had not provided active-shooter scenario training to their guards during the FPS-provided orientation training, but that the topic was covered at some other time. Without ensuring that all guards receive training on how to respond to active-shooter incidents, FPS has limited assurance that its guards are prepared for this threat. According to FPS officials, the agency provides guards with information on how they should respond during an active-shooter incident as part of the 8-hour FPS-provided orientation training. FPS officials were not able to specify how much time is devoted to this training, but said that it is a small portion of the 2-hour special situations training. 
According to FPS’s training documents, this training includes instructions on how to notify law enforcement personnel, secure the guard’s area of responsibility, and direct building occupants according to emergency plans as well as the appropriate use of force. As part of their 120 hours of FPS-required training, guards must receive 8 hours of screener training from FPS on how to use x-ray and magnetometer equipment. However, in our September 2013 report, we found that FPS has not provided required screener training to all guards. Screener training is important because many guards control access points at federal facilities and thus must be able to properly operate x-ray and magnetometer machines and understand their results. In 2009 and 2010, we reported that FPS had not provided screener training to 1,500 contract guards in one FPS region. In response to those reports, FPS stated that it planned to implement a program to train its inspectors to provide screener training to all its contract guards by September 2015. Information from guard companies we contacted indicate that guards who have never received this screener training continue to be deployed to federal facilities. An official at one contract guard company stated that 133 of its approximately 350 guards (about 38 percent) on three separate FPS contracts (awarded in 2009) have never received their initial x-ray and magnetometer training from FPS. The official stated that some of these guards are working at screening posts. Officials at another contract guard company in a different FPS region stated that, according to their records, 78 of 295 (about 26 percent) guards deployed under their contract have never received FPS’s x-ray and magnetometer training. These officials stated that FPS’s regional officials were informed of the problem, but allowed guards to continue to work under this contract, despite not having completed required training. 
Because FPS is responsible for this training, according to guard company officials, no action was taken against the company. Consequently, some guards deployed to federal facilities may be using x-ray and magnetometer equipment that they are not qualified to use, thus raising questions about the ability of some guards to execute a primary responsibility to properly screen access control points at federal facilities. In our September 2013 report, we found that FPS continues to lack effective management controls to ensure that guards have met training and certification requirements. For example, although FPS agreed with our 2012 recommendations to develop a comprehensive and reliable system to oversee contract guards, it still has not established such a system. Without a comprehensive guard management system, FPS has no independent means of ensuring that its contract guard companies have met contract requirements, such as providing qualified guards to federal facilities. Instead, FPS requires its guard companies to maintain files containing guard-training and certification information. The companies are then required to provide FPS with this information each month. In our September 2013 report, we found that 23 percent of the 276 guard files we reviewed (maintained by 11 of the 31 guard companies we interviewed) lacked required training and certification documentation. As shown in table 1, some guard files lacked documentation of basic training, semi-annual firearms qualifications, screener training, the 40-hour refresher training (required every 3 years), and CPR certification. FPS has also identified guard files that did not contain required documentation. FPS’s primary tool for ensuring that guard companies comply with contractual requirements for guards’ training, certifications, and qualifications is to review guard companies’ guard files each month. From March 2012 through March 2013, FPS reviewed more than 23,000 guard files. 
It found that a majority of the guard files had the required documentation but more than 800 (about 3 percent) did not. FPS’s file reviews for that period showed files missing, for example, documentation for screener training, initial weapons training, CPR certification, and firearms qualifications. As our September 2013 report explains, however, FPS’s process for conducting monthly file reviews does not include requirements for reviewing and verifying the results, and we identified instances in which FPS’s monthly review results did not accurately reflect the contents of guard files. For instance, FPS’s review indicated that required documentation was present for some guard files, but for some of those files we were not able to find (for example) documentation of training and certification, such as initial weapons training, DHS orientation, and pre-employment drug screenings. As a result of the lack of management controls, FPS is not able to provide reasonable assurance that guards have met training and certification requirements. We found in 2012 that FPS did not assess risks at the 9,600 facilities under the control and custody of GSA in a manner consistent with federal standards, although federal agencies paid FPS millions of dollars to assess risk at their facilities. Our March 2014 report examining risk assessments at federal facilities found that this is still a challenge for FPS and several other federal agencies. Federal standards such as the National Infrastructure Protection Plan’s (NIPP) risk management framework and ISC’s RMP call for a risk assessment to include a threat, vulnerability, and consequence assessment. Risk assessments help decision-makers identify and evaluate security risk and implement protective measures to mitigate risk. Moreover, risk assessments play a critical role in helping agencies tailor protective measures to reflect their facilities’ unique circumstances and enable them to allocate security resources effectively. 
Instead of conducting risk assessments, FPS uses an interim vulnerability assessment tool, referred to as the Modified Infrastructure Survey Tool (MIST), with which it assesses federal facilities until it develops a longer-term solution. According to FPS, MIST allows it to resume assessing federal facilities’ vulnerabilities and recommend countermeasures—something FPS has not done consistently for several years. MIST has some limitations. Most notably, it does not assess consequence (the level, duration, and nature of potential loss resulting from an undesirable event). Three of the four risk assessment experts we spoke with generally agreed that a tool that does not estimate consequences does not allow an agency to fully assess risks. FPS officials stated that the agency intends to eventually incorporate consequence into its risk assessment methodology and is exploring ways to do so. MIST was also not designed to compare risks across federal facilities. Consequently, FPS does not have the ability to comprehensively manage risk across its portfolio of 9,600 facilities and recommend countermeasures to federal tenant agencies. As of April 2014, according to an FPS official, FPS had used MIST to complete vulnerability assessments of approximately 1,200 federal facilities in fiscal year 2014 and had presented approximately 985 of them to the facility security committees. The remaining 215 assessments were under review by FPS. FPS has begun several initiatives that, once fully implemented, should enhance its ability to protect the more than 1 million federal employees and members of the public who visit federal facilities each year. Since fiscal year 2010, we have made 31 recommendations to help FPS address its challenges with risk management, oversight of its contract guard workforce, and its fee-based funding structure. DHS and FPS have generally agreed with these recommendations. 
As of May 2014, as shown in table 2, FPS had implemented 6 recommendations, and was in the process of addressing 10 others, although none of the 10 have been fully implemented. The remaining 15 have not been implemented. According to FPS officials, the agency has faced difficulty in implementing many of our recommendations because of changes in its leadership, organization, funding, and staffing levels. For further information on this testimony, please contact Mark Goldstein at (202) 512-2834 or by email at [email protected]. Individuals making key contributions to this testimony include Tammy Conquest, Assistant Director; Geoff Hamilton; Jennifer DuBord; and SaraAnn Moessbauer. Federal Facility Security: Additional Actions Needed to Help Agencies Comply with Risk Assessment Methodology Standards. GAO-14-86. Washington, D.C.: March 5, 2014. Homeland Security: Federal Protective Service Continues to Face Challenges with Contract Guards and Risk Assessments at Federal Facilities. GAO-14-235T. Washington, D.C.: December 17, 2013. Homeland Security: Challenges Associated with Federal Protective Service’s Contract Guards and Risk Assessments at Federal Facilities. GAO-14-128T. Washington, D.C.: October 30, 2013. Federal Protective Service: Challenges with Oversight of Contract Guard Program Still Exist, and Additional Management Controls Are Needed. GAO-13-694. Washington, D.C.: September 17, 2013. Facility Security: Greater Outreach by DHS on Standards and Management Practices Could Benefit Federal Agencies. GAO-13-222. Washington, D.C.: January 24, 2013. Federal Protective Service: Actions Needed to Assess Risk and Better Manage Contract Guards at Federal Facilities. GAO-12-739. Washington, D.C.: August 10, 2012. Federal Protective Service: Actions Needed to Resolve Delays and Inadequate Oversight Issues with FPS’s Risk Assessment and Management Program. GAO-11-705R. Washington, D.C.: July 15, 2011. 
Federal Protective Service: Progress Made but Improved Schedule and Cost Estimate Needed to Complete Transition. GAO-11-554. Washington, D.C.: July 15, 2011. Homeland Security: Protecting Federal Facilities Remains a Challenge for the Department of Homeland Security’s Federal Protective Service. GAO-11-813T. Washington, D.C.: July 13, 2011. Federal Facility Security: Staffing Approaches Used by Selected Agencies. GAO-11-601. Washington, D.C.: June 30, 2011. Budget Issues: Better Fee Design Would Improve Federal Protective Service’s and Federal Agencies’ Planning and Budgeting for Security, GAO-11-492. Washington, D.C.: May 20, 2011. Homeland Security: Addressing Weaknesses with Facility Security Committees Would Enhance Protection of Federal Facilities, GAO-10-901. Washington, D.C.: August 5, 2010. Homeland Security: Preliminary Observations on the Federal Protective Service’s Workforce Analysis and Planning Efforts. GAO-10-802R. Washington, D.C.: June 14, 2010. Homeland Security: Federal Protective Service’s Use of Contract Guards Requires Reassessment and More Oversight. GAO-10-614T. Washington, D.C.: April 14, 2010. Homeland Security: Federal Protective Service’s Contract Guard Program Requires More Oversight and Reassessment of Use of Contract Guards. GAO-10-341. Washington, D.C.: April 13, 2010. Homeland Security: Ongoing Challenges Impact the Federal Protective Service’s Ability to Protect Federal Facilities. GAO-10-506T. Washington, D.C.: March 16, 2010. Homeland Security: Greater Attention to Key Practices Would Improve the Federal Protective Service’s Approach to Facility Protection. GAO-10-142. Washington, D.C.: October 23, 2009. Homeland Security: Preliminary Results Show Federal Protective Service’s Ability to Protect Federal Facilities Is Hampered by Weaknesses in Its Contract Security Guard Program, GAO-09-859T. Washington, D.C.: July 8, 2009. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Recent incidents at federal facilities demonstrate their continued vulnerability to attacks or other acts of violence. As part of the Department of Homeland Security (DHS), FPS is responsible for protecting federal employees and visitors in approximately 9,600 federal facilities under the control and custody of the General Services Administration (GSA). To help accomplish its mission, FPS conducts facility security assessments and has approximately 13,500 contract security guards deployed to federal facilities. FPS charges fees for its security services to federal tenant agencies. This testimony discusses challenges FPS faces in (1) ensuring contract security guards deployed to federal facilities are properly trained and certified and (2) conducting risk assessments at federal facilities. It is based on GAO reports issued from 2009 through 2014 on FPS's contract guard and risk assessment programs. To perform this work, GAO reviewed FPS and guard company data and interviewed officials about oversight of guards. GAO compared FPS's and eight federal agencies' risk assessment methodologies to ISC standards that federal agencies must use. GAO selected these agencies based on their missions and types of facilities. GAO also interviewed agency officials and four risk management experts about risk assessments. The Federal Protective Service continues to face challenges ensuring that contract guards have been properly trained and certified before being deployed to federal facilities around the country. In September 2013, for example, GAO reported that providing training for active shooter scenarios and screening access to federal facilities poses a challenge for FPS. According to officials at five guard companies, their contract guards have not received training on how to respond during incidents involving an active shooter. 
Without ensuring that all guards receive training on how to respond to active-shooter incidents at federal facilities, FPS has limited assurance that its guards are prepared for this threat. Similarly, an official from one of FPS's contract guard companies stated that 133 (about 38 percent) of its approximately 350 guards have never received screener training. As a result, guards deployed to federal facilities may be using x-ray and magnetometer equipment that they are not qualified to use, raising questions about their ability to fulfill a primary responsibility of screening access control points at federal facilities. GAO was unable to determine the extent to which FPS's guards have received active-shooter response and screener training, in part, because FPS lacks a comprehensive and reliable system for guard oversight. GAO also found that FPS continues to lack effective management controls to ensure its guards have met its training and certification requirements. For instance, although FPS agreed with GAO's 2012 recommendations that it develop a comprehensive and reliable system for managing information on guards' training, certifications, and qualifications, it still does not have such a system. Additionally, 23 percent of the 276 contract guard files GAO reviewed did not have required training and certification documentation. For example, some files were missing items such as documentation of screener training, CPR certifications, and firearms qualifications. Assessing risk at federal facilities remains a challenge for FPS. GAO found in 2012 that federal agencies pay FPS millions of dollars to assess risk at their facilities, but FPS is not assessing risks in a manner consistent with federal standards. In March 2014, GAO found that this is still a challenge for FPS and several other agencies. 
The Interagency Security Committee's (ISC) Risk Management Process for Federal Facilities standard requires federal agencies to develop risk assessment methodologies that, among other things, assess the threat, vulnerability, and consequence to undesirable events. Risk assessments help decision-makers identify and evaluate security risks and implement protective measures. Instead of conducting risk assessments, FPS uses an interim vulnerability assessment tool, referred to as the Modified Infrastructure Survey Tool (MIST) to assess federal facilities until it develops a longer-term solution. However, MIST does not assess consequence (the level, duration, and nature of potential loss resulting from an undesirable event). Three of the four risk assessment experts GAO spoke with generally agreed that a tool that does not estimate consequences does not allow an agency to fully assess risks. Thus, FPS has limited knowledge of the risks facing about 9,600 federal facilities around the country. FPS officials stated that consequence information in MIST was not part of the original design, but they are exploring ways to incorporate it. Since fiscal year 2010, GAO has made 31 recommendations to improve FPS's contract guard and risk assessment processes, of which 6 were implemented, 10 are in process, and 15 have not been implemented.
Located in FAA’s Office of Aviation Safety, the Aircraft Certification Service (Aircraft Certification) and Flight Standards Service (Flight Standards) issue certificates and approvals for the operators and aviation products used in the national airspace system based on standards set forth in federal aviation regulations. FAA inspectors and engineers working in Aircraft Certification and Flight Standards interpret and implement the regulations governing certificates and approvals via FAA policies and guidance, such as orders, notices, and advisory circulars. Aircraft Certification’s approximately 950 engineers and inspectors in 38 field offices issue approvals to the designers and manufacturers of aircraft and aircraft engines, propellers, parts, and equipment, including the avionics and other equipment required for the Next Generation Air Transportation System (NextGen)—a federal effort to transform the U.S. national airspace system from a ground-based system of air traffic control to a satellite-based system of air traffic management. These approvals are issued in three areas: (1) design—including type certificates for new aircraft, engine, or propeller designs, amended type certificates (issued only to the type certificate holder) for derivative models, and supplemental type certificates for major changes to existing designs by either the type certificate holder or someone other than the original type certificate holder; (2) production—including production certificates, which certify a manufacturer’s ability to build an aircraft, engine, or propeller in accordance with an FAA-approved design, and parts manufacturer approvals for spare and replacement parts; and (3) flight approval—original airworthiness certificates and approvals for newly manufactured aircraft, engines, propellers, and parts.
Aircraft Certification, along with Flight Standards, provides a safety performance management system intended to assure the continued operational safety of all aircraft operating in the national airspace system and of U.S.-built aircraft operating anywhere in the world. Aircraft Certification is also responsible for the appointment and oversight of designees and delegated organizations that play a critical role in acting on behalf of FAA to perform many certification and approval activities, such as the issuance of design and airworthiness approvals for aircraft parts. Since 2005, Aircraft Certification has used project sequencing to prioritize certification submissions on the basis of available resources. Projects are evaluated against several criteria, including safety attributes and their impact on the air transportation system. In fiscal year 2009, Aircraft Certification issued 4,248 design approvals, 2,971 production approvals, and 508 airworthiness certificates. Figure 1 shows the Aircraft Certification approvals issued for fiscal years 2005 through 2009. As of June 2010, according to FAA, Aircraft Certification had a backlog of 47 projects. (According to a senior FAA official, the number of approvals decreased from fiscal year 2006 to fiscal year 2007 because Aircraft Certification implemented a new data collection system in fiscal year 2007 that improved data collection definitions and processes.) Figure 2 contains key information about Aircraft Certification’s organization, and figure 3 indicates key phases in Aircraft Certification’s product approvals process. Flight Standards’ nearly 4,000 inspectors issue certificates allowing individuals and entities to operate in the national airspace system. Flight Standards also issues approvals for programs, such as training and minimum equipment lists. 
Flight Standards field office managers in over 100 field offices use the Certification Services Oversight Process to initiate certification projects within their offices. According to FAA, the field offices are also assisted by a headquarters-based office that provides experts on specific aircraft and airlines. Accepted projects are processed on a first-in, first-out basis within each office once FAA determines that it has the resources to oversee an additional new certificate holder. Flight Standards issued 599 air operator and air agency certificates in fiscal year 2009. These include certificates to commercial air carriers under 14 C.F.R. part 121, operators of smaller commercial aircraft under 14 C.F.R. part 135, repair stations under 14 C.F.R. part 145, and pilot schools and training centers under 14 C.F.R. parts 141 and 142, respectively. According to its Director, Flight Standards also issues over 6,000 approvals daily. Figure 4 shows the number of air operator and air agency certificates issued by Flight Standards in fiscal years 2005 through 2009. FAA officials noted that certification projects within and among the categories of air operators and air agencies require various amounts of FAA resources. For example, FAA indicated that an agricultural operator certification requires fewer FAA resources than a repair station certification. Additionally, certifications of small commercial aircraft operations that are single pilot, single plane require a different set of resources than operations that are dual pilot and/or fly more aircraft. As of July 2010, Flight Standards had 1,142 certifications in process and a backlog of 489 applications. According to an FAA official, Flight Standards has more wait-listed applications than Aircraft Certification because it receives numerous requests for certificates, and its certifications are substantially different in nature from those issued by Aircraft Certification. 
Flight Standards is also responsible for assuring the continued operational safety of the national airspace system by overseeing certificate holders, monitoring (along with Aircraft Certification) operators’ and air agencies’ operation and maintenance of aircraft, and overseeing designees and delegated organizations. Flight Standards inspectors were tasked with overseeing 13,089 air operators and air agencies, such as repair stations, as of March 2010. Unless assigned to a large commercial air carrier issued a certificate under part 121, a Flight Standards inspector is typically responsible for overseeing several entities that often perform different or several functions within the system—including transporting passengers, repairing aircraft, and training pilots. Figures 5 and 6 contain key information about Flight Standards’ organization and certification process for air operators and air agencies. Studies we reviewed and aviation stakeholders and experts we spoke with indicated that variation in FAA’s interpretation of standards for certification and approval decisions is a long-standing issue that affects both Aircraft Certification and Flight Standards, but the extent of the problem has not been quantified in the industry as a whole. Inconsistent or variant FAA interpretations have been noted in studies published over the last 14 years. A 1996 study by Booz Allen & Hamilton, conducted at the request of the FAA Administrator to assess challenges to the agency’s regulatory and certification practices, reported that, for air carriers and other operators, the agency’s regulations are often ambiguous; subject to variation in interpretation by FAA inspectors, supervisors, and policy managers; and in need of simplification and consistent implementation. 
A 1999 task force, convened at the request of the FAA Administrator to assess FAA’s certification process, found that the agency’s requirements for the various approvals—such as type certificates and supplemental type certificates—varied substantially because of differences in standards and inconsistent application of those standards by different FAA field offices. While FAA has put measures in place since these two reports appeared, a 2008 Independent Review Team, which was commissioned by the Secretary of Transportation to assess FAA’s safety culture and approach to safety management, found that a wide degree of variation in “regulatory ideology” among FAA staff continues to create the likelihood of wide variation in decisions within and among field offices. Industry officials and experts representing a broad range of large and small aviation businesses told us that variation in interpretation and subsequent decisions occurs in both Aircraft Certification and Flight Standards, but we found no evidence that quantified the extent of the problem in the industry as a whole. Specifically, 10 of the 13 industry group and individual company representatives we interviewed said that they or members of their organization experienced variation in FAA’s certification and approval decisions on similar submissions; the remaining 3 industry representatives did not raise variation in interpretations and decisions as an issue. For example, an official from one air carrier told us that variation in decisions occurs regularly when obtaining approvals from Flight Standards district offices, especially when dealing with inspectors who are newly hired or replacing a previous inspector. He explained that new inspectors often task air carriers to make changes to previously obtained minimum equipment lists or conformity approvals for an aircraft. 
The official further noted that inspector assignments often change for reasons such as transfers, promotions, or retirement and that four different principal operations inspectors were assigned to his company during the past 18 months. Experts on our panel and most industry officials we interviewed indicated that, though variation in decisions is a long-standing, widespread problem, it has rarely led to serious certification and approval process problems. Experts on our panel generally noted that serious problems with the certification and approval processes occur less than 10 percent of the time. However, when we asked them to rank certification and approval process problems we summarized from their discussion, they chose inconsistent interpretation of regulations, which can lead to variation in decisions, as the most significant problem for Flight Standards and as the second most significant problem for Aircraft Certification. Panelists’ concerns about variation in decisions included instances in which approvals are reevaluated and sometimes revised or revoked in FAA jurisdictions other than those in which they were originally granted. Industry officials we interviewed, though most had experienced it, did not mention the frequency with which variation in decisions occurred. However, 8 of the 13 said that their experiences with FAA’s certification and approval processes were generally free of problems compared with 3 who said they regularly experienced problems with the process. FAA’s Deputy Associate Administrator for Aviation Safety and union officials representing FAA inspectors and engineers acknowledged that variation in certification and approval decisions occurs. The Deputy Associate Administrator noted that variation in interpretation and certification and approval decisions occurs in both Aircraft Certification and Flight Standards. 
He acknowledged that a nonstandardized process for approvals exists and has been a challenge for, and a long-term criticism of, the agency. Furthermore, he explained that efforts were being made to address the issue, including the establishment of (1) an Office of Aviation Safety quality management system (QMS) to standardize processes across Aircraft Certification and Flight Standards, (2) a process for industry to dispute FAA decisions, and (3) standardization offices within Aircraft Certification directorates. The first two efforts are discussed in greater detail later in this report. Variation in FAA’s interpretation of standards and certification and approval decisions occurs as a result of factors related to performance-based regulations and the use of professional judgment by FAA inspectors and engineers, according to industry stakeholders. FAA uses performance-based regulations, which identify a desired outcome and are flexible about how the outcome is achieved. For example, performance-based regulations on aircraft braking would establish minimum braking distances for aircraft but would not call for a particular material in the brake pads or a specific braking system design. According to officials in FAA’s rulemaking office, about 20 percent of FAA’s regulations are performance-based. Performance-based regulations, which are issued governmentwide, provide a number of benefits, according to literature on the regulatory process. By focusing on outcomes, for example, performance-based regulations give firms flexibility in achieving the stated level of performance; such regulations can accommodate technological change in ways that prescriptive regulations that focus on a specific technology generally cannot.
For those certifications and approvals that relate to performance-based regulations, variation in decisions is a consequence of such regulations, according to one air carrier, since performance-based regulations allow the applicant multiple avenues to comply with regulations and broader discretion by FAA staff in making certification and approval decisions. According to senior FAA officials, performance-based regulations allow innovation and flexibility while setting a specific safety standard. The officials added that the benefits of performance-based regulations outweigh the potential for erroneous interpretation by an individual inspector or engineer. While agreeing with this statement, a panel member pointed out that the potential for erroneous interpretation also entails a risk of inconsistent decisions. In addition, FAA oversees a large, diverse industry, and its certification and approval processes rely, in part, on FAA staff’s exercise of professional judgment in the unique situations they encounter. In the opinion of senior FAA officials, some differences among inspectors may be due to situation-specific factors that industry stakeholders may not be aware of. According to officials from Flight Standards, because differences may exist among regions and district offices, operators changing locations may encounter these differences. Many industry stakeholders and experts stated that FAA’s certification and approval processes contribute positively to the safety of the national airspace system. For example, industry stakeholders who participated in our expert panel ranked the office’s safety culture and record as the greatest strength of Flight Standards’ certification and approval processes and the third greatest strength of Aircraft Certification’s processes.
Industry stakeholders and experts also noted that the certification and approval processes work well most of the time because of FAA’s long-standing collaboration with industry, flexibility within the processes, and committed, competent FAA staff. In most instances, stakeholders and experts said, when industry seeks certifications and approvals, its experiences with FAA’s processes are positive. For example, two aviation manufacturers and an industry trade association with over 400,000 members noted that most of their experiences or their members’ experiences were positive. Seventeen of 19 panelists indicated positive or very positive experiences with Aircraft Certification, and 9 of 19 panelists indicated positive experiences with Flight Standards. Panelists ranked FAA’s collaboration with applicants highly—as the second greatest strength of both Aircraft Certification and Flight Standards. In addition, representatives of two trade associations representing over 190 aviation companies said that the processes provide flexibility for a large, diverse industry. Additionally, panelists ranked FAA inspectors’ and engineers’ expertise as the greatest strength of Aircraft Certification and the third greatest strength of Flight Standards, while officials from two industry trade groups cited the inspectors’ and engineers’ competence and high level of expertise. Industry stakeholders and experts noted that negative certification and approval experiences, although infrequent, can result in costly delays for them, which can disproportionately affect smaller operators. While industry stakeholders indicated that negative experiences occur in dealings with both Aircraft Certification and Flight Standards, experts on our panel noted that negative experiences are more likely to occur with Flight Standards than with Aircraft Certification.
For example, three experts noted that, overall, industry’s experience in obtaining certifications and approvals from Flight Standards has been negative or very negative, while no experts thought industry’s experience with Aircraft Certification was negative. The panelists indicated that negative experiences occur during the processing of certifications and approvals and as applicants wait for FAA resources to become available to commence their certification or approval projects. For example, an aviation industry representative reported that his company incurred a delay of over 5 years and millions of dollars in costs when it attempted to obtain approvals from Aircraft Certification and Flight Standards field offices. Another industry representative indicated that the company abandoned an effort to obtain an operating certification after spending $1.2 million and never receiving an explanation from FAA as to why the company’s application was stalled. One panelist indicated that the negative experiences focus more on administrative aspects of the certification and approval processes and not on safety-related items. The processing of original certifications and approvals in Aircraft Certification and Flight Standards involves progressing through a schedule of steps or phases. Responsibilities of both FAA and the applicant are delineated. However, even with this framework in place, industry stakeholders noted that the time it takes to obtain certifications and approvals can differ from one FAA field office to another because of differences in office resources and expertise. In some cases, delays may be avoided when FAA directs the applicant to apply at a different field office. Nevertheless, applicants who must apply to offices with fewer resources can experience costly delays in obtaining certifications or approvals. Delays also occur when FAA wait-lists certification submissions because it does not have the resources to begin work on them.
Aircraft Certification meets weekly to review certification project submissions. If it determines that a submission is to be wait-listed, the applicant is sent a 90-day delay letter and if, after the initial 90 days, the submission is still wait-listed, the applicant is sent another letter. Additionally, Aircraft Certification staff and managers periodically contact applicants to advise them of the status of their submissions. Flight Standards also notifies applicants when their certification submissions are wait-listed, and Flight Standards staff are encouraged to communicate with applicants regularly about the status of their submissions. However, according to an FAA notice, staff are advised not to provide an estimate of when an applicant’s submission might be processed. While Aircraft Certification tracks in a national database how long individual submissions are wait-listed, Flight Standards does not. Without data on how long submissions are wait-listed, Flight Standards cannot assess the extent of wait-listing delays or reallocate resources to better meet demand. Further, industry stakeholders face uncertainty with respect to any plans or investments that depend on obtaining a certificate in a timely manner. Industry stakeholders have also raised concerns about the effects of inefficiencies in the certification and approval processes on the implementation of NextGen. As NextGen progresses, operators will need to install additional equipment on their aircraft to take full advantage of NextGen capabilities, and FAA’s certification and approval workload is likely to increase substantially. According to our October 2009 testimony on NextGen, airlines and manufacturers said that FAA’s certification processes take too long and impose costs on industry that discourage them from investing in NextGen equipment. 
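The wait-list tracking gap described above (Flight Standards does not record how long submissions sit on the wait list, why, or what eventually unblocked them) could be closed with records as minimal as the following hypothetical sketch. The `WaitlistEntry` class and all field names and values are illustrative assumptions, not an actual FAA system:

```python
# Hypothetical sketch of the minimal per-submission record the report's
# recommendation implies; field names and sample values are invented
# for illustration only.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class WaitlistEntry:
    application_id: str
    wait_listed_on: date
    reason: str                               # e.g., "no inspector available"
    initiated_on: Optional[date] = None       # set when work begins
    initiation_factor: Optional[str] = None   # what freed the resources

    def days_wait_listed(self, as_of: date) -> int:
        """Elapsed wait in days; uses as_of while the entry is still open."""
        end = self.initiated_on or as_of
        return (end - self.wait_listed_on).days

entry = WaitlistEntry("135-2010-042", date(2010, 3, 1), "no inspector available")
entry.initiated_on = date(2010, 7, 15)
entry.initiation_factor = "inspector reassigned from completed project"
```

Even this small amount of data would let an office aggregate wait times by reason and location, which is what the report argues Flight Standards needs in order to see where delays concentrate and reallocate resources accordingly.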
We reported that this inefficiency in FAA’s processes constitutes a challenge to delivering NextGen benefits to stakeholders and that streamlining FAA’s processes will be essential for the timely implementation of NextGen. FAA is working to address the certification issues that may impede the adoption and acceleration of NextGen capabilities. Flight Standards has identified NextGen-dedicated staff in each of its regional offices to support the review and approval of NextGen capabilities within each region. Aircraft Certification has created a team of experts from different offices to coordinate NextGen approvals and identify specialists in Aircraft Certification offices with significant NextGen activity. FAA also plans a number of other actions to facilitate the certification and approval of NextGen-related technology, including new procedures and criteria for prioritizing certifications, updating policy and guidance, developing additional communication mechanisms, and developing training for inspectors and engineers. Since many of these actions have either just been implemented or have not yet been completed, it is too early to tell whether they will increase the efficiency of FAA’s certification and approval processes and reduce unanticipated delays and costs for the industry. Industry stakeholders also noted that the efficiency of the certification and approval processes was hampered by a lack of sufficient staff resources to carry out certifications and approvals and a lack of effective communication mechanisms for explaining the intent of the regulations to both FAA staff and industry. The stakeholders said that these inefficiencies have resulted in costly delays for them. Stakeholders and experts said that, at some FAA offices, delays in obtaining certifications and approvals were due to heavy staff workloads, a lack of staff, or a lack of staff with the appropriate expertise. 
Staff and managers at one FAA field office told us that in the past a lack of staff had contributed to delays in completing certifications. The relative priority of certifications and approvals within FAA’s overall workload also affects the availability of staff to process certifications and approvals. According to FAA, its highest priority is overseeing the continued operational safety of the people and products already operating within the national airspace system, but the same staff who provide this oversight are also assigned the lower-priority task of processing new certifications and approvals. Additionally, Flight Standards field staff we contacted said that the system under which their pay grades are established and maintained provides a disincentive for inspectors to perform certification work because the system allocates no credit toward retention of their pay grades for doing certification work. Flight Standards headquarters officials pointed out that there is an incentive for field office inspectors to perform initial certifications because, once certificated, the new entities add points to an inspector’s complexity calculation, which is used to determine his or her pay grade. FAA has addressed staff resource issues by increasing the number of inspectors and engineers. Over the past 3 years, FAA has steadily increased its hiring of Aircraft Certification engineers and Flight Standards inspectors, thereby reducing the risk of certification delays. According to agency data, FAA’s hiring efforts since fiscal year 2007 have resulted in an 8.8 percent increase in the number of Aircraft Certification engineers and a 9.4 percent increase in the number of Flight Standards inspectors on board. FAA hired 106 engineers in Aircraft Certification and 696 inspectors in Flight Standards from the beginning of fiscal year 2007 to March 15, 2010. FAA also hired 89 inspectors in Aircraft Certification from fiscal year 2007 through August 2010.
In addition, Flight Standards headquarters staff are available to assist field staff with the certification of part 121 air carriers—an average of 35 of these staff were available for this assistance annually from 2005 through 2009, and they helped with 16 certification projects. Furthermore, FAA delegates many certification activities to individuals and organizations (called designees) to better leverage its resources. As we previously reported, FAA’s designees perform more than 90 percent of FAA’s certification activities. We have reported that designees generally conduct routine certification functions, such as approvals of aircraft technologies that the agency and designees already have experience with, allowing FAA staff to focus on new and complex aircraft designs or design changes. Panelists ranked the expanded use of designees second and fifth, respectively, among actions that we summarized from their discussions that would have the most positive impact on improving Aircraft Certification’s and Flight Standards’ certification and approval processes. FAA is increasing organizational delegations under its organization designation authorization (ODA) program and expects the ODA program will allow more effective use of its resources over time. Stakeholders pointed to a lack of effective communication mechanisms as another problem with the certification and approval processes, especially deficiencies in the guidance FAA issues and a lack of additional communication mechanisms for sharing information on the interpretation of regulations. Stakeholders said that the lack of effective communication mechanisms can lead to costly delays when, for example, methods or guidance for complying with regulations is not clear. Stakeholders and experts had several issues with the FAA guidance that interprets the regulations and provides supplemental information to the industry. Stakeholders said there are sometimes discrepancies between the guidance and the regulations. 
For example, one stakeholder reported informing an FAA training course instructor that a particular piece of guidance contradicted the regulations. The instructor agreed that the contradiction existed but told the stakeholder that FAA teaches to the guidance, not the regulations. One employee group representing some FAA inspectors was concerned that not all guidance has been included in an online system that FAA has established to consolidate regulations, policy, and guidance. FAA acknowledged that it is working to further standardize and simplify the online guidance in the Flight Standards information management system. Stakeholders also identified a lack of opportunities for sharing information about the interpretation of regulations and guidance. An industry expert noted that FAA lacks a culture that fosters communication and discussion among peer groups. Moreover, an industry group with over 300 aviation company members suggested that FAA should support and promote more agencywide and industrywide information sharing in less formal, less structured ways to enhance communication. Finally, according to an official of an employee group representing some FAA inspectors, because their workloads tend to be heavy, inspectors are less able to communicate with the companies they oversee, and the reduced level of communication contributes to variation in the interpretation of regulations. FAA officials disagreed with these assertions and indicated that FAA staff participate in numerous committees and conferences, share methods of compliance in technical areas via forums with stakeholders, and communicate resolutions to problems in various formats, such as by placing legal decisions online. 
Other FAA actions could identify and potentially address some of the shortcomings in the agency’s certification and approval processes as follows: In 2004, FAA’s Office of Aviation Safety introduced QMS, which is intended to ensure that processes are being followed and improved and to provide a methodology to standardize processes. QMS is expected to help ensure that processes are followed by providing a means for staff to report nonconformance with FAA procedures or processes and was established as part of the office’s effort to achieve certification by the International Organization for Standardization (ISO). Any employee can submit a report and check the status of an issue that has been reported. From October 2008 to March 2009, approximately 900 reports were submitted, and 46 internal audits were completed. For example, in July 2009, an FAA staffer noted that a required paragraph on aging aircraft inspection and records review was missing from a certificate holder’s operations specifications. The issue was resolved and closed in August 2009 when the missing paragraph was issued to the certificate holder. Some FAA staff told us that QMS has helped improve the processes because it requires management action to respond to report submissions. To provide industry stakeholders with a mechanism for appealing certification and other decisions, the Office of Aviation Safety implemented the Consistency and Standardization Initiative (CSI) in 2004. Appeals must begin at the field office level and can eventually be taken to FAA headquarters. CSI requires that FAA staff document their safety decisions and that stakeholders support their positions with specific documentation. Within Aircraft Certification and Flight Standards, CSI cases at each appeal level are expected to be processed within 30 working days. The total length of the CSI process depends on how many levels of appeal the stakeholder chooses. 
Aircraft Certification has had over 20 CSI cases, and Flight Standards has had over 300. Most CSI cases in Aircraft Certification involved clarification of a policy or an approved means of complying with a regulation, while most of those submitted to Flight Standards involved policy or method clarification, as well as scheduling issues, such as delays in addressing a stakeholder’s certification, approval, or other issue. The large discrepancy between the number of cases filed for the two services, according to FAA officials, may be due to the fact that Aircraft Certification decisions are the result of highly interactive, deliberative processes, which are not typical in granting approvals in Flight Standards, where an inspector might find the need to hand down a decision without prolonged discussion or deliberation. Stakeholders told us that CSI lacks agencywide buy-in and can leave stakeholders who use the program potentially open to retribution from FAA staff. However, others noted that CSI is beneficial because it requires industry stakeholders to use the regulations as a basis for their complaints, which often leads to resolution. According to one of our panelists, inconsistencies occur when FAA does not start with the regulations as the basis for decisions. Although QMS and CSI are positive steps toward identifying ways to make the certification and approval processes more efficient, FAA does not know whether the programs are achieving their stated goals because it has not established performance measures for determining program accomplishments. One of the goals for QMS is to reduce inconsistencies and increase standardization. 
A QMS database documents the reports submitted and, through information in these reports, FAA says it has identified instances of nonconformance and initiated corrective action to prevent recurrence; revised orders to ensure they are consistent with actual practice; and improved its processes to collect feedback from stakeholders and take action on trends. However, FAA does not know whether its actions have reduced inconsistencies because its measures describe the agency’s output—for example, number of audits conducted—rather than any outcomes related to reductions in process inconsistencies. FAA officials described CSI goals as promoting early resolution of disagreements and consistency and fairness in applying FAA regulations and policies. They provided us with data on the number of CSI cases in both Aircraft Certification and Flight Standards, the types of complaints, and the percentage of resolutions that upheld FAA’s original decision, but as with the overall QMS program, we could find no evidence that FAA has instituted CSI performance measures that would allow it to determine progress toward program outcomes, such as consistency and fairness in applying regulations and policies. Outcome-based performance measures would also allow QMS and CSI program managers to determine where to better target program resources to improve performance. FAA has taken actions to address variation in decisions and inefficiency in its certification and approval processes, although the agency does not have outcome-based performance measures and a continuous evaluative process to determine if these actions are having their intended effects. Because the number of certification and approval applications is likely to increase for NextGen technologies, achieving more efficiency in these processes will help FAA better manage this increased workload, as well as its current workload.
In addition, while both Aircraft Certification and Flight Standards notify applicants whether resources are available to begin their projects, Flight Standards does not track how long applicants' projects remain wait-listed and is therefore unable to reallocate resources to better meet demand for certification services. To ensure that FAA actions contribute to more consistent decisions and more efficient certification and approval processes, we recommend that the Secretary of Transportation direct the Administrator of FAA to take the following two actions: Determine the effectiveness of actions to improve the certification and approval processes by developing a continuous evaluative process and use it to create measurable performance goals for the actions, track performance toward those goals, and determine appropriate process changes. To the extent that this evaluation of agency actions identifies effective practices, consider instituting those practices agencywide. Develop and implement a process in Flight Standards to track how long certification and approval submissions are wait-listed, the reasons for wait-listing them, and the factors that eventually allowed initiation of the certification process. Use the data generated from this process to assess the extent of wait-listing delays and to reallocate resources, as appropriate, to better meet demand. We provided a copy of a draft of this report to the Department of Transportation (DOT) for its review and comment. DOT provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 21 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Transportation, the Administrator of FAA, and other interested parties.
The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions or would like to discuss this work, please contact me at (202) 512-2834 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report provides information on the Federal Aviation Administration’s (FAA) processes for granting certifications and approvals to air operators, air agencies such as repair stations, and designers and manufacturers of aircraft and aircraft components. It describes the processes and discusses (1) the extent of variation in FAA’s interpretation of standards with regard to the agency’s certification and approval decisions and (2) key stakeholder and expert views on how well the certification and approval processes work. To address these objectives, we reviewed relevant studies, reports, and FAA documents and processes; convened a panel of aviation industry and other experts; and interviewed aviation industry members, an expert, and FAA officials. We did not address FAA processes for issuing certifications to individuals, such as pilots and mechanics. We contracted with the National Academy of Sciences (the Academy) to convene a panel on FAA’s certification and approval processes on December 16, 2009. The panel was selected with the goal of obtaining a balance of perspectives and included FAA senior managers; officials representing large and small air carriers, aircraft and aerospace product manufacturers, aviation services firms, repair stations, geospatial firms, and aviation consultants; and academics specializing in aviation and organization theory. (See table 1.) In the first session, FAA and industry officials presented their organizations’ perspectives on these processes and responded to questions. 
The presenters then departed and did not participate in the remaining sessions. In the next three discussion sessions, the panelists— led by a moderator—shared their views on various aspects of FAA’s certification and approval processes. After the first two discussion sessions, panelists voted in response to questions posed by GAO. (See app. II for the questions and responses.) The views expressed by the panelists were their own and do not necessarily represent the views of GAO or the Academy. We shared a copy of an earlier draft of this report with all of the presenters and panelists for their review and to ensure that we correctly captured information from their discussions and, on the basis of their comments, made technical corrections to the draft as necessary. We interviewed aviation industry certificate and approval holders, trade groups, an industry expert, officials of unions that represent FAA inspectors and engineers, and FAA staff in Aircraft Certification and Flight Standards (see table 2). The industry and trade groups were selected to provide a range of large and small companies and a variety of industry sectors (e.g., aircraft and parts manufacturers, air carriers, and repair stations). The interviews were conducted to gain an understanding of the extent of variation in FAA’s certification and approval decisions and interviewees’ views on FAA’s certification and approval processes. The FAA interviews provided an understanding of the key aspects of FAA’s certification and approval processes, information on data collection and analysis related to the processes, and current and planned process improvement efforts. In addition to using information from the individual interviews, as relevant throughout the report, we analyzed the content of the interviews to identify and quantify the key issues raised by the interviewees. This appendix summarizes the responses the panelists provided to questions we posed at the close of their discussion sessions. 
The response options were based on the contents of their discussions. To develop the rankings in questions 1, 2, and 12, we asked the panelists, in a series of three questions, to vote for the options they believed were the first, second, and third greatest, most significant, or most positive. To rank order the items listed for these questions, we assigned three points to the option identified as greatest, most significant, or most positive; two points to the second; and one point to the third. We then summed the weighted values for each option and ranked the options from the highest number of points to the lowest.
1. What is the greatest strength of the certification and approval processes?
2. What is the most significant problem with the certification and approval processes?
3. What leading factor has contributed to problems with the certification and approval processes? The response options were: FAA's prioritization system for managing certifications and approvals; FAA's rulemaking process and development of guidance (e.g., amount of time required to develop or change regulations); the culture of FAA (e.g., stove-piping, resistance to change); and the organizational structure of FAA (e.g., decentralization, varying procedures among local offices).
4. How often do serious problems occur each year with the certification and approval processes?
5. Overall, how positive or negative do you think industry's experience has been in obtaining certifications and approvals from Aircraft Certification and Flight Standards?
6. How would you assess the overall impact of the certification and approval processes on the safety of the national airspace system?
7. Overall, how would you characterize efforts to improve the certification and approval processes?
8. Overall, how would you characterize efforts to prioritize certifications and approvals?
9.
Overall, how would you characterize efforts to improve dispute resolution through the Consistency and Standardization Initiative (CSI)?
10. Regarding efforts to improve dispute resolution through CSI, what is the key factor hindering the progress of efforts?
11. What should be done to mitigate the effects of this factor? The response options were: FAA should establish support for the efforts; FAA should improve data collection and analysis related to the efforts; do not believe the efforts are ineffective; and do not know/no basis to judge. (This response option was not available to the panelists.)
12. What action will have the most positive impact on improving the certification and approval processes? Expand use of designees/organization designation authorizations (ODA).
Gerald L. Dillingham, Ph.D., (202) 512-2834 or [email protected]. In addition to the contact named above, Teresa Spisak (Assistant Director), Sharon Dyer, Bess Eisenstadt, Amy Frazier, Brandon Haller, Dave Hooper, Michael Silver, and Pamela Vines made key contributions to this report.
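The 3-2-1 weighted ranking described above for appendix II questions 1, 2, and 12 can be sketched as follows. Only the scoring method comes from the report; the vote data below are hypothetical.

```python
# Sketch of the weighted ranking used for appendix II questions 1, 2, and 12:
# 3 points for a first-place vote, 2 for second, 1 for third; options are
# then ranked by total points. The votes below are hypothetical.
from collections import Counter

WEIGHTS = {1: 3, 2: 2, 3: 1}  # vote position -> points

def rank_options(votes):
    """votes: (option, position) pairs, where position is 1, 2, or 3."""
    scores = Counter()
    for option, position in votes:
        scores[option] += WEIGHTS[position]
    # Highest total points first
    return sorted(scores.items(), key=lambda item: -item[1])

votes = [
    ("inconsistent interpretation", 1),
    ("inconsistent interpretation", 2),
    ("FAA culture", 1),
    ("rulemaking process", 3),
]
print(rank_options(votes))
# [('inconsistent interpretation', 5), ('FAA culture', 3), ('rulemaking process', 1)]
```

With this weighting, an option that draws one first-place and one second-place vote (5 points) outranks one that draws a single first-place vote (3 points), which matches how the panel's rankings were summed.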
Among its responsibilities for aviation safety, the Federal Aviation Administration (FAA) issues thousands of certificates and approvals annually. These certificates and approvals, which FAA bases on its interpretation of federal standards, indicate that such things as new aircraft, the design and production of aircraft parts and equipment, and new air operators are safe for use in the national airspace system. Past studies and industry spokespersons assert that FAA's interpretations produce variation in its decisions and inefficiencies that adversely affect the industry. GAO was asked to examine the (1) extent of variation in FAA's interpretation of standards for certification and approval decisions and (2) views of key stakeholders and experts on how well these processes work. To perform the study, GAO reviewed industry studies and reports and FAA documents and processes; convened a panel of aviation experts; and interviewed officials from various industry sectors, senior FAA officials, and unions representing FAA staff. Studies, stakeholders, and experts indicated that variation in FAA's interpretation of standards for certification and approval decisions is a long-standing issue, but GAO found no evidence that quantified the extent of the problem in the industry as a whole. Ten of the 13 industry group and company officials GAO interviewed said that they or members of their organization had experienced variation in FAA certification and approval decisions on similar submissions. 
In addition, experts on GAO's panel, who discussed and then ranked problems with FAA's certification and approval processes, ranked inconsistent interpretation of regulations, which can lead to variation in decisions, as the first and second most significant problem, respectively, with these processes for FAA's Flight Standards Service (which issues certificates and approvals for individuals and entities to operate in the national airspace system) and Aircraft Certification Service (which issues approvals to the designers and manufacturers of aircraft and aircraft parts and equipment). According to industry stakeholders, variation in FAA's interpretation of standards for certification and approval decisions is a result of factors related to performance-based regulations, which allow for multiple avenues of compliance, and the use of professional judgment by FAA staff and can result in delays and higher costs. Industry stakeholders and experts generally agreed that FAA's certification and approval processes contribute to aviation safety and work well most of the time, but negative experiences have led to costly delays for the industry. Industry stakeholders have also raised concerns about the effects of process inefficiencies on the implementation of the Next Generation Air Transportation System (NextGen)--the transformation of the U.S. national airspace system from a ground-based system of air traffic control to a satellite-based system of air traffic management. They said that the processes take too long and impose costs that discourage aircraft operators from investing in NextGen equipment. FAA has taken actions to improve the certification and approval processes, including hiring additional inspectors and engineers and increasing the use of designees and delegated organizations--private persons and entities authorized to carry out many certification activities. 
Additionally, FAA is working to ensure that its processes are being followed and improved through a quality management system, which provides a mechanism for stakeholders to appeal FAA decisions. However, FAA does not know whether its actions under the quality management system are achieving the intended goals of reducing inconsistencies and increasing consistency and fairness in the agency's application of regulations and policies because FAA does not have outcome-based performance measures and a continuous evaluative process that would allow it to determine progress toward these goals. Without ongoing information on results, FAA managers do not know if their actions are having the intended effects. GAO recommends that FAA develop a continuous evaluative process with measurable performance goals to determine the effectiveness of the agency's actions to improve its certification and approval processes. DOT did not comment on the recommendations but provided technical comments, which were included as appropriate.
The overall purpose of FFMIA is to ensure that agency financial management systems comply with federal financial management systems requirements, applicable accounting standards, and the SGL in order to provide uniform, reliable, and thus more useful financial information. With such information, government leaders will be better positioned to help invest scarce resources, reduce costs, oversee programs, and hold agency managers accountable for the way they run government programs. The 1990 CFO Act laid the legislative foundation for the federal government to provide taxpayers, the nation’s leaders, and agency program managers with reliable financial information through audited financial statements. Under the CFO Act, as expanded by the Government Management Reform Act of 1994, 24 major agencies, which account for 99 percent of federal outlays, are required to annually prepare organizationwide audited financial statements beginning with those for fiscal year 1996. Table 1 lists the 24 CFO agencies and their reported fiscal year 1996 outlays. Financial audits address the reliability of information contained in financial statements, provide information on the adequacy of systems and controls used to ensure accurate financial reports and safeguard assets, and report on agencies’ compliance with laws and regulations. Building on the CFO Act audits, FFMIA requires, beginning with the fiscal year ended September 30, 1997, that each of the 24 CFO agencies’ financial statement auditors report on whether the agency’s financial management systems substantially comply with federal financial management systems requirements, applicable accounting standards, and the SGL.
The financial management systems policies and standards prescribed for executive agencies to follow in developing, operating, evaluating, and reporting on financial management systems are defined in OMB Circular A-127, “Financial Management Systems,” which was revised in July 1993. Circular A-127 references the series of publications entitled Federal Financial Management Systems Requirements, issued by the Joint Financial Management Improvement Program (JFMIP), as the primary source of governmentwide requirements for financial management systems. JFMIP initially issued Core Financial System Requirements, the first document in its Federal Financial Management Systems Requirements series, in January 1988. An updated version reflecting changes in legislation and policies was released in September 1995. This document establishes the standard requirements for a core financial system to support the fundamental financial functions of an agency. Framework for Federal Financial Management Systems was published in January 1995 and describes the basic elements of a model for an integrated financial management system in the federal government, how these elements should relate to each other, and specific considerations in developing and implementing such an integrated system. In this regard, FFMIA defines financial management systems as “financial systems” and the financial portions of “mixed systems” necessary to support financial management, including automated and manual processes, procedures, controls, data, hardware, software, and support personnel dedicated to the operation and maintenance of the system. Other documents in the JFMIP series provide requirements for specific types of systems covering personnel/payroll, travel, seized/forfeited asset, direct loan, guaranteed loan, and inventory systems. Table 2 lists the publications in the Federal Financial Management System Requirements Series and their issue dates. 
In addition to these eight documents, JFMIP is developing additional systems requirements for managerial cost accounting. This document was issued as an exposure draft in April 1997. Federal accounting standards, which agency CFOs use in preparing financial statements and in developing financial management systems, are recommended by FASAB. In October 1990, the Secretary of the Treasury, the Director of OMB, and the Comptroller General established FASAB to recommend a set of generally accepted accounting standards for the federal government. FASAB’s mission is to recommend reporting concepts and accounting standards that provide federal agencies’ financial reports with understandable, relevant, and reliable information about the financial position, activities, and results of operations of the U.S. government and its components. FASAB recommends accounting standards after considering the financial and budgetary information needs of the Congress, executive agencies, other users of federal financial information, and comments from the public. The Secretary of the Treasury, the Director of OMB, and the Comptroller General then decide whether to adopt the recommended standards. If they do, the standards are published by OMB and GAO and become effective. As discussed further in the section “Status of Federal Accounting Standards,” this process has resulted in issuance of two statements of accounting concepts and eight statements of accounting standards. GAO published these concepts and standards in FASAB Volume 1, Original Statements, Statements of Federal Financial Accounting Concepts and Standards, in March 1997. In 1984, OMB tasked an interagency group to develop a standard general ledger chart of accounts for governmentwide use. The resulting SGL was established and mandated for use by the Department of the Treasury in 1986. Further, OMB Circular A-127, Financial Management Systems, requires agencies to record financial events using the SGL at the transaction level. 
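Recording a financial event "using the SGL at the transaction level" might be sketched as below. The account titles, numbers, and the sample event are illustrative only; the actual SGL chart of accounts and pro forma transactions are published in the Treasury Financial Manual.

```python
# Hypothetical sketch of recording a financial event at the transaction
# level against a standard chart of accounts. Accounts and the event are
# illustrative, not taken from the actual SGL.

CHART = {
    "1010": "Fund Balance With Treasury",
    "6100": "Operating Expenses",
}

ledger = []  # each posting: (event, account, debit, credit)

def record(event, postings):
    """Post one event's debit/credit entries; they must balance."""
    debits = sum(d for _, d, _ in postings)
    credits = sum(c for _, _, c in postings)
    assert debits == credits, "postings must balance"
    for account, debit, credit in postings:
        assert account in CHART, f"account {account} not in the chart"
        ledger.append((event, account, debit, credit))

# A pro forma entry for paying a $500 operating expense:
record("pay operating expense", [("6100", 500, 0), ("1010", 0, 500)])
```

Because every agency posts against the same chart of accounts, entries recorded this way can be rolled up consistently for agency-level and governmentwide reporting, which is the point of mandating the SGL at the transaction level.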
The SGL provides a uniform chart of accounts and pro forma transactions used to standardize federal agencies’ financial information accumulation and processing, enhance financial control, and support budget and external reporting, including financial statement preparation. Use of the SGL improves data stewardship throughout the government, enabling consistent analysis and reporting at all levels within the agencies and at the governmentwide level. It is published in the Treasury Financial Manual. The Department of the Treasury’s Financial Management Service is responsible for maintaining the SGL. As part of a CFO agency’s annual audit, the auditor is to report whether the agency’s financial management systems substantially comply with federal financial management systems requirements, applicable accounting standards, and the SGL. If the auditor determines that an agency’s financial management systems do not substantially comply with these requirements, the act requires that the audit report (1) identify the entity or organization responsible for management and oversight of the noncompliant financial management systems, (2) disclose all facts pertaining to the failure to comply, including the nature and extent of the noncompliance, the primary reason or cause of the noncompliance, the entity or organization responsible for the noncompliance, and any relevant comments from responsible officers or employees, and (3) include recommended corrective actions and proposed time frames for implementing such actions. The act assigns to the head of an agency responsibility for determining, based on a review of the auditor’s report and any other relevant information, whether the agency’s financial management systems comply with the act’s requirements. This determination is to be made no later than 120 days after the receipt of the auditor’s report, or the last day of the fiscal year following the year covered by the audit, whichever comes first. 
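The determination deadline just described can be sketched as a small date computation. The dates used below are hypothetical; only the 120-day and fiscal-year-end rules come from the act.

```python
# Illustrative sketch of the FFMIA determination deadline described above:
# no later than 120 days after receipt of the auditor's report, or the last
# day of the fiscal year following the year audited, whichever comes first.
# The dates below are hypothetical.
from datetime import date, timedelta

def determination_deadline(report_received: date, audited_fy_end: date) -> date:
    # Last day of the fiscal year following the audited year
    next_fy_end = audited_fy_end.replace(year=audited_fy_end.year + 1)
    return min(report_received + timedelta(days=120), next_fy_end)

# Fiscal year 1997 ended September 30, 1997; suppose the auditor's report
# was received March 1, 1998:
print(determination_deadline(date(1998, 3, 1), date(1997, 9, 30)))
# 1998-06-29 -- the 120-day limit comes first
```

If the report instead arrived late in the following fiscal year, the fiscal-year-end date would be the binding limit.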
If the head of an agency determines that the agency does not comply with the act’s requirements, the agency head, in consultation with the Director of OMB, shall establish a remediation plan that will identify, develop, and implement solutions for noncompliant systems. The remediation plan is to include corrective actions, time frames, and resources necessary to achieve substantial compliance with the act’s requirements within 3 years of the date the noncompliance determination is made. If, in consultation with the Director of OMB, the agency head determines that the agency’s financial management systems are so deficient that substantial compliance cannot be reached within 3 years, the remediation plan must specify the most feasible date by which the agency will achieve compliance and designate an official responsible for effecting the necessary corrective actions. Under the FFMIA process, the auditor’s and the agency head’s determinations of compliance may differ. In such situations, the Director of OMB will review the differing determinations and report on the findings to the appropriate congressional committees. The act also contains additional reporting requirements. OMB is required to report each year on the act’s implementation. In addition, each inspector general (IG) of the 24 CFO agencies is required to report to the Congress, as part of its semiannual report, instances in which an agency has not met the intermediate target dates established in its remediation plan and the reasons why. Efforts are underway to implement FFMIA and improve the quality of financial management systems. OMB recently issued implementation guidance in a memorandum dated September 9, 1997, for agencies and auditors to use in assessing compliance with FFMIA. This is interim guidance to be used in connection with audits of federal financial statements for fiscal year 1997. 
OMB’s guidance emphasizes implementation of federal financial management improvements by fully describing in separate sections each of the requirements under the act, which are (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the SGL at the transaction level. Each section begins by identifying and discussing the executive branch policy documents that previously established the requirement. Information is also provided on the meaning of substantial compliance and the types of indicators that should be used in assessing whether an agency is in substantial compliance. For example, one indicator of substantial compliance with financial management systems requirements is that the agency's financial management systems meet the requirements of OMB Circular A-127. Likewise, one indicator of substantial compliance with federal accounting standards is that the agency has no material weaknesses in internal controls that affect its ability to prepare auditable financial statements and related disclosures in accordance with federal accounting standards. Information is also provided for the auditor to consider in evaluating and reporting audit results, as well as other reporting requirements. The guidance states that the auditor shall use professional judgment in determining substantial compliance with FFMIA. Further, substantial noncompliance with any one or more of the three requirements of FFMIA results in substantial noncompliance with FFMIA as a whole. For example, an agency could have an unqualified opinion on its financial statements, indicating that the financial statements are prepared in accordance with applicable federal accounting standards, yet have financial management systems that are not in substantial compliance with financial management systems requirements. This situation would preclude the agency from being in substantial compliance with FFMIA.
Finally, the guidance also directs auditors to follow the reporting guidance, with respect to compliance, contained in OMB Bulletin 93-06. We have been discussing with OMB some refinements to this bulletin, with particular focus on four areas: (1) clarifying, based on information provided in OMB’s implementation guidance, that the auditor should perform tests of the reporting entity’s compliance with the requirements of FFMIA; (2) including in the reporting entity’s management representation letter a representation about whether the reporting entity’s financial management systems are in substantial compliance with FFMIA requirements; (3) clarifying that the auditor’s report on the reporting entity’s compliance with applicable laws and regulations should state that the auditor performed sufficient compliance tests of FFMIA requirements to report whether the entity’s financial management systems comply substantially with FFMIA requirements; and (4) separately stating in the auditor’s report whether such tests disclosed any instances in which the reporting entity’s financial management systems did not comply substantially with FFMIA requirements. In addition, we have discussed with OMB the act's requirement that, if the reporting entity does not comply substantially with FFMIA requirements, the auditor's report (1) identify the entity or organization responsible for the financial management systems that have been found not to comply with FFMIA requirements; (2) disclose all facts pertaining to the noncompliance, including its nature and extent (such as the areas in which there is substantial but not full compliance), its primary reason or cause, the entity or organization responsible, and any relevant comments from the reporting entity's management or the employees responsible; and (3) state recommended remedial actions and the time frames to implement such actions.
We are also exploring other tools to assist the CFO and IG communities in implementing OMB’s interim guidelines. OMB plans to review its interim guidelines and replace them during 1998 with revisions to appropriate OMB policy documents. Agencies are also taking steps to improve the quality of their financial management systems. According to the CFO Council’s and OMB’s Status Report on Financial Management Systems, dated June 1997, agencies report plans to replace or upgrade operational applications within the next 5 years. For applications that are now under development or in the process of a phased implementation, reported plans are also in place to fully implement the SGL at the transaction level and comply with federal financial management system requirements. This report indicates that many agencies also report that they are considering greater use of commercial off-the-shelf software, cross-servicing, and outsourcing as they seek more effective ways to improve their financial management systems. Successful implementation of these efforts will be instrumental in achieving future compliance with FFMIA requirements. Agencies face significant challenges in achieving substantial compliance with the act’s requirements in the near future. The majority of agencies did not receive an unqualified opinion on their fiscal year 1996 financial statements. In addition, fiscal year 1996 financial management systems inventory data, self-reported by agencies and summarized in the CFO Council’s and OMB’s June 1997 Status Report on Federal Financial Management Systems, reveal that the majority of agencies’ financial systems did not comply with federal financial management systems requirements or the SGL at the transaction level prior to FFMIA’s effective date. An inability to prepare timely and accurate financial statements suggests that agencies find it difficult to effectively implement applicable federal accounting standards.
A financial statement audit provides a meaningful measure of compliance with applicable federal accounting standards. An unqualified opinion is one of several indications that the agency’s financial management systems support the preparation of accurate and reliable financial statements with minimal manual intervention. However, for fiscal year 1996, only 6 of the 24 CFO agencies received unqualified opinions on their organizationwide financial statements. Further, according to OMB’s Federal Financial Management Status Report & Five-Year Plan, only 13 CFO agencies anticipate being able to obtain unqualified opinions on their fiscal year 1997 financial statements. Our past audit experience has indicated that numerous agencies’ financial management systems do not maintain and generate original data to readily prepare financial statements. Consequently, many agencies have relied on ad hoc efforts and manual adjustments to prepare financial statements. Such procedures can be time-consuming, produce inaccurate results, and delay the issuance of audited statements. In addition, agencies’ lack of reliable and consistent financial information on a regular, ongoing basis undermines federal managers’ ability to effectively evaluate the cost and performance of government programs and activities. Also, the current status of federal financial management systems portends potential problems in agencies complying fully with federal financial management systems requirements and the SGL as mandated by the act. When FFMIA was enacted, federal agencies lacked many of the basic systems needed to provide uniform and reliable financial information. Agencies are still struggling to comply with governmentwide standards and requirements, although they have recently exhibited some progress in implementing and maintaining financial management systems that comply with federal financial system requirements and the SGL. 
For instance, according to the CFO Council’s and OMB’s FY 1995 Status Report on Federal Financial Management Systems, issued in June 1996, only 29 percent of agencies’ financial management systems were reported to be in compliance with JFMIP federal financial management system requirements. In addition, agencies had fully implemented the SGL in only 40 percent of the operational applications to which they reported it applied. The fiscal year 1996 status report, issued in June 1997, showed some improvement, with 36 percent of agencies’ financial management systems reported as complying with federal financial management system requirements and full SGL implementation reported in 45 percent of the applications to which agencies reported it applied. However, these statistics indicate that the majority of agencies’ financial management systems still lacked compliance with financial management systems requirements and full SGL implementation in fiscal year 1996. Using a due process and consensus building approach, FASAB has successfully provided the federal government with an initial set of accounting standards. To date, FASAB has recommended, and OMB and GAO have issued, two statements of accounting concepts and eight statements of accounting standards with various effective dates ranging from fiscal year 1994 through fiscal year 1998. These concepts and standards, which are listed in table 3, underpin OMB’s guidance to agencies on the form and content of their financial statements. In addition to the two concepts and eight standards, FASAB is working on standards relating to management’s discussion and analysis of federal financial statements, social insurance, the cost of capital, natural resources, and computer software costs. The objectives of federal financial reporting are to provide users with information about budgetary integrity, operating performance, stewardship, and systems and controls. 
With these as the objectives of federal financial reporting, the federal government can better develop new reporting models that bring together program performance information with audited financial information and provide congressional and other decisionmakers with a more complete picture of the results, operational performance, and costs of agencies’ operations. FFMIA is intended to improve federal accounting practices and increase the government’s ability to provide credible and reliable financial information. Such information is important in providing a foundation for formulating budgets, managing government program operations, and making difficult policy choices. Efforts are underway both to assist agencies in implementing the act’s requirements and to assist auditors in measuring compliance with those requirements. However, long-standing problems with agencies’ financial management systems suggest that agencies will have difficulty, at least in the short term, achieving compliance with the act’s requirements. Successful implementation of the act and resulting financial management improvements depend on the united effort of all organizations involved, including agency CFOs, IGs, OMB, the Department of the Treasury, and GAO. In performing our work, we evaluated OMB’s implementation guidance for FFMIA. In addition, we reviewed the CFO Council’s and OMB’s June 1996 and June 1997 Status Reports on Federal Financial Management Systems and OMB’s June 1997 Federal Financial Management Status Report & Five-Year Plan. We did not verify or test the reliability of the data in these reports. Further, we reviewed fiscal year 1996 audit results for the 24 CFO agencies and applicable federal accounting standards. We conducted our work from July through September 1997 at GAO headquarters in Washington, D.C., in accordance with generally accepted government auditing standards.
We provided a draft of this report to OMB and Treasury, and they generally concurred with its contents. We have incorporated their comments as appropriate.

We are sending copies of this letter to the Chairmen and Ranking Minority Members of the Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, Senate Committee on Governmental Affairs; the Subcommittee on Government Management, Information, and Technology, House Committee on Government Reform and Oversight; other interested congressional committees; the Director, Office of Management and Budget; the Secretary of the Treasury; heads of the 24 CFO agencies; agency CFOs and IGs; and other interested parties. We will also make copies available to others upon request.

This letter was prepared under the direction of Gloria L. Jarmon, Director, Civil Audits/Health and Human Services, who may be reached at (202) 512-4476 if you or your staffs have any questions. Major contributors to this letter are listed in appendix I.

Deborah A. Taylor, Assistant Director
Maria Cruz, Senior Audit Manager
Anastasia Kaluzienski, Audit Manager

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone.
A recorded menu will provide information on how to obtain these lists.
Pursuant to a legislative requirement, GAO provided information on: (1) the requirements of the Federal Financial Management Improvement Act (FFMIA) of 1996; (2) efforts under way to implement the act; (3) challenges that agencies face in achieving full compliance with those requirements; and (4) the status of federal accounting standards. GAO noted that: (1) it is too early to tell the extent to which the 24 agencies named in the Chief Financial Officers (CFO) Act will be in compliance with FFMIA requirements for fiscal year 1997 because auditor reports discussing the results of the fiscal year 1997 financial statement audits will generally not be available until March 1, 1998, which is the statutory reporting deadline; (2) the Office of Management and Budget (OMB) and the CFO agencies have initiated efforts to implement the act's requirements and improve financial management systems; (3) although auditors performing financial audits under the CFO Act are not required to report on FFMIA compliance until March 1, 1998, prior audit results and agency self-reporting all point to significant challenges that agencies must meet in fully implementing systems requirements, accounting standards, and the U.S. Government Standard General Ledger; (4) regarding the adequacy of accounting standards, the Federal Accounting Standards Advisory Board (FASAB) has successfully developed a good initial set of accounting standards; (5) to date, FASAB has recommended, and OMB and GAO have issued, two statements of accounting concepts and eight statements of accounting standards tailored to the federal government's unique characteristics and special needs; and (6) OMB has integrated these concepts and standards into its guidance to agencies on the form and content of their financial statements.
The military services and VA have medical requirements that servicemembers must meet when leaving the military and applying for VA disability compensation. These requirements include a medical assessment; a service-specific separation exam, which is given to some servicemembers; and a VA C&P exam. The single separation exam program is designed to provide a single physical exam that can be used to meet the physical exam requirements of the military services and VA. In response to a 1994 memorandum from the Assistant Secretary of Defense for Health Affairs, all of the military services require a medical assessment of all servicemembers leaving the military, including those who retire or complete their tour of active duty. This assessment, which is used to evaluate and document the health of these servicemembers, consists of a standard two-page questionnaire asking servicemembers about their overall health, medical and dental histories, current medications, and other health-related topics. (See app. II for DOD’s medical assessment form—DD Form 2697.) Military medical personnel, who could include a physician, a physician’s assistant, or a nurse practitioner, are required to review the questionnaire with the servicemember. If the questionnaire indicates the presence of an illness, injury, or other medical problem, the reviewer is required to ensure that the servicemember’s medical or dental records document the problem. In addition, depending on the servicemember’s responses or based on the reviewer’s judgment that additional information is needed, the health assessment could result in a physical exam—one focused on a particular health issue or issues in order to supplement information disclosed on the questionnaire. Furthermore, the medical assessment asks if the servicemember intends to file a claim for disability with VA.
Servicemembers who answer “yes” on the assessment form will be given a clinically appropriate assessment or exam if the servicemember’s last physical exam received during active duty is more than 12 months old or if new symptoms have appeared since the last active duty exam. In addition, the Army, Navy, Air Force, and Marines require some of their servicemembers to undergo separation exams when they leave the military. Separation exams consist of a clinical evaluation by a medical provider and could include various diagnostic tests, such as a urinalysis, a hearing test, and a vision test. Separation exams, as well as other physical exams the military services conduct, are documented on a three-page standard DOD form. (See app. III for DOD’s report of medical examination—DD Form 2808.) According to DOD, the average cost for a physical exam given by the military services is about $125, exclusive of any diagnostic tests that may also be conducted. The requirements determining which servicemembers must receive separation exams vary by military service and other factors. The Army requires that its retirees receive separation exams, although the Army does not usually require this for servicemembers who are completing their tours of active duty. The other military services do not require separation exams for most servicemembers, except for those whose last physical exam or assessment they received during active duty is out of date. (See table 1 for each military service’s medical evaluation requirements.) Further, all of the military services also require separation exams for certain occupational specialties. For example, the military services require separation exams for servicemembers who have worked with hazardous materials. Finally, any servicemember can request and receive a separation exam. Requirements for separation exams may be affected by planned changes to physical exam requirements for active duty servicemembers. 
The Army and Navy plan to change their physical exam requirements for servicemembers during active duty—replacing routine physical exams with periodic health assessments, thereby moving closer to the Air Force’s requirements for active duty servicemembers. In September 2003, the Armed Forces Epidemiology Board (AFEB) issued a report that concluded that annual health assessments, as currently administered by the Air Force to active duty servicemembers, should replace routine physical exams. According to their Surgeon General representatives, the Army and the Navy intend to change their regulations relating to periodic physical exams and to adopt the recommendations offered by the AFEB by 2005. This shift in requirements is in line with recommendations of the U.S. Preventive Services Task Force and many other medical organizations, which no longer advocate routine physical exams for adults—recommending instead a more selective approach to detecting and preventing health problems. Some servicemembers who leave the military file for VA disability benefits, which could include priority access to VA health care as well as monthly payments for disabilities, diseases, or injuries incurred or aggravated during active military service. VA requires evidence of military service to confirm eligibility for these benefits, and the department uses the C&P exam to establish a disability rating, which helps determine the amount of compensation a veteran receives. Veterans retain the option of initiating claims at any time after leaving the military, even if they did not state their intention to do so on the medical assessment form completed when they left military service. A VA C&P exam is a physical exam used to determine a veteran’s degree of disability in support of claims for service-connected disability compensation. 
The exam obtains information on the veteran’s medical history and includes diagnostic and clinical tests, the scope of which depend on what disabilities the veteran claims. For example, if a veteran claims a disability for a knee injury, VA would require a comprehensive orthopedic exam to determine the percent of movement that has been lost due to the knee injury. Veterans may claim multiple disabilities—all of which must be evaluated for disability rating purposes. In general, VA’s C&P exam is more comprehensive and detailed than the military services’ separation exams, as military service exams are intended to document continued fitness for duty, whereas the purpose of the VA C&P exam is to document disability or loss of function regardless of its impact on fitness for duty. VA physicians who conduct the C&P exam must evaluate the extent of a veteran’s physical limitations and determine their impact on the veteran’s future employment for compensation purposes. VA physicians usually conduct C&P exams at VA Medical Centers, although since 1996 VA has had authority to use civilian physicians to provide C&P exams at 10 VA regional offices. In addition, VA physicians may provide C&P exams at some military medical facilities. According to VA officials, the average cost of VA’s C&P exam, exclusive of any diagnostic tests, is about $400 when conducted by either VA or by VA’s contractor. In 1994, the Army and VA jointly initiated a pilot program for single separation exams at three Army installations. Each of the installations used a different approach when implementing the exam. At Fort Hood, Texas, a VA physician performed single separation exams at the Army’s military treatment facility. At Fort Knox, Kentucky, a sequential approach was used in which Army personnel performed some preliminary work, such as lab tests and optical exams, for servicemembers at the installation. 
Servicemembers were then transported to a local VA medical center, where VA physicians completed the single separation exams. At Fort Lewis, Washington, an Army physician performed the single separation exams at the military installation. The 1997 report on the pilot programs concluded that all of the approaches for single separation exams were successful and that, overall, they eliminated redundant physical exams and medical procedures, decreased resource expenditures, increased the timeliness of VA’s disability rating decisions, and improved servicemembers’ satisfaction. The report also recommended that single separation exam programs be expanded to include all military services. Based on the findings of the single separation exam pilot, VA’s Under Secretary for Health and DOD’s Acting Assistant Secretary of Defense for Health Affairs signed an MOU in 1998 directing local VA offices and military medical facilities to negotiate and implement individual MOUs for single separation exam programs. According to the MOU, VA and the military services should optimize available resources, including the use of both military and VA facilities and staff as appropriate. For example, because a servicemember applying for VA benefits would receive a single physical exam that meets VA C&P exam requirements—which are usually more extensive than the military services’ separation exam requirements— the MOU envisioned that VA medical personnel would perform most of the single separation exams. It also stated that the military services would provide VA with servicemembers’ medical records and lab and test results from active duty in order to avoid duplicative testing. Finally, the MOU acknowledged that in implementing single separation exam programs, negotiations between local VA and military officials would be necessary, because military installations and local VA offices and hospitals face resource limitations and competing mission priorities. 
These local-level negotiations would be documented in individual MOUs. To implement the 1998 MOU, both VA and DOD issued department-specific guidance. In January 1998, both VA’s Under Secretary for Health and Under Secretary for Benefits distributed guidelines to VA regional offices and medical centers about completing the single separation exams in cooperation with the military services. In September 1998, DOD’s Assistant Secretary of Defense for Health Affairs issued a policy to the Assistant Secretaries for the Army, Navy, and Air Force stating that servicemembers who leave the military and intend to file a claim for VA disability benefits should undergo a single physical exam for the military services and VA. Since 1998, VA and the military services have collaborated to establish single separation exam programs using various approaches to deliver the exams, including those used in the original pilot program. However, while we were able to verify that the exams were being delivered at some installations, DOD, its military services, and VA either could not provide information or provided us with inaccurate information on program sites. Although VA reported that 28 of 139 BDD sites had programs in place as of May 2004, we found that 4 of the 8 sites we evaluated from VA’s list did not actually have a program in place. Nonetheless, VA and DOD leadership continue to encourage the establishment of single separation exam programs and have drafted a new MOA that contains a specific implementation goal to have programs in place at all of the BDD sites by December 31, 2004—an ambitious goal given the seemingly low rate of program implementation since 1998 and the lack of accurate information on existing programs. VA reported that as of May 2004, 28 of the 139 BDD sites had operating single separation exam programs.
At these sites, VA officials told us, local VA and military officials have implemented the program using one of five approaches that met both the military services’ and VA’s requirements without duplication of effort. Three of the five approaches were developed during the 1994 pilot program—(1) military physicians providing the exams at military treatment facilities, (2) VA physicians providing the exams at military treatment facilities, and (3) a sequential approach wherein VA and the military service shared the responsibility of conducting consecutive components of a physical exam. In addition, VA officials reported a fourth approach that was being used, in which VA physicians delivered the single separation exam at VA hospitals, and a fifth approach, in which VA used a civilian contractor to deliver the exams. We evaluated the operation of the single separation exam programs at four of the military installations VA reported as having collectively conducted over 1,400 exams in 2003. These installations were conducting single separation exams using two of the approaches—either with VA’s contractor conducting the physical exam or as a sequential approach. (See table 2.) Overall, VA and military officials told us that both approaches worked in places where military officials and VA officials collaborated well together. At two Army installations—Fort Stewart and Fort Eustis—we found that VA used its civilian contractor to conduct C&P exams, which the Army then used to meet its separation exam requirements for servicemembers leaving the military. At the Fort Drum Army installation and Naval Station Mayport, local VA and military service officials collaborated to implement a sequential approach. At Fort Drum, the Army starts the single separation exam process by conducting hearing, vision, and other diagnostic testing. A VA physician subsequently completes the actual physical exam at the installation, which is then incorporated in the servicemember’s medical record. 
At Naval Station Mayport, a Navy corpsman starts the sequential process by reviewing the servicemember’s medical history, initiating appropriate paperwork, and scheduling the servicemember for an appointment with a VA physician. The VA physician then conducts a VA C&P exam at the installation and completes the paperwork to meet the Navy’s separation requirements. DOD and its military services do not adequately monitor where single separation exam programs have been established. DOD does not maintain servicewide information on the locations where single separation exam programs are operating. While the Army and the Air Force each provided a list of installations where officials claimed single separation exam programs were established, both lists included installations that we verified as not having a program in place. A Navy official told us that although the Navy attempted to identify the locations of single separation exam programs, its information was not accurate. In addition, while VA maintains a list of single separation exam programs, this list was not up to date. At our request, VA attempted to update their list and reported to us that in May 2004, 28 military installations with BDD programs also had single separation exam programs. At these sites, VA reported that over 11,000 single separation exams had been conducted in 2003. However, when we evaluated programs at 8 of these installations, we found that 4 of the installations did not actually have programs in place. (See table 3.) At these four military installations, the 2,075 exams reported as single separation exams were actually VA C&P exams that were used only by VA and not by the military services. We obtained the following information about these installations. At Fort Lee, local Army and VA officials told us that a single separation exam program was in place prior to our site visit. 
However, during a joint discussion with us, they realized that the local MOU, which was signed in April 2001, was not being followed and that the single separation exam program was no longer in operation. Nonetheless, local VA officials responsible for reporting on the program were unaware that the program was no longer operational. At Little Rock Air Force Base, we found that a single separation exam program was not in place even though there was an MOU, which local VA officials told us was signed in May 1998. During our initial discussions, local VA officials told us that the program was in operation. However, as they responded to VA headquarters’ inquiry to update their list of installations with single separation exam programs for us, local officials realized that the program was not in operation and had never existed despite the signed MOU. Nonetheless, this site was still included on the updated list of installations that VA provided to us. At Pope Air Force Base, local military officials told us that no single separation exam program was in place. Furthermore, a local VA official said that no MOU had been signed for the program at this installation. Despite this, local VA officials mistakenly believed that installation officials were using the VA C&P exams to meet their separation requirements and that, as a result, single separation exams were being provided. Finally, at Marine Camp Lejeune, local military and VA officials told us that no single separation exams were being conducted even though there was an MOU, which was signed in 2001. When we met with the installation’s hospital commander, he told us that the hospital was not participating in the single separation exam program, and he was unaware of the existence of the MOU for this program.
We also met with military officials at the Hadnot Branch Clinic, the installation’s busiest clinic in terms of separation physicals, and at the time of our review, this clinic was also not participating in the single separation exam program. Furthermore, local VA officials told us that they realized that the program was not in operation at the time of our visit—even though it was included on the list that VA updated for us. We also identified another military installation that had a single separation exam program—even though it was not included in VA’s list of installations with these programs. Regional VA officials told us—and we confirmed—that an MOU for a single separation exam program had been implemented at MacDill Air Force Base, Florida. At this installation, local military officials reported that 516 single separation exams were conducted in 2003. According to local VA and military officials, this installation employs a sequential approach wherein VA uses medical information from Air Force health assessments, as well as any diagnostic tests that may have been conducted in conjunction with them, to help complete C&P exams for servicemembers applying for VA disability compensation. As part of an overarching effort to streamline servicemembers’ transition from active duty to veteran status, VA and DOD continue to encourage the establishment of single separation exam programs and have drafted a national MOA, which is intended to supersede the 1998 MOU. Unlike the original MOU, the draft MOA contains a specific implementation goal—that VA and the military services establish single separation exam programs at each of the installations with BDD programs by December 31, 2004. The draft MOA also provides more detail about how the military services and VA will share servicemembers’ medical information to eliminate duplication of effort.
For example, the MOA states that the military services will share the medical assessment forms along with any completed medical exam reports and pertinent medical test results with VA. Similarly, the MOA specifies that when VA conducts its C&P exam of servicemembers before they leave the military, this information should be documented in servicemembers’ military medical records. According to VA officials, the draft MOA extends the eligibility period for servicemembers to participate in the program by eliminating the previous requirement that servicemembers had to have a minimum number of days—usually 60—remaining on active duty. As a result, servicemembers may participate in the program when they have 180 days or less remaining on active duty. Aside from some specific additions, the general guidance in the draft MOA is consistent with the 1998 MOU. For example, the draft MOA delegates responsibility for establishing single separation exam programs to local VA and military installations, based on the medical resources—including physicians, laboratory facilities, examination rooms, and support staff— available to conduct the exams and perform any additional testing. The MOA also continues to provide flexibility that allows local officials to determine how the exams will be delivered—by VA, by VA’s contractor, or by DOD. According to VA, the draft MOA is expected to be signed by DOD’s Under Secretary of Defense for Personnel and Readiness and the Deputy Secretary of VA in November 2004. In contrast, the 1998 MOU was signed at lower levels of leadership within each department—DOD’s Acting Assistant Secretary of Defense for Health Affairs, who reports to the Under Secretary of Defense for Personnel and Readiness, and VA’s Under Secretary for Health, who reports to the Deputy Secretary of VA. 
Both VA and DOD officials told us that endorsement of the new draft MOA by higher-level leadership within the departments should facilitate the establishment of single separation exam programs. However, it will be difficult to determine where the program needs to be implemented without accurate program information with which to oversee and monitor these efforts—a critical deficiency in light of the MOA’s ambitious goal to establish the program at all BDD sites by December 31, 2004, and given the seemingly low rate of implementation at the 139 BDD sites. Several challenges affect the establishment of single separation exam programs. The primary challenge is that the military services do not usually require servicemembers to undergo a separation exam before leaving the military. In fiscal year 2003, the military services administered separation exams to an estimated one-eighth of servicemembers who left the military. Consequently, although individual servicemembers may benefit from single separation exams, the military services may not realize resource savings from eliminating or sharing responsibility for the separation exams. Another challenge to establishing these programs is that some military officials told us that they need their resources, such as space and medical personnel, for other priorities, including ensuring the health of active duty servicemembers. Furthermore, VA officials told us that because single separation exam programs require coordination between personnel from both VA and the military services, existing programs can be difficult to maintain because of routine rotations of military staff to different installations. Despite increased convenience for individual servicemembers, the military services may not benefit from single separation exam programs—designed to eliminate the need for two separate exams—because the military services usually do not require servicemembers who are leaving the military to have separation exams.
In fiscal year 2003, the military services administered separation exams to an estimated 23,000, or one-eighth, of the servicemembers who left the military that fiscal year. However, this estimate may undercount the number of servicemembers who received separation exams. (See fig. 1.) Because the military services do not usually require separation exams, it is unlikely that servicemembers will receive physical exams from both the military and VA. At two Army installations without single separation exam programs, we found that relatively few servicemembers had received both a C&P exam from VA and a separation exam from the Army. From June 2002 through May 2004, 810 servicemembers received a VA C&P exam at Fort Gordon, and of these, 121 soldiers—about 15 percent—had also received a separation exam from the Army. Similarly, from June 2003 through May 2004, 874 servicemembers received a VA C&P exam at Fort Bragg, and of these only 38—about 4 percent—had also received a separation exam from the Army. Because the Army is the only military service to require separation exams for all retirees, we expected that the Army’s servicemembers were more likely than those of the other military services to receive two physical exams. However, the small percentage of servicemembers that received both VA C&P exams and Army separation exams at these two installations suggests that the potential for resource savings from single separation exams is likely small. In addition, some Air Force officials told us that they did not see a need to participate in single separation exam programs because of their health assessment requirements. For example, at Little Rock Air Force Base, officials told us that because the Air Force does not routinely require separation physicals for most servicemembers, it was not practical to use VA’s C&P physicals as single separation exams.
The officials explained that VA’s C&P exams obtain more information than needed to meet the Air Force’s health assessment requirement and that using VA’s exam as a single separation exam would not be an efficient use of resources. The officials said that it would take military medical personnel too much time to review the VA C&P exams to identify the information the Air Force required. Similarly, officials at other Air Force installations we visited—Hurlburt Field, Langley Air Force Base, and Eglin Air Force Base—agreed that they would not benefit from a single separation exam program. However, we did find one Air Force installation—MacDill Air Force Base—where a single separation exam program was operational, demonstrating the feasibility of Air Force installations participating in single separation exam programs. Some military officials told us that they use their installations’ resources for priorities other than establishing single separation exam programs. Although the 1998 MOU encouraged the establishment of these programs for servicemembers leaving the military and filing VA disability claims, some local military officials told us that their installations did not currently have these programs because they decided to use available resources to support other efforts, such as conducting wartime training and ensuring that active duty servicemembers are healthy enough to perform their duties. For example, when we visited Fort Bragg, we learned that the commander had initially agreed to provide space at his installation for a single separation exam program. However, the same space was committed to more than one function, and when the final allocation decision was made, other mission needs took priority.
In addition, Nebraska VA officials told us that an existing single separation exam program was eliminated at Offutt Air Force Base because military medical personnel assigned to help VA physicians administer the exams were needed to focus on the health of active duty servicemembers at the installation. In addition, military officials explained that administering single separation exams that include VA’s C&P protocols is more time intensive for their staff and can involve more testing than the military’s separation exams. As a result, military officials are reluctant to assign resources, including facilities and staff, to this effort. Further, military officials explained that expending time and resources to train military physicians to administer single separation exams is not worthwhile because these physicians periodically rotate to other locations to fulfill their active duty responsibilities so other military physicians would have to be trained as replacements. Because single separation exam programs require coordination between personnel from both VA and the military services, staff changes or turnover can make it difficult to maintain existing programs. For example, during our visit to the Army’s Fort Lee, we found that the installation’s single separation exam program had stopped operating because of staff turnover. When the program was in operation, a sequential approach was used in which Army personnel conducted the initial part of the exams, which included medical history and diagnostic testing, and then shared servicemembers’ medical records with VA personnel at the VA hospital, where the single separation exams were completed. According to VA and Army officials, after the Army personnel changed, the installation no longer provided VA with the medical records. Further, VA officials told us that maintaining joint VA and DOD programs—such as single separation exam programs—is challenged by the fact that military staff, including commanders, frequently rotate. 
According to VA officials, some commanders do not want to continue agreements made by their predecessors, so single separation exam programs must be renegotiated when the commands change. However, VA officials told us that the new draft MOA should help alleviate this challenge to program establishment because it states that local agreements between military medical facilities and VA regional offices will continue to be honored when leadership on either side changes. Since 1998, VA and DOD’s military services have attempted to establish single separation exam programs in order to prevent duplication and streamline the process for servicemembers who are leaving the military and intend to file a disability claim with VA. However, according to VA, fewer than 30 out of 139 military installations with BDD programs had single separation exam programs as of May 2004. To encourage more widespread program establishment, the departments have drafted a new national MOA with the goal of having programs in place at all BDD sites by December 31, 2004. Increasing the single separation exam program to all BDD sites will allow more servicemembers to benefit from its convenience. Yet, given the seemingly low rate of program implementation since 1998 and the challenges we identified in establishing and maintaining the program, it is unlikely that the programs will be established at about 100 more sites in less than 2 months after the MOA becomes effective. Consequently, both departments will need to monitor program implementation to ensure that the new MOA is put into practice—especially since local agreements for single separation exam programs have not always resulted in the establishment and operation of such programs. 
To determine where single separation exam programs are established and operating, we recommend that the Secretary of VA and the Secretary of Defense develop systems to monitor and track the progress of VA regional offices and military installations in implementing these programs at BDD sites. We requested comments on a draft of this report from VA and DOD. Both agencies provided written comments that are reprinted in appendices IV and V. VA and DOD concurred with the report’s findings and recommendation. DOD also provided technical comments that we incorporated where appropriate. In commenting on this draft, VA stated that it has actions underway or planned that meet the intent of our recommendation. First, it has established an inspection process of BDD sites to determine compliance with procedures. In addition, VA noted that it has worked with DOD to revise the MOA for single separation exam programs and that it has instructed its regional offices to begin working with military treatment facilities to implement its provisions. Finally, VA said that VA’s and DOD’s joint strategic plan for fiscal year 2005 will include substantive performance measures to monitor the process of moving from active duty to veteran status through a streamlined benefits delivery process. In their written comments, DOD recognized the importance of a shared DOD and VA separation process and its benefits to servicemembers and noted the fact that both departments are working on an MOA to further encourage single separation exams. DOD also stated that the capability to monitor and track the progress of single separation exams has been hampered by the lack of a shared VA and DOD information technology system. However, DOD reported that VA is developing automated reporting tools and will be doing on-site visits to BDD sites, and VA and DOD will share information gathered from this system and site visits. 
We are sending copies of this report to the Secretary of Defense, the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7119. Other contacts and staff acknowledgments are listed in appendix VI. To identify efforts by the Department of Veterans Affairs (VA) and the military services to establish single separation exam programs for servicemembers who plan to file VA disability claims, we reviewed pertinent legislation and obtained VA’s requirements for compensation and pension (C&P) exams. We also obtained service-specific requirements for periodic physical exams and health assessments and evaluations, especially those requirements pertaining to separating and retiring servicemembers. We obtained and reviewed relevant documentation about both departments’ efforts to establish single separation exam programs. We also interviewed officials from the office of the Assistant Secretary of Defense for Health Affairs, the military services’ Surgeons General, and VA. In addition, we obtained VA’s data on the number of disability claims and the cost data associated with conducting military physical exams and VA C&P exams. Based on our review of these data and subsequent discussions with agency officials, we determined that these data were sufficiently reliable for the purposes of this report. We obtained a list of 28 military installations that VA officials had identified as having single separation exam programs through a survey of their Benefits Delivery at Discharge (BDD) sites. We used this list to select 8 installations to learn how their programs operated. 
We did not verify whether the remaining 20 installations had single separation exam programs because such verification would have required a full evaluation of actual program operations at these locations. We also did not verify the number of installations with BDD sites or the numbers of single separation exams VA reported for these military installations. We selected installations that represented each of VA’s reported approaches for operating the single exam program—VA physicians conducting the exam at military installations, VA physicians conducting the exam at VA medical centers, Department of Defense (DOD) physicians conducting the exam, VA and DOD using a sequential approach for the exam, and VA’s civilian contractors delivering the exam. The installations we selected represented each of the four branches of the military service—Army, Navy, Air Force, and Marines—and all but one had more than 500 servicemembers leave in fiscal year 2003. We obtained the separation data from the Defense Manpower Data Centers’ (DMDC) Active Duty Military Personnel file on the number of servicemembers who left the military from various separation locations during fiscal year 2003. To assess the reliability of these data, we conducted logic tests to identify inconsistencies, reviewed existing information about it and the system that produced it, and interviewed an agency official who was knowledgeable about the data. We determined the data to be sufficiently reliable for the purposes of this report. From VA’s list we visited seven military installations—Marine Corps Base Camp Lejeune, North Carolina; Fort Eustis, Virginia; Fort Lee, Virginia; Fort Stewart, Georgia; Little Rock Air Force Base, Arkansas; Naval Station Mayport, Florida; and Pope Air Force Base, North Carolina. We also conducted telephone interviews with medical command and VA officials associated with Ft. Drum, New York. 
Further, we conducted a telephone interview with military and VA officials from MacDill Air Force Base, Florida, which has a single separation exam program but was not on VA’s list. At the installations we visited or contacted, we spoke with medical command officials and with VA officials responsible for the single separation exam program to discuss the different types of local agreements and procedures used for delivering single separation exams. We also reviewed the draft memorandum of agreement (MOA) related to single separation exam programs and interviewed officials from VA, the Office of the Assistant Secretary of Defense for Health Affairs, and the services’ Surgeons General to obtain information on VA and DOD officials’ efforts to draft and implement this MOA. To obtain information on the challenges associated with establishing single separation exam programs, we identified and visited military installations that did not have single separation exam programs. We used DMDC’s separation data for fiscal year 2003 to identify installations representing each of the military services—Army, Navy, Air Force, and Marines—that had more than 500 separations and were not reported by VA as having a single separation exam program. We also visited installations that were located in the same VA regions as installations we visited that VA had reported as having single separation exam programs. The seven military installations we visited were Marine Corps Air Station Cherry Point, North Carolina; Eglin Air Force Base, Florida; Fort Bragg, North Carolina; Fort Gordon, Georgia; Hurlburt Field, Florida; Langley Air Force Base, Virginia; and Naval Station Norfolk, Virginia. At these installations, we interviewed medical command officials and VA officials to learn whether single separation exam programs had been considered and what the challenges were to establishing them. 
For the two Army installations included in these seven selected installations—Fort Bragg, North Carolina and Fort Gordon, Georgia—we obtained both the separation exam data and C&P exam data for each installation to determine how many separating servicemembers from each installation received both an Army separation exam and a VA C&P exam. We chose Army installations for this analysis because duplicate service and C&P exams were more likely to occur due to the Army’s requirement that retirees receive a physical exam. After our review of the documentation and subsequent discussions with agency officials, we concluded that these data were sufficiently reliable for the purposes of this report. We also reviewed DOD’s separation exam data and discussed it with an agency official. Based on this information, we concluded that these data were sufficiently reliable for the purposes of this report although it may understate the number of separation exams because some may have been identified more generally as physical exams. To obtain additional information on the challenges to establishing single separation exam programs, we called or visited VA regional offices in 16 locations—Arkansas, California (three regions), Georgia, Florida, Kentucky, Nebraska, New York, North Carolina, Oklahoma, South Carolina, Texas (two regions), Virginia, and Washington—and talked with officials responsible for initiating and implementing these programs. We selected six of these regional offices because they were already involved in establishing single separation exam programs at the eight military installations we selected from VA’s list. We asked these officials about the challenges they encountered when trying to establish these programs at other installations in their regions. We also interviewed officials from the three VA regional offices involved in the pilot program for single separation exams. 
We talked with officials from seven additional regional offices that had responsibility for military installations with more than 500 separations during fiscal year 2003 to determine how they established programs in their regions and problems they encountered when programs could not be established. We performed our work from January 2004 through November 2004 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the VA November 1, 2004, letter. 1. We used VA’s May 2004 updated list to select our sites, and we found that it contained information that was both incomplete and inaccurate. The list included installations where we did not find single separation exam programs. It also omitted one installation where we found a single separation exam program. 2. We agree that individual servicemembers will benefit from single separation exam programs and have added information to the body of the report to reflect this. 3. We modified this statement as follows: “In general, VA’s C&P exam is more comprehensive and detailed than the military services’ separation exams, as military service exams are intended to document continued fitness for duty, whereas the purpose of the VA C&P exam is to document disability or loss of function regardless of its impact on fitness for duty.” 4. Although VA believed the C&P exam was being used for separation purposes at Pope Air Force Base, it was not. As we reported, VA and DOD had not signed an MOU for a single separation exam program at this installation, and the Air Force was clear that it was not using the C&P exam for separation purposes. 5. While Camp Lejeune’s Hadnot Branch Clinic may currently be conducting single separation exams, at the time of our visit in June 2004, the physician at the Hadnot Clinic told us he was not using VA’s C&P exams for servicemembers’ separation exams. In September 2004, we confirmed this information with the clinic physician. 
In addition to those named above, key contributors to this report were Krister Friday, Cywandra King, Raj Premakumar, Allan Richardson, and Julianna Williams.
Servicemembers who leave the military and file disability claims with the Department of Veterans Affairs (VA) may be subject to potentially duplicative physical exams in order to meet requirements of both the Department of Defense's (DOD) military services and VA. To streamline the process for these servicemembers, the military services and VA have attempted to coordinate their physical exam requirements by developing a single separation exam program. In 1998, VA and DOD signed a memorandum of understanding (MOU) instructing local units to establish single separation exam programs. This report examines (1) VA's and the military services' efforts to establish single separation exam programs, and (2) the challenges to establishing single separation exam programs. To obtain this information, GAO interviewed VA and military service officials about establishing the program; evaluated existing programs at selected military installations; and visited selected installations that did not have programs. Since 1998, VA and the military services have collaborated to establish single separation exam programs. However, while we were able to verify that the program was being delivered at some military installations, DOD, its military services, and VA either could not provide information on program locations or provided us with inaccurate information. As of May 2004, VA reported that 28 military installations had single separation exam programs that used one of five basic approaches to deliver an exam that met both VA's and the military services' requirements. However, when we evaluated 8 of the 28 installations, we found that 4 of the installations did not actually have programs in place. Nonetheless, VA and DOD leadership continue to encourage the establishment of single separation exam programs and have recently drafted a new memorandum of agreement (MOA) that is intended to replace the 1998 MOU. 
Like the original MOU, the draft MOA delegates responsibility for establishing single separation exam programs to local VA and military installations, depending on available resources. However, the draft MOA also contains a specific implementation goal that selected military installations should have single separation exam programs in place by December 31, 2004. This would require implementation at 139 installations--an ambitious plan given the seemingly low rate of program implementation since 1998 and the lack of accurate information on existing programs. Several challenges impede the establishment of single separation exam programs. The predominant challenge is that the military services may not benefit from a program designed to eliminate the need for two separate physical exams because they usually do not require that servicemembers receive a separation exam. As of August 2004, only the Army had a general separation exam requirement for retiring servicemembers. The other military services primarily require separation exams when the servicemember's last physical exam or medical assessment received during active duty is no longer considered current. In fiscal year 2003, only an estimated 13 percent of servicemembers who left the military received a separation exam. Consequently, the military services may not realize resource savings by eliminating or sharing responsibility for this exam. According to some military officials, another challenge to establishing single separation exam programs is that resources, such as facility space and medical personnel, are needed for other priorities, such as ensuring that active duty servicemembers are healthy enough to perform their duties. Additionally, because single separation exam programs require coordination between personnel from both VA and the military services, military staff changes, including those due to routine rotations, can make it difficult to maintain existing programs.
The National Flood Insurance Act of 1968 established NFIP as an alternative to providing direct disaster relief after floods. NFIP, which makes federally backed flood insurance available to residential property owners and businesses, was intended to reduce the government’s escalating costs for repairing flood damage. Floods are the most common and destructive natural disaster in the United States; however, homeowners’ insurance generally excludes flooding. Because of the catastrophic nature of flooding and the inability to adequately predict flood risks, private insurance companies historically have been largely unwilling to underwrite and bear the risk resulting from providing primary flood insurance coverage. Under NFIP, the federal government assumes the liability for the insurance coverage and sets rates and coverage limitations, while the private insurance industry sells the policies and administers the claims. NFIP offers two types of flood insurance premiums to property owners who live in participating communities: subsidized and full-risk. The National Flood Insurance Act of 1968 authorized NFIP to offer subsidized premiums to owners of certain properties. These subsidized rates are not based on flood risk and, according to FEMA, represent only about 40-45 percent of the full flood risk. Congress originally mandated the use of subsidized premiums to encourage communities to join the program and mitigate concerns that charging rates that fully and accurately reflected flood risk would be burdensome to some property owners. Even with highly discounted rates, subsidized premiums are, on average, higher than full-risk premiums. The premiums are higher because subsidized structures built before Flood Insurance Rate Maps (FIRM) became available generally are more prone to flooding (that is, riskier) than other structures. 
In general, pre-FIRM properties were not constructed according to the program’s building standards or were built without regard to base flood elevation—the level relative to mean sea level at which there is a 1 percent or greater chance of flooding in a given year. Potential policyholders can purchase flood insurance to cover both buildings and contents for residential and commercial properties. NFIP’s maximum coverage for residential policyholders is $250,000 for building property and $100,000 for contents. This coverage includes replacement value of the building and its foundation, electrical and plumbing systems, central air and heating, furnaces and water heater, and equipment considered part of the overall structure of the building. Personal property coverage includes clothing, furniture, and portable electronic equipment. For commercial policyholders, the maximum coverage is $500,000 per unit for buildings and $500,000 for contents (for items similar to those covered under residential policies). NFIP largely has relied on the private insurance industry to sell and service policies. In 1983, FEMA established the Write-Your-Own (WYO) program. Private insurers become WYOs by entering into an arrangement with FEMA to issue flood policies in their own name. WYOs adjust flood claims and settle, pay, and defend claims but assume no flood risk. Insurance agents from these companies are the main point of contact for most policyholders. WYOs issue policies, collect premiums, deduct an allowance for commission and operating expenses from the premiums, and remit the balance to NFIP. In most cases, insurance companies hire subcontractors—flood insurance vendors—to conduct some or all of the day-to-day processing and management of flood insurance policies. When flood losses occur, policyholders report them to their insurance agents, who notify the WYOs. The companies review the claims and process approved claims for payment. 
FEMA reimburses the WYOs for the amount of the claims plus expenses for adjusting and processing the claims, using rates that FEMA establishes. As of September 2012, about 85 WYOs accounted for about 85 percent of the more than 5.5 million policies in force. NFIP was added to GAO’s High-Risk List in 2006 due to losses from the 2005 hurricanes and the financial exposure the program created for the federal government. Until 2004, NFIP was able to cover most of its claims with premiums it collected and occasional loans from the U.S. Treasury (Treasury) that it repaid. However, after the 2005 hurricanes— primarily Hurricane Katrina—the program borrowed $16.8 billion from Treasury to cover the unprecedented number of claims. In prior work we found that NFIP, as it was then structured, was not likely to generate sufficient revenues to repay this amount. NFIP since has received additional borrowing authority in the amount of $9.7 billion to cover claims for Superstorm Sandy. As of July 31, 2013, the program owed Treasury approximately $24 billion. NFIP’s financial condition highlights structural weaknesses in program funding—primarily its rate structure. By design, NFIP does not operate for profit. Instead, the program must meet a public policy goal—to provide flood insurance in flood-prone areas to property owners who otherwise would not be able to obtain it. NFIP generally is expected to cover its claim payments and operating expenses with the premiums it collects. However, subsidized policies have been a financial burden on the program because of their relatively high losses and premium rates that are not actuarially based. As discussed previously, subsidized policies are associated with structures more prone to flood damage (either because of the way they were built or their location). 
As a result, the annual amount that NFIP collects in both full-risk and subsidized premiums is generally not enough to cover its operating costs, claim payments, and principal and interest payments to Treasury, especially in years of catastrophic flooding. This arrangement results in much of the financial risk of flooding being transferred to the federal government and ultimately the taxpayer. The Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act) addressed some of the structural challenges that have contributed to the program’s financial instability. For example, new flood insurance policies will not receive subsidized premium rates, subsidies on existing policies for many other properties will be phased out, and policies for properties that are remapped to a higher risk level will be subject to higher premium rates. In addition, the Biggert-Waters Act requires FEMA to implement other changes to its rate-setting process, including building a reserve fund and updating maps used to set rates to reflect relevant information on topography, long-term erosion of shorelines, future changes in sea levels, and the intensity of hurricanes. While these changes may help increase NFIP’s long-term financial stability, the program still faces challenges in implementing the changes and their ultimate effect is not yet known. Furthermore, weaknesses in NFIP management and operations, including financial reporting processes and internal controls, strategic and human capital planning, and oversight of contractors, also have placed the program at risk. For example, in 2011 we found that FEMA had not developed goals, objectives, or performance measures for NFIP. In addition, FEMA faces challenges modernizing NFIP’s insurance policy and claims management system. 
As a result, we made recommendations to improve the effectiveness of FEMA’s planning and oversight efforts for NFIP; improve FEMA’s policies and procedures for achieving NFIP’s goals; and increase the usefulness and reliability of NFIP’s flood insurance policy and claims processing system. While FEMA agreed with our recommendations and has taken some steps to address them, continued attention to these issues is vital and additional steps are needed to address the concerns we have identified in the past. The Biggert-Waters Act mandates that GAO conduct a number of studies related to actual and potential changes to NFIP, including analyses of remaining subsidized properties, and the effect of increasing coverage limits or adding coverage options. In one of our studies responding to these mandates, an analysis of remaining subsidized properties, we estimated that with the changes in the Biggert-Waters Act approximately 438,000 policies are no longer eligible for subsidies, including about 345,000 nonprimary residential policies, about 87,000 business policies, and about 9,000 single-family, severe-repetitive-loss policies. Subsidies on the approximately 715,000 remaining subsidized policies are expected to be eliminated over time. Under the act, most remaining subsidized policies no longer would be eligible for subsidies if NFIP coverage lapsed or the properties were sold or substantially damaged. We estimated that with implementation of the provisions addressing sales and coverage lapses, the number of subsidized policies could decline by almost 14 percent per year. At that rate, the number of subsidized policies would be reduced by 50 percent in approximately 5 years. After about 14 years, fewer than 100,000 subsidized policies would remain. However, the actual outcomes and time required for subsidies to be reduced could vary depending on the behavior of policyholders and the actual rate of sales and coverage lapses. 
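The projected decline described above is compound attrition. A minimal sketch follows, assuming the report's estimated 14 percent annual decline applies uniformly each year to the roughly 715,000 remaining subsidized policies; as the report notes, actual attrition would vary with the rate of property sales and coverage lapses.

```python
# Compound attrition of subsidized policies, per the report's estimate.
# Assumptions: 715,000 starting policies; constant 14 percent annual decline.
STARTING_POLICIES = 715_000
ANNUAL_DECLINE = 0.14

remaining = STARTING_POLICIES
for year in range(1, 15):
    # Each year, 14 percent of the remaining subsidized policies lose
    # their subsidy through sales or coverage lapses.
    remaining *= (1 - ANNUAL_DECLINE)
    if year in (5, 14):
        print(f"Year {year}: about {remaining:,.0f} subsidized policies remain")
```

Running this confirms the report's arithmetic: after 5 years fewer than half of the 715,000 policies remain, and after 14 years fewer than 100,000 remain.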
In terms of characteristics, we found that the geographic distribution of remaining subsidized policies was similar to the distribution of all NFIP policies. Other characteristics we analyzed— indicators of home value and owner income—were different for the policies that continue to be eligible for subsidized premium rates compared to those with full-risk rates. In particular, counties with higher home values and income levels tended to have larger percentages of remaining subsidized policies than policies with full-risk rates. In our July 2013 report on subsidized policies, we identified three broad options that could help address the financial impact of remaining subsidized policies on the program, but the advantages and disadvantages of each would need to be considered and action would be required from both Congress and FEMA. These options are not mutually exclusive and may be used together to reduce the financial impact of subsidized policies on NFIP. The way in which an option is implemented (such as more aggressively or gradually) also can produce different effects in terms of policy goals and thus change the advantages and disadvantages. Adjust the pace of eliminating subsidies. Accelerating the elimination of subsidies could improve NFIP’s financial stability by more quickly increasing the number of policies with premium rates that more accurately reflect the full risk of flooding, but could exacerbate the difficulty some policyholders may have in adjusting to new rates. In contrast, delaying the elimination of subsidized policies or lengthening the phase-in period would continue to expose the federal government to increased financial risk over a longer time. Moreover, delaying the elimination of subsidies would not represent a long-term fix for those policyholders who could not afford the new premium rates, whenever they came into effect. Target assistance for remaining subsidies. 
Assistance or a subsidy could be based on the financial need of the property owners, which could help ensure that only those policyholders needing the subsidy would have access to it and retain their coverage, with the rest paying full-risk rates. Targeting subsidies based on need—through a means test, for example—is an approach other federal programs use. However, NFIP does not currently collect the policyholder data required to assess need and determine eligibility and it could be difficult for FEMA to develop and administer such an assistance program in the midst of ongoing management challenges. Moreover, unlike other agencies that provide—and are allocated funds for—traditional subsidies, NFIP does not receive an appropriation to pay for shortfalls in collected premiums caused by its subsidized rates. One approach to maintain subsidies but improve NFIP’s financial stability would be to rate all policies at the full-risk rate and appropriate subsidies for eligible policyholders. Expand mitigation efforts such as elevation, relocation, and demolition of properties. This would include making mitigation mandatory to ensure that more homes were better protected. Mitigation efforts could be used to help reduce or eliminate the long-term risk of flood damage, especially if FEMA targeted the properties that were most costly to the program, such as those with repetitive losses. However, mitigation is expensive for NFIP, taxpayers, and communities. In our October 2008 study of NFIP’s rate-setting, we found that the losses generated by NFIP have created substantial financial exposure for the federal government and U.S. taxpayers—due in part to the program’s rate-setting process. We also found that FEMA’s rate-setting methods, even for full-risk rates, do not result in rates that accurately reflect flood risks. 
For example, FEMA’s rate-setting process does not fully take into account ongoing and planned development, long-term trends in erosion, or the effects of global climate change. Furthermore, FEMA sets rates on a nationwide basis, combining and averaging many topographic factors that are relevant to flood risks, and does not specifically account for these factors when setting rates for individual properties. Partly because of the rate-setting issues, in our July 2013 report on raising coverage limits or adding optional coverage types, we found that the advantages and disadvantages to making more changes to the program, such as these, would need to be carefully weighed. To determine the financial impact on NFIP of increasing coverage limits, we estimated the potential financial effect on NFIP if coverage limits had been raised in 2002–2011. Higher coverage limits would have been associated with increased net revenue in all fiscal years from 2002 through 2011, except for fiscal years 2004 and 2005 when the program experienced catastrophic losses. The overall results were the same when we conducted the analyses using variations in our assumptions to (1) decrease the premiums by 20 percent below the baseline estimate; (2) decrease the claims by 20 percent below the baseline estimate; and (3) estimate that only 25 percent, 50 percent, or 75 percent of all policyholders increased their coverage. Overall, the financial impact on the program of raising coverage limits would depend on the adequacy of the rates charged for the additional coverage. We also found that adding business interruption coverage to NFIP could be particularly challenging. For example, properly pricing risk, underwriting, and claim processing can be complex. Similarly, offering optional coverage for additional living expenses would have many of the same potential effects on NFIP, although this coverage generally is less complex to administer. 
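The sensitivity analysis described above can be sketched in a few lines. This is an illustrative simplification, not GAO's actual model; the dollar figures and the `net_revenue` function are hypothetical stand-ins used only to show how the three variations (lower premiums, lower claims, partial uptake) alter estimated net revenue from additional coverage.

```python
def net_revenue(premiums, claims, premium_mult=1.0, claim_mult=1.0, uptake=1.0):
    """Illustrative net revenue from additional coverage under one scenario.

    premiums, claims: baseline estimates for the added coverage (hypothetical).
    premium_mult, claim_mult: scale factors for the sensitivity variations.
    uptake: share of policyholders assumed to buy the added coverage.
    """
    return uptake * (premium_mult * premiums - claim_mult * claims)

# Baseline and the variations described in the text (hypothetical dollar amounts).
baseline = net_revenue(100.0, 80.0)                         # baseline estimate
low_premiums = net_revenue(100.0, 80.0, premium_mult=0.8)   # premiums 20% below baseline
low_claims = net_revenue(100.0, 80.0, claim_mult=0.8)       # claims 20% below baseline
partial = [net_revenue(100.0, 80.0, uptake=u) for u in (0.25, 0.50, 0.75)]
```

Under these made-up numbers, net revenue stays nonnegative in every scenario, mirroring the report's finding that results held across the assumption variations; in a catastrophic-loss year (claims exceeding premiums), the same function would turn negative.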
In July 2013, we reported that FEMA will require several years to fully implement the Biggert-Waters Act and FEMA officials acknowledged that they have data limitations and other challenges to resolve before eliminating some subsidies as required in the act. The following points highlight some of the challenges we identified: The act eliminated subsidies for residential policies that covered nonprimary residences and business policies. FEMA has data on whether a policy covers a primary residence, but officials stated that the data may be outdated or incorrect. In addition, FEMA categorizes policies as residential and nonresidential rather than residential and business. As a result, FEMA does not have the information to identify nonresidential properties such as schools or churches that are not businesses and continue to be eligible for a subsidy. Beginning in October 2013, FEMA will require applicants for new policies and renewals to provide property status (residential or business). The act states that subsidies will be eliminated for policies that have received cumulative payment amounts for flood-related damage that equaled or exceeded the fair market value of the properties, and for policies that experience damage exceeding 50 percent of the fair market value of properties after enactment. Currently, FEMA is unable to make this determination as it does not maintain data on the fair market value of properties insured by subsidized policies. FEMA officials said that they have been in the process of identifying a data source. The act eliminates subsidies for severe repetitive loss policies and provides a definition of severe repetitive loss for single-family homes. However, it requires FEMA to define severe repetitive loss for multifamily properties and FEMA has not yet developed this definition. 
The act also requires FEMA to phase in full-risk rates on active policies that no longer are eligible for subsidies, but we found that FEMA generally lacks information needed to establish full-risk rates that reflect flood risk for the properties involved and also lacks a plan for proactively obtaining such information. Federal internal control standards state that agencies should identify and analyze risks associated with achieving program objectives, and use this information as a basis for developing a plan for mitigating the risks. In addition, these standards state that agencies should identify and obtain relevant and needed data to be able to meet program goals. However, in July 2013 we reported that FEMA does not have key information used in determining full-risk rates from all policyholders. According to FEMA officials, not all policyholders have elevation certificates, which document their property’s risk of flooding. Information about elevation is a key element in establishing premium rates on certain properties. Elevation certificates are required for some properties, but optional for others. According to FEMA officials, consistent with the act they are phasing in rate increases (of 25 percent per year) for policyholders who no longer are eligible for subsidies. The increase will continue until the rates reach a specific level or until policyholders supply an elevation certificate that indicates the property’s risk, allowing FEMA to determine the full-risk rate. Although subsidized policies have been identified as a risk to the program because of the financial drain they represent, FEMA does not have a plan to expeditiously and proactively obtain the information needed to set full-risk rates for all of them. Instead, FEMA will rely on certain policyholders to voluntarily obtain elevation certificates, which can be expensive for the property owner. Those at lower risk levels have an incentive to do so because they may then be eligible for lower rates.
However, policyholders may not know their risk level, and policyholders with higher risk levels have a disincentive to voluntarily obtain an elevation certificate because they then could pay a higher premium. In our July 2013 report, we concluded that without a plan to expeditiously obtain property-level elevation information, FEMA will continue to lack basic information needed to accurately determine flood risk and will continue to base full-risk rate increases for previously subsidized policies on limited estimates. As a result, FEMA’s phased-in rates for previously subsidized policies still may not reflect a property’s full risk of flooding, with some policyholders paying premiums that are below and others paying premiums that exceed full-risk rates. We recommended that FEMA develop and implement a plan, including a timeline, to obtain needed elevation information as soon as practicable. FEMA agreed with this recommendation and plans to evaluate the appropriate approach to obtain or require the submittal of this information. The Biggert-Waters Act also requires a number of other changes that the agency has been starting to implement. For example, FEMA must adjust rates to accurately reflect the current risk of flood to properties when an area’s flood map is changed, subject to any other statutory provision in chapter 50 of Title 42 of the United States Code (42 U.S.C. § 4015(e)). As of mid-2013, FEMA has been determining how this provision would affect properties that were exempted from rate increases when they were remapped; any resulting premium increases are to be phased in (as the agency deems appropriate) over a number of years beginning October 1, 2013. We continue to monitor the status of FEMA’s actions related to recommendations we have made in prior reports. In 2008, we recommended that FEMA develop a rate-setting methodology that uses data that result in full-risk premiums that accurately reflect the risk of losses from flooding, taking into account the effects of long-term planned and ongoing development, including climate change.
In response to our continued support of this recommendation as well as requirements in the Biggert-Waters Act, FEMA officials stated that they have made progress. For example, FEMA stated that it already has revised damage calculations for flooding events that only reach the foundation of the structure and performed a study to assess the long-term impacts of climate change. FEMA’s ongoing efforts include analyzing water-depth probability curves for the various zones and piloting studies to determine structure elevation and flood depths for various return periods (GAO-09-12). In response to our recommendations on payments to write-your-own (WYO) insurance companies, FEMA has begun obtaining WYOs’ flood-related expense information reported to the National Association of Insurance Commissioners (NAIC) and conducting other analyses to ensure that WYOs accurately report this information. However, FEMA officials stated that the agency cannot take action that completely addresses our recommendations until the WYOs reliably report to NAIC and that it might take several years before all companies consistently report such information. The agency also has been considering how best to introduce the WYOs’ actual flood-related expenses into payment formulas over the next several years, when FEMA expects to have more reliable financial information and less variation in reported expense ratios. In 2011, we recommended that FEMA improve strategic planning, performance management, and program oversight within and related to NFIP. FEMA agreed with our recommendations and has addressed some of them, such as strategic planning, but it still needs to continue to address the management and operational weaknesses we identified, including human capital planning, acquisition management, policy and claims management systems, financial management, collaboration, and records management. Unless these management issues are addressed, FEMA risks ongoing challenges in effectively and efficiently managing NFIP, including its management and use of data and technology.
In conclusion, when we placed NFIP on the high-risk list in 2006, we noted that comprehensive reform likely would be needed to address the financial challenges facing the program. Since passage of the Biggert-Waters Act, FEMA is taking some important first steps toward implementing the reforms the act requires, but the extent to which the changes included in the act and FEMA’s implementation will reduce the financial exposure created by the program is not clear and the program’s long-term financial condition is not yet assured. In addition, our previous work has identified many of the necessary actions that FEMA should take to address a number of ongoing challenges in managing and administering the program. Getting NFIP on a sound footing, both financially and operationally, is important to achieving its goals and at the same time reducing its burden on the taxpayer. Chairman Merkley, Ranking Member Heller, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other staff who made key contributions to this testimony include Jill Naamane and Patrick Ward (Assistant Directors); Isidro Gomez; Karen Jarzynka-Hernandez; Barbara Roesmann; Rhonda Rose; and Jessica Sandler. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
NFIP, established in 1968, provides policyholders with insurance coverage for flood damage. FEMA, within the Department of Homeland Security, is responsible for managing the program. NFIP offers two types of flood insurance premiums to property owners: subsidized and full-risk. The subsidized rates are not based on flood risk and, according to FEMA, represent only about 40-45 percent of the full flood risk. GAO placed NFIP on its high-risk list in 2006 because of concerns about its long-term solvency and related operational issues. GAO was asked to testify about NFIP issues and its recent work on NFIP. This statement discusses (1) the reasons that NFIP is considered high-risk, (2) changes to subsidized policies and implications of potential additional program changes, and (3) additional challenges for FEMA to address. In preparing this statement, GAO relied on its past work on NFIP, including GAO-13-607, GAO-13-568, and GAO-13-283. The National Flood Insurance Program (NFIP) was added to GAO's high-risk list in 2006 and remains high risk due to losses incurred from the 2005 hurricanes and subsequent losses, the financial exposure the program represents for the federal government, and ongoing management and operational challenges. As of July 31, 2013, the program owed approximately $24 billion to the U.S. Treasury (Treasury). NFIP's financial condition highlights structural weaknesses in how the program has been funded—primarily its rate structure. The annual amount that NFIP collects in both full-risk and subsidized premiums is generally not enough to cover its operating costs, claim payments, and principal and interest payments for the debt owed to Treasury, especially in years of catastrophic flooding, such as 2005. This arrangement results in much of the financial risk of flooding being transferred to the federal government and ultimately the taxpayer.
Furthermore, weaknesses in NFIP management and operations, including financial reporting processes and internal controls, strategic and human capital planning, and oversight of contractors have placed the program at risk. The Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act) mandated that GAO conduct a number of studies related to actual and potential changes to NFIP, including analyses of remaining subsidies and the effect of increasing coverage limits or adding coverage options. In a study of remaining subsidies, GAO estimated that with the changes in the Biggert-Waters Act approximately 438,000 policies no longer are eligible for subsidies, including about 345,000 policies for nonprimary residences, about 87,000 business policies, and about 9,000 policies for single-family properties that had severe repetitive losses. Subsidies on most of the approximately 715,000 remaining subsidized policies are expected to be eliminated over time as properties are sold or coverage lapses, as are previous exemptions from rate increases after flood zone map revisions. Reducing the financial impact of remaining subsidized policies on NFIP generally could involve accelerating elimination of subsidies, targeting assistance for subsidies, or expanding mitigation efforts, or some combination. Each approach has advantages and disadvantages. In GAO's 2008 study about rate-setting, GAO noted that the losses generated by NFIP have created substantial financial exposure for the federal government and U.S. taxpayers—due in part to its rate-setting process. Partly because of these rate-setting issues, GAO concluded in a July 2013 report that the advantages and disadvantages of additional changes to the program, such as raising coverage limits or adding optional coverage types, would need to be carefully weighed. The Federal Emergency Management Agency (FEMA) will require several years to fully implement the Biggert-Waters Act.
FEMA officials acknowledged that they have challenges to resolve. These include updating and correcting information on whether a policy is for a primary or secondary residence, determining the fair market value of insured properties, and developing a definition of severe repetitive loss for multifamily properties. Further, FEMA must establish full-risk rates that reflect flood risk for active policies that no longer are eligible for subsidies, but it does not have a plan to do so. In an effort to update payment formulas to insurance companies, as GAO recommended, FEMA has begun receiving actual flood-related information from some insurance companies, but not all companies are reporting the information consistently. GAO continues to support its previous recommendations made to FEMA that focus on the need to address management and operational challenges, ensure that the methods and data used to set NFIP rates accurately reflect the risk of losses from flooding, and strengthen oversight of NFIP and the insurance companies responsible for selling and servicing flood policies. FEMA agreed with these recommendations and is taking steps to address them.
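As a rough illustration of the phase-in mechanics described above, the sketch below counts how many 25-percent annual increases it takes for a subsidized premium (about 40-45 percent of the full-risk rate, per FEMA) to reach the full-risk level. It is a deliberate simplification that ignores fees, caps, and FEMA's actual rate tables; the function name and the idea of expressing premiums as a fraction of the full-risk rate are assumptions made for illustration.

```python
def years_to_full_risk(subsidized_fraction, annual_increase=0.25):
    """Count 25%-per-year increases until the premium reaches the full-risk rate.

    subsidized_fraction: starting premium as a fraction of the full-risk rate.
    """
    premium, years = subsidized_fraction, 0
    while premium < 1.0:  # 1.0 = the full-risk rate
        premium *= 1 + annual_increase
        years += 1
    return years

# A policy starting at 40% of full risk needs 5 increases; one at 45% needs 4.
```

Under these simplified assumptions, previously subsidized policies reach full-risk levels within roughly four to five years of compounding 25-percent increases, which is why the act's phase-in still leaves a multiyear window of below-full-risk premiums.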
The Immigration Reform and Control Act of 1986 created the VWP as a pilot program, and the Visa Waiver Permanent Program Act permanently established the program in October 2000. The program’s purpose is to facilitate the legitimate travel of visitors for business or tourism. By providing visa-free travel to the United States, the program is intended to boost international business and tourism, as well as airline revenues, and create substantial economic benefits to the United States. Moreover, the program allows State to allocate more resources to visa-issuing posts in countries with higher risk applicant pools. In November 2002, Congress passed the Homeland Security Act of 2002, which established DHS and gave it responsibility for establishing visa policy, including policy for the VWP. Previously, Justice had overall responsibility for managing the program. In July 2004, DHS created the Visa Waiver Program Oversight Unit within the Office of International Enforcement and directed that unit to oversee VWP activities and monitor participating VWP countries’ adherence to the program’s statutory and policy requirements. In September 2007, the office was renamed the Visa Waiver Program Office. To help fulfill its responsibilities, DHS established an interagency working group comprising representatives from State, Justice, and several DHS component agencies and offices, including U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement. Since the attacks on the United States on September 11, 2001, Congress has passed several other laws to strengthen border security policies and procedures. For example, the Enhanced Border Security and Visa Entry Reform Act of 2002 increased the frequency—from once every 5 years to at least once every 2 years—of mandated assessments of the effect of each country’s continued participation in the VWP on U.S. 
security, law enforcement, and immigration interests. The 9/11 Act also added security requirements for all VWP countries, such as the requirement that countries enter into an agreement with the United States to share information on whether citizens and nationals of that country traveling to the United States represent a threat to the security or welfare of the United States or U.S. citizens. When the Visa Waiver Pilot Program was established in 1986, participation was limited to eight countries. Since then, the VWP has expanded to 36 countries. Figure 1 shows the locations of the current member countries. To qualify for the VWP, a country must offer reciprocal visa-free travel privileges to U.S. citizens; have had a refusal rate of less than 3 percent for the previous fiscal year for its nationals who apply for business and tourism visas; issue machine-readable passports to its citizens; enter into an agreement with the United States to report or make available through Interpol or other means as designated by the Secretary of Homeland Security information about the theft or loss of passports; accept the repatriation of any citizen, former citizen, or national against whom a final order of removal is issued no later than 3 weeks after the order is issued; enter into an agreement with the United States to share information regarding whether citizens and nationals of that country traveling to the United States represent a threat to U.S. security or welfare; and be determined not to compromise the law enforcement (including immigration enforcement) or security interests of the United States by its inclusion in the program. In addition, all passports issued after October 26, 2005, must contain a digital photograph in the document for travel to the United States under the program, and passports issued after October 26, 2006, must be e-passports that are tamper-resistant and incorporate a biometric identifier. 
Nationals from countries that have joined the VWP since 2008 must use e-passports in order to travel under the VWP. Effective July 1, 2009, all emergency or temporary passports must be e-passports as well for use under the VWP. To be eligible to travel without a visa under the program, nationals of VWP countries must have received an authorization to travel under the VWP through ESTA; have a valid passport issued by the participating country and be a national of that country; seek entry for 90 days or less as a temporary visitor for business or pleasure; have been determined by CBP at the U.S. port of entry to represent no threat to the welfare, health, safety, or security of the United States; have complied with conditions of any previous admission under the program (for example, individuals must not have overstayed the 90-day limit during prior visits under the VWP); if entering by air or sea, possess a return trip ticket to any foreign destination issued by a carrier that has signed an agreement with the U.S. government to participate in the program, and must have arrived in the United States aboard such a carrier; and if entering by land, have proof of financial solvency and a domicile abroad to which they intend to return. Travelers who do not meet these requirements are required to obtain a visa from a U.S. embassy or consulate overseas before traveling to the United States. Unlike visa holders, VWP travelers generally may not apply for a change in status or an extension of the allowed period of stay. Individuals who have been refused admission to the United States previously must also apply for a visa. VWP travelers waive their right to review or appeal a CBP officer’s decision regarding their admissibility at the port of entry or to contest any action for removal, other than on the basis of an application for asylum. 
DHS has implemented ESTA to meet the 9/11 Act requirement intended to enhance program security and has taken steps to minimize the burden on travelers to the United States added by the new requirement, but it has not fully analyzed the risks of carrier and passenger noncompliance with the requirement. DHS developed ESTA to collect passenger data and complete security checks on the data before passengers board a U.S.-bound carrier. In developing and implementing ESTA, DHS took several steps to minimize the burden associated with ESTA use. For example, ESTA reduced the requirement that passengers provide biographical information to DHS officials from every trip to once every 2 years. In addition, because of ESTA, DHS has informed passengers who do not qualify for VWP travel that they need to apply for a visa before they travel to the United States. Moreover, most travel industry officials we interviewed in six VWP countries praised DHS’s widespread ESTA outreach efforts, reasonable implementation time frames, and responsiveness to feedback but expressed dissatisfaction over ESTA fees. Also, although carriers complied with the ESTA requirement to verify ESTA approval for almost 98 percent of VWP passengers before boarding them in 2010, DHS does not have a target completion date for a review to identify potential security risks associated with the small percentage of cases of traveler and carrier noncompliance with the ESTA requirement. Pursuant to the 9/11 Act, DHS implemented ESTA, an automated, Web-based system, to assist in assessing passengers’ eligibility to travel to the United States under the VWP by air or sea before they board a U.S.-bound carrier. DHS announced ESTA as a new requirement for travelers entering the United States under the VWP on June 9, 2008, and began accepting ESTA applications on a voluntary basis in August 2008. Beginning January 12, 2009, DHS required all VWP travelers to apply for ESTA approval prior to travel to the United States. 
DHS began enforcing compliance with ESTA requirements in March 2010, exercising the right to fine a carrier or rescind its VWP signatory status for failure to comply with the ESTA requirement. Although passengers may apply for ESTA approval anytime before they board a plane or ship bound for the United States, DHS recommends that travelers apply when they begin preparing travel plans. Prior to ESTA’s implementation, all travelers from VWP countries manually completed a form—the I-94W—en route to the United States, supplying biographical information and answering questions to determine eligibility for the VWP. DHS officials collected the forms from VWP passengers at U.S. ports of entry and used the information on the forms to qualify or disqualify the passengers for entry into the United States without a visa. DHS uses ESTA to electronically collect VWP applicants’ biographical information and responses to eligibility questions. The ESTA application requires the same information collected through the I-94W forms. When an applicant submits an ESTA application, DHS systems evaluate the applicant’s biographical information and responses to VWP eligibility questions. (See table 1.) If the DHS evaluation results in a denial of the application, the applicant is directed to apply for a U.S. visa. For all other applications, if this review process locates no information requiring further analysis, DHS notifies the applicant that the application is approved; if the process locates such information, DHS notifies the applicant that the application is pending, and DHS performs a manual check on the information. For example, if an applicant reports that a previous U.S. visa application was denied, DHS deems the ESTA application pending and performs additional review. 
If on further review of any pending application DHS determines that information disqualifies the applicant from VWP travel, the application is denied, and the individual is directed to apply for a visa; otherwise the applicant is approved. Figure 2 illustrates the ESTA application review process. (See app. II for information on how to apply for ESTA.) According to DHS data, the number of individuals submitting ESTA applications increased from about 180,000 per month in 2008, when applying was voluntary, to more than 1.15 million per month in 2009 and 2010 after DHS made ESTA mandatory. DHS approved over 99 percent of the almost 28.6 million ESTA applications submitted from August 2008 through December 2010, but it also denied the applications of thousands of individuals it deemed ineligible to travel to the United States under the VWP. The denial rate has decreased slightly from 0.42 percent in 2008 to 0.24 percent in 2010. (See fig. 3.) DHS data show that DHS denied 77,132 of the almost 28.6 million applications for VWP travel submitted through ESTA from 2008 through 2010. Reasons for denials included applicants’ responses to the eligibility questions, as well as DHS’s discovery of other information that disqualified applicants from travel under the VWP. Examples are as follows: DHS denied 19,871 applications because of applicant responses to the eligibility questions. DHS denied 36,744 pending applications because of the results of manual reviews of passenger data. DHS denied 15,078 applications because the applicants had unresolved cases of a lost or stolen passport that DHS decided warranted an in-person visa interview with a State consular officer. In addition, ESTA applications are regularly reevaluated as new information becomes available to DHS, potentially changing applicants’ ESTA status. In developing and implementing ESTA, DHS has taken steps to minimize the burden associated with ESTA’s use. Less frequent applications. 
ESTA approval for program participants generally remains valid for 2 years. Prior to ESTA implementation, passengers traveling under the program were required to complete the I-94W form to determine their program eligibility each time they boarded a carrier to the United States. When DHS implemented ESTA, the burden on passengers increased because DHS also required ESTA applicants to complete an I-94W form. However, on June 29, 2010, DHS eliminated the I-94W requirement for most air and sea travelers who had been approved by ESTA. According to travel industry officials in the six VWP countries we visited, this change has simplified travel for many travelers, especially business travelers who travel several times each year. DHS officials said the change also eliminated the problems of deciphering sometimes illegible handwriting on the I-94W forms. Earlier notice of ineligibility. ESTA notifies passengers of program ineligibility, and therefore of the need to apply for a visa, before they embark for the United States. Prior to ESTA implementation, passengers from VWP countries did not learn until reaching the U.S. port of entry whether they were eligible to enter under the VWP or would be required to obtain a visa. Because DHS received passengers’ completed I-94W forms at the port of entry, DHS officials did not recommend that carriers prevent passengers from VWP countries from boarding a U.S.-bound carrier without a visa unless they were deemed ineligible based on other limited preboarding information provided by carriers. Widespread U.S. government outreach. VWP country government and travel industry officials praised widespread U.S. government efforts to provide information about the ESTA requirements. After announcing ESTA, DHS began an outreach campaign in VWP countries and for foreign government embassy staff in the United States, with the assistance of other U.S. agencies, to publicize the requirement. 
DHS officials said they spent $4.5 million on ESTA outreach efforts. Although none of the six embassies we visited tracked the costs associated with outreach, each embassy provided documentation of their use of many types of outreach efforts listed in table 2. VWP country government officials and travel industry officials we met said that although they were initially concerned that ESTA implementation would be difficult and negatively affect airlines and many VWP passengers, implementation went more smoothly than expected. Reasonable implementation time frames. Most of the VWP country airline officials with whom we met said that the ESTA implementation time frames set by DHS were reasonable. In 2008, DHS introduced ESTA and made compliance voluntary. The following year, DHS made ESTA mandatory but did not levy fines if airlines did not verify passengers’ ESTA approval before boarding them. This allowed the U.S. government more time to publicize the requirement, according to DHS officials. Enforcement began in March 2010. According to most of the officials we interviewed from 17 airlines in the six VWP countries we visited, the phased-in compliance generally allowed passengers sufficient time to learn about the ESTA requirement and allowed most airlines sufficient time to update their systems to meet the requirement. ESTA officials said that the phased-in compliance also provided time to fix problems with the system before enforcing airline and passenger compliance. DHS responsiveness to travel industry feedback. VWP travel industry officials said that DHS officials’ efforts to adapt ESTA in response to feedback have clarified the application process. Since initial implementation of ESTA in 2008, DHS has issued updates to the system on 21 occasions. According to DHS officials, many of these changes addressed parts of the application that were unclear to applicants. 
For example, DHS learned from some travel industry officials that many applicants did not know how to answer a question on the application about whether they had committed a crime of moral turpitude because they did not know the definition of “moral turpitude.” In September 2010, DHS released an updated ESTA application that included a definition of the term directly under the question. Further, updates have made the ESTA application available in 22 languages instead of only English. DHS also made it possible for denied applicants to reapply and be approved if they mistakenly answered “yes” to select eligibility questions. Although travel industry officials we met with in six VWP countries said there are still ways ESTA should be improved, they said that DHS’s responsiveness in amending the ESTA application had made the system more user friendly. Shorter reported passenger processing times. According to a study commissioned by DHS and conducted at three U.S. ports of entry, ESTA has reduced the average time DHS takes to process a VWP passenger before deciding whether to admit that passenger into the United States by between 17.8 and 54 percent. The study attributed this time savings to factors such as the reduction in the number of documents DHS officers needed to handle and evaluate and the reduction in data entry needed at the port of entry. Although DHS took steps to minimize the burden imposed by ESTA implementation, almost all government and travel industry officials we met in six VWP countries expressed dissatisfaction over the Travel Promotion Act of 2009 (TPA) fee collected as part of the ESTA application. In September 2010, the U.S. government began to charge ESTA applicants a $14 fee when they applied for ESTA approval, including $10 for the creation of a corporation to promote travel to the United States and $4 to fund ESTA operations. 
According to many of the VWP country government and travel industry officials with whom we met, the TPA fee is unfair because it burdens those traveling to the United States with an added fee to encourage others to travel to the United States. Some of the officials pointed out that it was unrelated to VWP travel and that it runs counter to the program objective of simplifying travel for VWP participants. DHS officials said that many government and travel industry officials from VWP countries view the fee as a step away from visa-free travel and consider ESTA with the fee “visa-lite.” By comparison, a nonimmigrant visitor visa costs over $100 but is generally valid for five times as long as ESTA approval. Several foreign officials said they expected that the fee amount would continue to rise over time. DHS officials stated that they cannot control the TPA portion of the ESTA fee because it was mandated by law. In addition, some airline officials expressed concern that the ESTA requirement was one of many requirements imposed by DHS that required the carriers to bear the cost of system updates. DHS officials said that the ESTA requirement did impose a new cost on carriers, but that it was necessary to strengthen the security of the VWP. According to DHS, air and sea carriers are required to verify that each passenger they board has ESTA approval before boarding them. Carriers’ compliance with the requirement has increased since DHS made ESTA mandatory and has exceeded 99 percent in recent months. DHS data show the following:

2008. In 2008, when VWP passenger and carrier compliance was voluntary, airlines and sea carriers verified ESTA approval for about 5.4 percent of passengers boarded under the VWP. According to DHS officials, carriers needed time to update their systems to receive passengers’ ESTA status, and DHS needed time to publicize the new travel requirement.

2009. ESTA became mandatory in January 2009, and carriers verified ESTA approval for about 88 percent of passengers boarded under the VWP that year.

2010. In March 2010, DHS began enforcing carrier compliance. In that year, carriers verified ESTA approval for almost 98 percent of VWP passengers.

As of January 2011, DHS had imposed fines on VWP carriers for 5 of the passengers who had been allowed to board without ESTA approval. Figure 4 shows the percentage of VWP passengers boarded by carriers who had verified the passengers’ ESTA approval. In addition, from September 2010 through January 2011, carrier compliance each month exceeded 99 percent. Although carriers verified ESTA approval for almost 98 percent of VWP passengers before boarding them in 2010, DHS has not fully analyzed the potential risks posed by cases where carriers boarded passengers for VWP travel without verifying that they had ESTA approval. In 2010, about 2 percent—364,086 VWP passengers—were boarded without verified ESTA approval. For most of these passengers—363,438, or about 99.8 percent—no ESTA application had been recorded. The remainder without ESTA approval—648, or about 0.2 percent—were passengers whose ESTA applications had been denied. DHS officials told us that, although there is no official agency plan for monitoring and oversight of ESTA, the ESTA office is undertaking a review of each case in which a carrier boarded a VWP traveler without an approved ESTA application; however, DHS has not established a target date for completing this review. In its review of these cases, DHS officials said they expect to determine why the carrier boarded the passengers, whether and why DHS admitted these individuals into the United States, and whether the airline or sea carrier should be fined for noncompliance.
DHS tracks some data on passengers who travel under the VWP without verified ESTA approval but does not track other data that would help officials know the extent to which noncompliance poses a risk to the program. For example, although DHS officials said that about 180 VWP travelers who arrive at a U.S. port of entry without ESTA approval are admitted to the United States each day, they have not tracked how many, if any, of those passengers had been denied by ESTA. DHS also reported that 6,486 VWP passengers were refused entry into the United States at the port of entry in 2010, but that number includes VWP passengers for whom carriers had verified ESTA approval. Officials did not track how many of those had been boarded without verified ESTA approval. DHS also did not know how many passengers without verified ESTA approval were boarded with DHS approval after system outages precluded timely verification of ESTA approval. Without a completed analysis of noncompliance with ESTA requirements, DHS is unable to determine the level of risk that noncompliance poses to VWP security and to identify improvements needed to minimize noncompliance. In addition, without analysis of data on travelers who were admitted to the United States without a visa after being denied by ESTA, DHS cannot determine the extent to which ESTA is accurately identifying individuals who should be denied travel under the program. Although DHS and partners at State and Justice have made progress in negotiating information-sharing agreements with VWP countries, as required by the 9/11 Act, only half of the countries have entered into all required agreements. In addition, many of the agreements entered into have not been implemented.
The 9/11 Act does not establish an explicit deadline for compliance, but DHS, with support from State and Justice, has produced a compliance schedule that requires agreements to be entered into by the end of each country’s current or next biennial review cycle, the last of which will be completed by June 2012. In coordination with State and Justice, DHS also outlined measures short of termination that may be applied to VWP countries not meeting their compliance date. The 9/11 Act specifies that each VWP country must enter into agreements with the United States to share information regarding whether citizens and nationals of that country traveling to the United States represent a threat to the security or welfare of the United States and to report lost or stolen passports. DHS, in consultation with other agencies, has determined that VWP countries can satisfy the requirement by entering into the following three bilateral agreements: Homeland Security Presidential Directive 6 (HSPD-6), Preventing and Combating Serious Crime (PCSC), and Lost and Stolen Passports (LASP). According to DHS officials, countries joining the VWP after the 9/11 Act entered into force are required to enter into HSPD-6 and PCSC agreements with the United States as a condition of admission into the program. In addition, prior to joining the VWP, such countries are required to enter into agreements containing specific arrangements for information sharing on lost and stolen passports. As illustrated in table 3 below, DHS, State, and Justice have made some progress with VWP countries in entering into the agreements. All VWP countries and the United States share some information with one another, but the existence of a formal agreement improves information sharing, according to DHS officials. As opposed to informal case-by-case information sharing, formal agreements expand the pool of information to which the United States has systematic access.
They can draw attention to and provide information on individuals of whom the United States would not otherwise be aware. According to officials, formal agreements generally expedite the sharing of information by laying out specific terms that can be easily referred to when requesting data. DHS officials observed that timely access to information is especially important for CBP officials at ports of entry. HSPD-6 agreements establish a procedure between the United States and partner countries to share watchlist information about known or suspected terrorists. As of January 2011, 19 of the 36 VWP countries had signed HSPD-6 agreements, and 13 had begun sharing information according to the signed agreements. (See table 3.) Justice’s Terrorist Screening Center (TSC) and State have the primary responsibility to negotiate and conclude these information-sharing agreements. An interagency working group, co-led by TSC and State and including representatives from U.S. law enforcement, intelligence, and policy communities, addresses issues with the exchange of information and coordinates efforts to enhance information exchange. While the agreements are based on a template that officials use as a starting point for negotiations, according to TSC officials, the terms of each HSPD-6 agreement are unique, prescribing levels of information sharing that reflect the laws, political will, and domestic policies of each partner country. TSC officials said most HSPD-6 agreements are legally nonbinding. Officials said that this allows more flexibility in information-sharing procedures and simplifies negotiations with officials from partner countries. The TSC officials noted that the nonbinding nature of the agreements may allow some VWP countries to avoid bureaucratic and political hurdles.
Noting that State and TSC continue to negotiate HSPD-6 agreements with VWP countries, officials cited concerns regarding privacy and data protection expressed by many VWP countries as reasons for the delayed progress. According to these officials, in some cases, domestic laws of VWP countries limit their ability to commit to sharing some information, thereby complicating and slowing the negotiation process. The terms of HSPD-6 agreements are also extremely sensitive, TSC officials noted, and therefore many HSPD-6 agreements are classified. Officials expressed concern that disclosure of the agreements themselves might either (1) cause countries that had already signed agreements to become less cooperative in sharing data on known or suspected terrorists and reduce the exchange of information or (2) cause countries in negotiation to become less willing to sign agreements or insist on terms prescribing less information sharing. The value and quality of information received through HSPD-6 agreements vary, and some partnerships are more useful than others, according to TSC officials. The officials stated that some partner countries were more willing than others to share data on known or suspected terrorists. For example, according to TSC officials, some countries do not share data on individuals suspected of terrorist activity but only on those already convicted. In other cases, TSC officials stated that some partner countries did not have the technical capacity to provide all information typically obtained through HSPD-6 agreements. For example, terrorist watchlist data include at least the name and date of birth of the suspect and may also include biometric information such as fingerprints or photographs. According to DHS officials, some member countries do not have the legal or technical ability to store such information. TSC has evidence that information is being shared as a result of HSPD-6 agreements. 
TSC provided the number of encounters with known or suspected terrorists generated through sharing watchlist information with foreign governments. TSC officials noted that they viewed these data as one measure of the relevance of the program, but not as comprehensive performance indicators. Although TSC records the number of encounters, HSPD-6 agreements do not contain terms requiring partner countries to reveal the results of these encounters, and there is no case management system to track and close them out, according to TSC officials. The PCSC agreements establish the framework for law enforcement cooperation by providing each party automated access to the other’s criminal databases that contain biographical, biometric, and criminal history data. (See table 3.) As of January 2011, 18 of the 36 VWP countries had met the PCSC information-sharing agreement requirement, but the networking modifications and system upgrades required to enable this information sharing to take place have not been completed for any VWP countries. The language of the PCSC agreements varies slightly because, according to agency officials, partner countries have different legal definitions of what constitutes a serious crime or felony, as well as varying demands regarding data protection provisions. Achieving greater progress in negotiating PCSC agreements has been difficult, according to DHS officials, because the agreements require lengthy and intensive face-to-face discussions with foreign governments. Justice and DHS, with assistance from State, negotiate the agreements with officials from partner countries that can include representatives from their law enforcement and justice ministries, as well as their diplomatic corps. Further, sharing sensitive personal information with the United States is publicly unpopular in many VWP countries, even if the countries’ law enforcement agencies have no reluctance to share information.
Officials in some VWP countries told us that efforts to overcome political barriers have caused further delays. Though officials expect to complete networking modifications necessary to allow queries of Spain’s and Germany’s criminal databases in 2011, the process is a legally and technically complex one that has not yet been completed for any of the VWP countries. According to officials, DHS is frequently not in a position to influence the speed of PCSC implementation for a number of reasons. For example, according to DHS officials, some VWP countries require parliamentary ratification before implementation can begin. Also, U.S. and partner country officials must develop a common information technology architecture to allow queries between databases. In a 2006 GAO report, we found that not all VWP countries were consistently reporting data on lost and stolen passports. We recommended that DHS develop clear standard operating procedures for such reporting, including a definition of timely reporting. As of January 2011, all VWP countries were sharing lost and stolen passport information with the United States, and 34 of the 36 VWP countries had entered into LASP agreements. (See table 3.) The 9/11 Act requires VWP countries to enter into an agreement with the United States to report, or make available to the United States through Interpol or other means as designated by the Secretary of Homeland Security, information about the theft or loss of passports. According to DHS officials, other international mandates have helped the United States to obtain LASP information. Since 2005, all European Union countries have been mandated to send data on lost and stolen passports to Interpol for its Stolen and Lost Travel Documents database. In addition, Australia and New Zealand have agreements to share lost and stolen passport information through the Regional Movement Alert System.
According to officials, in fiscal year 2004, more than 700 fraudulent passports from VWP countries were intercepted at U.S. ports of entry; however, by fiscal year 2010, this number had decreased to 64. DHS officials attributed the decrease in the use of fraudulent passports in part to better LASP reporting to Interpol. More complete data have allowed DHS to identify, before they begin travel, more individuals attempting VWP travel with a passport that has been reported lost or stolen. Although the 9/11 Act does not establish an explicit deadline, DHS, with the support of partners at State and Justice, has produced a compliance schedule that requires agreements to be entered into by the end of each country’s current or next biennial review cycle, the last of which will be completed by June 2012. In March 2010, State sent a cable to posts in all VWP countries that instructed the appropriate posts to communicate the particular compliance date to the government of each noncompliant VWP country. However, DHS officials expressed concern that some VWP countries may not have entered into all agreements by the specified compliance dates. According to DHS officials, termination from the VWP is one potential consequence for VWP countries that do not enter into information-sharing agreements. However, U.S. officials described termination as undesirable, saying that it would significantly impact diplomatic relations and would weaken any informal exchange of information. Further, termination would require all citizens from the country to obtain visas before traveling to the United States. According to officials, particularly in the larger VWP countries, this step would overwhelm consular offices and discourage travel to the United States, thereby damaging trade and tourism. U.S. embassy officials in France told us that when the United States required only a small portion of the French traveling population—those without machine-readable passports—to obtain visas, U.S.
embassy officials logged many overtime hours, while long lines of applicants extended into the embassy courtyard. DHS helped write a classified strategy document that outlines a contingency plan listing possible measures short of termination from the VWP that may be taken if a VWP country does not meet its specified compliance date for entering into information-sharing agreements. The strategy document provides steps that would need to be taken prior to selecting and implementing one of these measures. According to officials, DHS plans to decide which measures to apply on a case-by-case basis. DHS conducts reviews to determine whether issues of security, law enforcement, or immigration affect VWP country participation in the program; however, the agency has not completed half of the mandated biennial reports resulting from these reviews in a timely manner. In 2002, Congress mandated that, at least once every 2 years, DHS evaluate the effect of each country’s continued participation in the program on the security, law enforcement, and immigration interests of the United States. The mandate also directed DHS to determine based on the evaluation whether each VWP country’s designation should continue or be terminated and to submit a written report on that determination to select congressional committees. To fulfill this requirement, DHS conducts reviews of VWP countries that examine and document, among other things, counterterrorism and law enforcement capabilities, border control and immigration programs and policies, and security procedures. To document its findings, DHS composes a report on each VWP country reviewed and a brief summary of the report to submit to congressional committees. In conjunction with DHS’s reviews, the Director of National Intelligence (DNI) produces intelligence assessments that DHS reviews prior to finalizing its VWP country biennial reports. 
According to VWP officials, they visited 12 program countries in fiscal year 2009 and 10 countries in fiscal year 2010 to gather the data needed to complete these reports. As of February 2011, the Visa Waiver Program Office had completed 3 country visits and anticipated conducting 10 more for fiscal year 2011. If issues of concern are identified during the VWP country review process, DHS drafts an engagement strategy documenting the issues of concern and suggesting recommendations for addressing the issues. According to VWP officials, they also regularly monitor VWP country efforts to stay informed about any emerging issues that may affect the countries’ VWP status. In 2006, we found that DHS had not completed the required biennial reviews in a timely fashion, and we recommended that DHS establish protocols including deadlines for biennial report completion. DHS established protocols in 2007 that include timely completion of biennial reports as a goal. Our current review shows that DHS has not completed the latest biennial reports for 50 percent (18 of the 36) of VWP countries in a timely manner. Also, over half of those reports are more than 1 year overdue. In the case of two countries, DHS was unable to demonstrate that it had completed reports in over 4 years. Further, according to the evidence supplied by DHS, of the 17 reports completed since the beginning of 2009, over 25 percent were transmitted to Congress 3 or more months after report completion, and 2 of those after more than 6 months. DHS cited a number of reasons for the reporting delays, including a lack of resources needed to complete timely reports.
In addition, DHS officials said that they sometimes intentionally delayed report completion for two reasons: (1) because they frequently did not receive DNI intelligence assessments in a timely manner and needed to review these before completing VWP country biennial reports or (2) in order to incorporate anticipated developments in the status of information-sharing agreement negotiations with a VWP country. Further, DHS officials cited lengthy internal review as the primary reason for delays in submitting the formal summary reports to Congress. Without timely reports, it is not clear to Congress whether vulnerabilities exist that jeopardize continued participation in the VWP. The VWP facilitates travel for nationals from qualifying countries, removing the requirement that they apply in person at a U.S. embassy for a nonimmigrant visa for business or pleasure travel of 90 days or less. In an attempt to facilitate visa-free travel without sacrificing travel security, Congress has mandated security measures such as ESTA, information-sharing requirements, and VWP country biennial reviews. While ESTA has added a fee and a new pretravel requirement that place additional burdens on the VWP traveler, it has reduced the burden on VWP travelers in several other ways. DHS does not fully know the extent to which ESTA has mitigated VWP risks, however, because its review of cases of passengers being permitted to travel without verified ESTA approval is not yet complete. Although the percentage of VWP travelers without verified ESTA approval is very small, DHS oversight of noncompliant travelers may reduce the risk that an individual who poses a security risk to the United States could board a plane or ship traveling to the United States. Even if DHS has authority to deny individuals entry to the United States in such cases, ESTA was designed to screen such individuals before they embark on travel to the United States.
Moreover, with only half of the countries participating in the VWP in full compliance with the requirement to enter into information-sharing agreements with the United States, DHS may not have sufficient information to deny participation in the VWP to individuals who pose a security risk to the United States. In addition, the congressional mandate requiring VWP country biennial reports provides Congress with important information not only on security measures in place in VWP countries but also on potential vulnerabilities that could affect the countries’ future participation in the program. Because DHS has not consistently submitted the reports in a timely manner since the legal requirement was imposed in 2002, Congress does not have the assurance that DHS efforts to require program countries to minimize vulnerabilities and its recommendations for continued status in the VWP are based on up-to-date assessments. To ensure that DHS can identify and mitigate potential security risks associated with the VWP, we recommend that the Secretary of Homeland Security take the following two actions: establish time frames for the regular review and documentation of cases of VWP passengers traveling to a U.S. port of entry without verified ESTA approval, and take steps to address delays in the biennial country review process so that the mandated country reports can be completed on time. DHS provided written comments on a draft of this report. These comments are reprinted in appendix III. DHS, State, and Justice provided technical comments that we have incorporated into this report, as appropriate. In commenting on the draft, DHS stated that it concurred with GAO’s recommendations and expects to be able to implement them. DHS provided additional information on its efforts to ensure that VWP countries remain compliant with program requirements and to monitor and assess issues that may pose a risk to U.S. interests.
DHS also provided information on actions it is taking to resolve the issues identified in the audit. For example, DHS stated it will have established procedures by the end of May 2011 to perform quarterly reviews of a representative sample of VWP passengers who do not comply with the ESTA requirement. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Secretary of State, the Attorney General, and other interested parties. The report also will be available on the GAO Web site at no charge at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-4268 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix IV. To assess the implementation of the Electronic System for Travel Authorization (ESTA), we reviewed relevant documentation, including 2006 and 2008 GAO reports evaluating the Visa Waiver Program (VWP) and statistics on program applicants and travelers. Between June and September 2010, we interviewed consular, public diplomacy, and law enforcement officials at U.S. embassies in six VWP countries: France, Ireland, Japan, South Korea, Spain, and the United Kingdom. We also interviewed political and commercial officers at embassies in five of these countries. While the results of our site visits are not generalizable, they provided perspectives on VWP and ESTA implementation. We met with travel industry officials, including airline representatives, and foreign government officials in the six countries we visited to discuss ESTA implementation. 
We selected the countries we visited so that we could interview officials from VWP countries in diverse geographic regions that varied in terms of information-sharing signature status, number of travelers to the United States, and the existence in-country of potential program security risks. We met with officials from the Department of Homeland Security (DHS) in Washington, D.C. We used data provided by DHS from the ESTA database to assess the usage of the program and airline compliance with the ESTA requirements and determined that the data were sufficiently reliable for our purposes. To evaluate the status of information sharing, we analyzed data regarding which countries had signed the agreements and interviewed DHS, Department of State (State), and Department of Justice (Justice) officials in Washington, D.C., and International Criminal Police Organization (Interpol) officials in Lyon, France. We reviewed the Implementing Recommendations of the 9/11 Commission Act of 2007, which contained the information-sharing requirement. We received and reviewed copies of many Preventing and Combating Serious Crime and Lost and Stolen Passport agreements. While conducting our fieldwork, we confirmed the status of the agreements in each of the countries we visited. We determined that the data on the status of information sharing were sufficiently reliable for our purposes. However, we were unable to view the signed Homeland Security Presidential Directive 6 agreements, because Justice’s Terrorist Screening Center declined to provide us requested access to the agreements. We also met with foreign government officials from agencies involved with VWP information-sharing agreement negotiations in the six countries we visited to discuss their views regarding VWP information-sharing negotiations with U.S. officials. In addition, with Interpol officials in France, we discussed the status of the sharing of information on lost and stolen passports.
Interpol officials were unable to provide country-specific statistics regarding sharing of lost and stolen passport information due to Interpol’s data privacy policy. To assess DHS efforts to complete timely biennial reviews of each VWP country, we reviewed DHS documents, as well as the links to completed reviews on the DHS intranet Web site, to determine whether the reviews were completed in a timely manner. We also reviewed a 2006 GAO report that recommended improvements to the timeliness of DHS’s biennial reporting process. We conducted this performance audit from January 2010 to May 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The official ESTA application can be completed online at https://esta.cbp.dhs.gov/esta/. (See fig. 5.) DHS officials told us they actively publicize the official Web site, because many unofficial Web sites exist that charge an additional fee to fill out an application for an individual. They said the unofficial Web sites are not fraudulent if they do not use the official DHS or ESTA logos and provide the service they promise. In addition to the individual named above, Anthony Moran, Assistant Director; Jeffrey Baldwin-Bott; Mattias Fenton; Reid Lowe; and John F. Miller made key contributions to this report. Martin DeAlteriis, Joyce Evans, Etana Finkler, Richard Hung, Mary Moutsos, Jena Sinkfield, and Cynthia S. Taylor also provided technical assistance.
The Visa Waiver Program (VWP) allows eligible nationals from 36 member countries to travel to the United States for tourism or business for 90 days or less without a visa. In 2007, Congress required the Secretary of Homeland Security, in consultation with the Secretary of State, to implement an automated electronic travel authorization system to determine, prior to travel, applicants' eligibility to travel to the United States under the VWP. Congress also required all VWP member countries to enter into an agreement with the United States to share information on whether citizens and nationals of that country traveling to the United States represent a security threat. In 2002, Congress mandated that the Department of Homeland Security (DHS) review, at least every 2 years, the security risks posed by each VWP country's participation in the program. In this report, GAO evaluates (1) DHS's implementation of an electronic system for travel authorization; (2) U.S. agencies' progress in negotiating information-sharing agreements; and (3) DHS's timeliness in issuing biennial reports. GAO reviewed relevant documents and interviewed U.S., foreign government, and travel industry officials in six VWP countries. DHS has implemented the Electronic System for Travel Authorization (ESTA) and has taken steps to minimize the burden associated with the new program requirement. However, DHS has not fully evaluated security risks related to the small percentage of VWP travelers without verified ESTA approval. DHS requires applicants for VWP travel to submit biographical information and answers to eligibility questions through ESTA prior to travel. Travelers whose ESTA applications are denied can apply for a U.S. visa. In developing and implementing ESTA, DHS has made efforts to minimize the burden imposed by the new requirement. For example, although travelers formerly filled out a VWP application form for each journey to the United States, ESTA approval is generally valid for 2 years.
Most travel industry officials GAO interviewed in six VWP countries praised DHS's widespread ESTA outreach efforts, reasonable implementation time frames, and responsiveness to feedback, but expressed dissatisfaction with the costs associated with ESTA. In 2010, airlines complied with the requirement to verify ESTA approval for almost 98 percent of VWP passengers prior to boarding, but the remaining 2 percent (about 364,000 travelers) traveled under the VWP without verified ESTA approval. DHS has not yet completed a review of these cases to know to what extent they pose a risk to the program. To meet the legislative requirement, DHS requires that VWP countries enter into three information-sharing agreements with the United States; however, only half of the countries have fully complied with this requirement and many of the signed agreements have not been implemented. Half of the countries have entered into agreements to share watchlist information about known or suspected terrorists and to provide access to biographical, biometric, and criminal history data. By contrast, almost all of the 36 VWP countries have entered into an agreement to report lost and stolen passports. DHS, with the support of interagency partners, has established a compliance schedule requiring the last of the VWP countries to finalize these agreements by June 2012. Although termination from the VWP is one potential consequence for countries not complying with the information-sharing agreement requirement, U.S. officials have described it as undesirable. DHS, in coordination with State and Justice, has developed measures short of termination that could be applied to countries not meeting their compliance date. DHS has not completed half of the most recent biennial reports on VWP countries' security risks in a timely manner. According to officials, DHS assesses, among other things, counterterrorism capabilities and immigration programs.
However, DHS has not completed the latest biennial reports for 18 of the 36 VWP countries in a timely manner, and over half of these reports are more than 1 year overdue. Further, in the case of two countries, DHS was unable to demonstrate that it had completed reports in the last 4 years. DHS cited a number of reasons for the reporting delays. For example, DHS officials said that they intentionally delayed report completion because they frequently did not receive mandated intelligence assessments in a timely manner and needed to review these before completing VWP country biennial reports. GAO recommends that DHS establish time frames for the regular review of cases of ESTA noncompliance and take steps to address delays in the biennial review process. DHS concurred with the report's recommendations.
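The verification figures above imply a rough total volume of VWP passengers; a back-of-envelope sketch (both report figures are rounded, so the result is only indicative, not a number stated in the report):

```python
# Rough consistency check of the ESTA verification figures reported for 2010.
# Both inputs are rounded in the report, so the result is only approximate.
unverified_travelers = 364_000   # travelers without verified ESTA approval
unverified_share = 0.02          # "about 2 percent" of VWP passengers

# Implied total VWP passengers: 364,000 / 0.02, about 18.2 million.
implied_total = unverified_travelers / unverified_share
print(f"Implied VWP passengers in 2010: {implied_total:,.0f}")

# Complement of the unverified share, matching the "almost 98 percent" figure.
verified_share = 1 - unverified_share
print(f"Share verified before boarding: {verified_share:.0%}")
```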
Lyme disease was identified as a separate disease in 1977 because of a cluster of cases in children in Lyme, Connecticut, who were first thought to have juvenile rheumatoid arthritis. It was not until 1982, with the discovery of the causative bacterium, Borrelia burgdorferi, that Lyme disease could be defined by nonclinical observations. Carriers of Borrelia burgdorferi include the deer tick in the upper Midwest and Northeast and the western black-legged tick on the Pacific Coast, two areas where Lyme disease is considered to be endemic. Lyme disease symptoms generally appear 7 to 14 days after transmission, but this period may range from 3 to 30 days. Manifestations include musculoskeletal, nervous system, or cardiovascular irregularities that are not attributable to any other cause. However, some individuals may have no recognized illness or manifest only nonspecific symptoms, such as fever, headache, fatigue, and muscle pain. For more details on the definition of Lyme disease and on its diagnosis, prevalence, treatment, and prevention, see appendix I. Federal Lyme disease research programs are administered by two agencies within HHS. CDC funds laboratory and field research, surveillance, and education. CDC's Lyme disease program, an effort of the National Center for Infectious Diseases, is housed at Fort Collins, Colorado, in the Division of Vector-Borne Infectious Diseases. NIH funds intra- and extramural basic and clinical research and promotes educational activities. NIH carries out its Lyme disease activities at several NIH institutes and centers, primarily NIAID, the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), the National Institute of Neurological Disorders and Stroke (NINDS), and the National Center for Research Resources (NCRR).
NIAID conducts clinical research related to Lyme disease at the Clinical Studies Unit on the NIH campus in Bethesda, Maryland, and laboratory research at the Rocky Mountain Laboratories in Hamilton, Montana. For more information on the roles of CDC and NIH, see appendixes II and III, respectively. The Lyme disease programs at both agencies have been reviewed by experts for their scientific merit. In 1994, CDC convened an ad hoc panel of outside experts to review its Lyme disease program. NIH’s Board of Scientific Counselors, an advisory panel composed of nonfederal experts, periodically reviews NIH’s intramural research programs, including those involving Lyme disease. This board reviewed NIH’s Lyme disease program in 1993 and 1998. Also at NIH, the Advisory Panel on the Clinical Studies of Chronic Lyme Disease, a panel composed of nonfederal researchers and patient advocates, provides guidance on Lyme disease-related clinical trials. It provided annual reviews beginning in 1996 and continuing through 1999. CDC and NIH have conducted a broad range of research and educational activities related to Lyme disease. CDC has instituted a surveillance system, helped to standardize diagnosis, and funded research on prevention and education, while initiating most recommendations made by expert review committees and related activities recommended in congressional appropriations committees’ reports. NIH has funded research on the basic nature of Lyme disease and on its diagnosis, treatment, and prevention, and initiated most related expert and congressional recommendations. CDC’s Division of Vector-Borne Infectious Diseases has conducted a broad range of Lyme disease activities consistent with its program plans. In 1990, it developed a Lyme disease surveillance case definition, approved by the Council of State and Territorial Epidemiologists for uniform national reporting of Lyme disease beginning in 1991. 
Using surveillance data, CDC conducted epidemiological and ecological studies of disease and tick distribution for many areas of the United States. CDC’s research focused on those areas in which Lyme disease is highly endemic, but some activities were conducted in areas in which Lyme disease may be emerging. For example, CDC conducted research in the south central United States to investigate the emergence of a disease similar to Lyme disease. In addition, CDC has developed a national map showing risk of infection based on geographic area. CDC’s laboratories have conducted basic research on diagnostic test development. In 1994, CDC, along with a group composed of representatives of academic research laboratories, state and federal public health agencies and organizations, and manufacturers of diagnostic tests, developed a two-step approach to testing for Lyme disease that was more accurate than individually performed diagnostic tests available at the time. The two-step approach was developed to detect new cases of Lyme disease. CDC, in collaboration with NIAID grantees and intramural scientists, is developing a single-step test that is intended to improve diagnostic accuracy. 
With regard to the prevention of Lyme disease, CDC has developed targeted ways of disseminating tick-killing pesticides, including feeding devices that apply the pesticides to deer and mice, which serve as tick hosts; has worked with community-based programs to educate high-risk communities on managing vegetation that can harbor ticks; initiated a cooperative research and development agreement with SmithKline Beecham Animal Health and SmithKline Beecham Biologics to identify and characterize proteins of potential value in the development of products for immunological protection against Lyme disease and for new and improved diagnosis; monitors the Vaccine Adverse Events Reporting System, along with the Food and Drug Administration (FDA), to continually evaluate the Lyme disease vaccine’s safety; and has developed written recommendations through its Advisory Committee on Immunization Practices for the administration of the Lyme disease vaccine. To educate the medical and patient communities, CDC has funded activities by professional groups and associations to develop diagnosis and treatment recommendations for both physicians and nurses; maintains an informational Web site for patients and health professionals and provides the public with educational materials; has provided training and funds to state and local health departments to improve surveillance and educational activities, including sponsoring conferences and workgroup meetings in 1993, 1994, 1998, and 1999 to update the Lyme disease community and help guide the future of CDC Lyme disease programs; and has disseminated the results of its Lyme disease research in hundreds of articles in peer-reviewed journals. CDC has been responsive to experts and Congress. It has initiated most Lyme disease-related activities recommended by expert reviewers. In 1994, three nonfederal reviewers evaluated CDC’s Lyme disease program and made 16 recommendations. For a list of these recommendations, see appendix IV. 
We found evidence that CDC initiated activities consistent with most of these recommendations. For one recommendation, concerning the expansion of physician education in the South, we found no evidence of implementation, although the agency did state that it conducted collaborative research with physicians in that part of the country. CDC initiated work on all Lyme disease activities recommended in House and Senate Appropriations Committees' reports. From fiscal years 1991 through 1998, Congress made 12 such recommendations. (See app. IV.) NIH has supported a broad range of research, promoted educational activities, and improved research capacity related to Lyme disease. Most Lyme disease activities were funded by NIAID, the lead institute for Lyme disease. This work has been consistent with NIAID's general goal for its Lyme disease program: to develop better means of diagnosing, treating, and preventing the disease. Much of the Lyme disease work performed at NIAID's Rocky Mountain Laboratories and about 20 percent of NIAID's Lyme disease grants to nonfederal researchers have been devoted to research on diagnostic methods. For example, NIAID has developed a diagnostic test that can differentiate between those infected with Lyme disease and those who previously would have tested positive because they had been immunized with the Lyme disease vaccine. In addition, because ticks may carry more than one disease, NIH has supported research on the co-infection of Lyme disease patients with Babesia and Ehrlichia. NIAMS has also supported research on diagnostics and has developed a DNA-based diagnostic test for Borrelia burgdorferi, the bacterium responsible for Lyme disease. NIH has also supported research on the treatment of Lyme disease, with an emphasis on chronic Lyme disease. In 1996, NIAID scientists initiated research to identify the clinical characteristics of both acute and chronic Lyme disease.
The same year, NIAID entered into a contract to determine the efficacy of antibiotic treatment of chronic Lyme disease. The treatment component of this study was terminated after a scheduled review at the end of fiscal year 2000 because of a finding of “no observed difference” in self-reported improvement between the treatment and the control groups. In 2000, NINDS initiated funding for a study of the neurological effects and treatment of chronic Lyme disease. With respect to prevention, NIH has funded research on the basic biology underlying the development of a vaccine for Lyme disease, later used by SmithKline Beecham to develop a Lyme disease vaccine; animal models for the development and testing of other potential Lyme disease vaccines; tick ecology and control; and the relationship between maternal Lyme disease and congenital abnormalities in newborns. NIH has produced educational materials and worked with other groups to sponsor conferences and workshops. For example, NIAID produced a 1996 fact sheet for physicians titled, “Tick-Borne Diseases: An Overview for Physicians,” and a 1998 pamphlet for patients titled, “Lyme Disease: The Facts, The Challenge”; maintains a Web site that provides information on diagnosis and NIAID activities related to Lyme disease; and has disseminated its Lyme disease research through over 100 scientific articles published by its researchers. NIH has responded to expert recommendations and to those of Congress. NIH implemented all recommendations related to the NIAID Lyme disease program made by the Board of Scientific Counselors, NIH’s external committee that reviews the intramural program. In 1996 and 1998, the reviewers evaluated the Lyme disease-directed efforts of Rocky Mountain Laboratories and the Clinical Studies Unit, and made six recommendations. For example, the reviewers recommended that the laboratories hire additional technical staff. 
NIH followed this recommendation and pursued activities consistent with all of the others. (For a list of the recommendations, see app. IV.) NIH has also initiated activities consistent with most of the recommendations of the Advisory Panel on the Clinical Studies of Chronic Lyme Disease. In 1996 and 1999, this panel of outside experts and advocates provided 15 recommendations regarding the NIAID intramural and extramural clinical studies on chronic Lyme disease. (See app. IV.) For example, the panel recommended using standardized neuropsychological tests in the intramural work. NIH implemented this and most of the other recommendations. A recommendation that it did not implement was to establish an unblinded oversight committee to review the placebo group of the clinical study conducted at the New England Medical Center. NIH did not believe that such an approach was warranted because procedures already in place were adequate to safeguard the welfare of all enrolled in the study. NIH pursued work on most Lyme disease activities recommended in House and Senate Appropriations Committees’ reports, including, for example, one concerning the avoidance of research duplication. (See app. IV for a list of these recommendations.) From fiscal years 1991 through 1998, Congress made 11 recommendations to NIH regarding Lyme disease. NIH pursued work on nine of those recommendations. NIH did not fully address the other two recommendations. One of these was a recommendation by the Senate Appropriations Committee to establish a pediatric Lyme disease program at NIAID’s Clinical Studies Unit at NIH’s Bethesda, Maryland, hospital. According to NIH, because of the invasive nature of the diagnostic tests required as part of the study planned at that facility, it was determined that including pediatric patients would not be appropriate. 
The second recommendation that NIH did not fully address, also by the Senate Appropriations Committee, was to consider funding a center in the Midwest or Southwest to conduct clinical trials of treatments that would otherwise not be tested. NIH officials told us that they convened a workshop intended to develop information for physicians on the diagnosis of, and therapy for, Lyme disease and issued a request-for-application for research on the diagnosis and treatment of Lyme disease. They also told us that, taken together, these efforts helped to build a base of knowledge and necessary critical mass so that, in the future, research centers might be a viable option. Funding related to Lyme disease has increased at both CDC and NIH from fiscal years 1991 through 2000. The CDC increase in allocations was about 7 percent during that period, from $6.9 million to $7.4 million in inflation-adjusted dollars. In spite of the slight increase for the entire period, the CDC allocations for Lyme disease declined prior to fiscal year 1998, when the allocation rose considerably, to $8.3 million in inflation-adjusted dollars. Since then, the allocations have again been declining. In contrast, the NIH increase in obligations has been steady and relatively large, at 99 percent. NIH obligations for Lyme disease have increased almost every year, from $13.1 million in fiscal year 1991 to $26.0 million in fiscal year 2000 in inflation-adjusted dollars. Total CDC allocations for Lyme disease programs increased during the period reviewed in spite of a downward trend in all years but one. Allocations, in inflation-adjusted dollars, decreased from $6.9 million in fiscal year 1991 to $5.8 million in fiscal year 1997, increased in fiscal year 1998 to $8.3 million, and have since declined to $7.4 million. (See fig. 1.) CDC's allocations for Lyme disease have grown much more slowly than CDC's budget authority for infectious diseases.
This budget authority rose from $48.5 million to $175.6 million in inflation-adjusted dollars over the same period. The increase in allocations for Lyme disease in 1998 coincided with an increase in the CDC infectious diseases budget authority. During the period reviewed, Lyme disease allocations increased by 7 percent, while CDC appropriations and infectious diseases budget authority increased by 77 percent and 262 percent, respectively. (See fig. 2.) The large increase in Lyme disease allocations in fiscal year 1998 was used to expand grant funding. From fiscal years 1991 through 1997, CDC reported that grants accounted for 58 percent of Lyme disease funding for the Division of Vector-Borne Infectious Diseases, while program operations accounted for 42 percent. However, after the fiscal year 1998 Lyme disease allocations increase, program operations funding remained relatively flat and grant funding increased from $2.8 million to $5.1 million in inflation-adjusted dollars. In fiscal years 1999 and 2000, grants have accounted for 69 percent of Division of Vector-Borne Infectious Diseases Lyme disease allocations. Over 80 percent of this grant funding has been used for cooperative agreements with universities and public health laboratories, with the remainder going to foundations and other kinds of organizations. The most commonly funded cooperative agreements have been related to research on the diagnosis and on the origination and development of the disease or have involved activities related to surveillance, diagnosis, prevention, and education. In 2000, the Division of Vector-Borne Infectious Diseases had 24 full-time employees working on Lyme disease activities. Fourteen of those employees devoted 100 percent of their time to Lyme disease activities, and 10 employees spent from 10 to 90 percent of their time on Lyme disease. 
Total NIH obligations for Lyme disease activities in inflation-adjusted dollars increased from $13.1 million in fiscal year 1991 to $26.0 million in fiscal year 2000. (See fig. 3.) NIH Lyme disease obligations rose at a faster rate than overall NIH appropriations; NIH Lyme disease obligations rose 99 percent, while total appropriations for NIH rose 70 percent over the same period. (See fig. 4.) The majority of Lyme disease activities are funded by NIAID, but several other institutes have also funded Lyme disease research. During the period reviewed, NIAID Lyme disease obligations also rose at a faster rate than overall appropriations for NIAID. NIAID obligations for Lyme disease increased from $8.4 million to $18.2 million, or 116 percent, while overall appropriations for NIAID increased from about $1.1 billion to $1.8 billion over the decade, or 55 percent, in inflation-adjusted dollars. A portion of the increase in Lyme disease funding is related to an increase in the funding of chronic Lyme disease research, which has risen, in inflation-adjusted dollars, from $124,000 in fiscal year 1991 to $3.5 million in fiscal year 2000. The majority of NIH Lyme disease obligations were used to fund extramural grants and contracts, which primarily support scientists within universities, medical schools, hospitals, and research institutions. From fiscal years 1991 through 2000, NIH spent 15.3 percent of its Lyme disease budget on intramural research activities and the rest on extramural activities. However, in fiscal years 1999 and 2000, NIH reported increases in intramural funding to 23.4 percent and 26.0 percent of the Lyme disease budget, respectively. 
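The percentage changes quoted in this section can be reproduced from the inflation-adjusted dollar figures; a minimal sketch (small mismatches with the report's stated percentages reflect rounding in the published amounts):

```python
def pct_change(start, end):
    """Percent change from start to end, rounded to the nearest whole percent."""
    return round((end - start) / start * 100)

# Inflation-adjusted amounts in millions of dollars, fiscal years 1991 to 2000,
# taken from the figures quoted above.
print(pct_change(6.9, 7.4))    # CDC Lyme disease allocations: 7 percent
print(pct_change(13.1, 26.0))  # NIH Lyme disease obligations: 98 (report says 99)
print(pct_change(8.4, 18.2))   # NIAID Lyme disease obligations: 117 (report says 116)
```

The one- to two-point differences on the NIH and NIAID lines suggest the report computed its percentages from unrounded underlying amounts.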
Most of these increases can be attributed to two new activities: a large intramural study on the diagnosis and treatment of human uveitis, an inflammatory eye condition that can be a complication of Lyme disease, at the National Eye Institute in fiscal year 1999, and an increase in NIAID intramural research on Lyme disease in fiscal years 1999 and 2000. NIAID, NIAMS, and NCRR are the three NIH components to have funded Lyme disease-related activities every year from fiscal years 1991 through 2000. During the past 10 years, NIAID has provided an average of 69 percent of the total NIH Lyme disease obligations; however, total NIAID Lyme disease obligations increased in relation to the Lyme disease obligations of other institutes and centers. As NIAID Lyme disease obligations increased, NIAMS and NCRR Lyme disease obligations remained at around 20 percent and 5 percent of NIH Lyme disease obligations, respectively, and other institutes began funding small numbers of grants (fewer than five per year) partially related to Lyme disease. Out of its overall obligations for Lyme disease, NIH increased obligations for grants related to chronic Lyme disease and post-Lyme disease syndrome during the period reviewed. NIAMS awarded grants on chronic Lyme disease throughout the period reviewed. In fiscal year 1994, NIAID reported chronic Lyme disease grants totaling $745,692, increasing to $3.4 million in inflation-adjusted dollars in fiscal year 1999. The majority of this increase can be attributed to the NIAID clinical trials on chronic Lyme disease that started in fiscal year 1996. NIAID chronic Lyme disease obligations decreased to $1.5 million in inflation-adjusted dollars in fiscal year 2000. However, NINDS initiated a $1.2 million clinical trial on chronic Lyme disease in fiscal year 2000. The number of NIH staff working on Lyme disease grew during the period observed. The majority of NIH staff working on Lyme disease are in NIAID.
NIAID's Rocky Mountain Laboratories funds three Lyme disease researchers, and NIAID's Clinical Studies Unit has one clinical researcher, plus staff. Both facilities have added staff during the period observed. In addition, NIAID has funded a Program Officer for Lyme disease since 1993, to stimulate and oversee grants related to Lyme disease. NINDS, NIAMS, and the National Eye Institute report that they have one or two researchers who spend less than 5 percent of their time on Lyme disease. These researchers are in addition to the extramural researchers working on Lyme disease with NIH funding. We provided a draft of this report to the Department of Health and Human Services for review. The department stated that it had no general comments. HHS' response is reprinted in appendix V. HHS also provided technical comments, which we incorporated where appropriate. We will send copies of this report to the Secretary of Health and Human Services, the Director of the Centers for Disease Control and Prevention, the Acting Director of the National Institutes of Health, and others who are interested. If you have any questions or would like additional information, please call me at (202) 512-7119. Marcia Crosse, Donald Keller, William Hadley, and Roseanne Price made major contributions to this report. The initial stage of Lyme disease is usually marked by one or more of the following: fatigue, chills and fever, headache, muscle and joint pain, swollen lymph nodes, and a characteristic skin rash called erythema migrans. Late manifestations, possibly occurring weeks, months, or years after infection, include arthritis, nervous system abnormalities, and heart rhythm irregularities, but for some patients arthritis or nervous system abnormalities are the first and only signs. The infection is triggered by the bite of certain kinds of ticks.
Ticks become infected with the bacterium Borrelia burgdorferi while feeding on an infected animal, particularly the white-footed mouse in the Northeast. It is estimated that by adulthood from 10 to 50 percent of ticks in endemic areas carry the disease. Ticks are most likely to transmit Borrelia burgdorferi while they are in the nymphal stage. Nymphs feed during the spring and summer months, when people are most active, and the nymphs' small size typically allows them to go unnoticed and gives them ample time to feed and transmit the bacterium, a process that may take 2 or more days. According to the Centers for Disease Control and Prevention's (CDC) surveillance case definition, a person must meet either of two criteria to be considered a "confirmed case" of Lyme disease. One criterion is to have the characteristic rash. The second is to have (1) at least one late manifestation of Lyme disease from a list of signs and (2) laboratory confirmation of infection by Borrelia burgdorferi, using recommended tests. It is not always easy to diagnose Lyme disease. The only definite confirmation of Lyme disease is the identification of Borrelia burgdorferi from a cultured sample. Although specimens can be biopsied, the procedure is invasive and requires a specific growth medium and observation period, making it impractical for most clinicians. In part because of these disadvantages, the CDC-organized Second National Conference on the Serologic Diagnosis of Lyme Disease, in 1995, recommended a two-step approach for the laboratory confirmation of Lyme disease. It consists of a sensitive Enzyme-Linked Immunosorbent Assay or indirect fluorescent-antibody test followed by a more specific Western Blot test. These tests measure the body's immune response, but they do not directly detect the bacterium. As a result, vaccination, antibiotic use, co-infection, residual antibodies, and duration since the tick bite all can affect the accuracy of the tests.
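The two-step testing sequence described above can be sketched as simple decision logic; this is an illustrative toy, not a clinical tool, and the function and argument names are invented:

```python
def two_step_serology(first_tier_positive, western_blot_positive):
    """Sketch of the two-step approach: a sensitive first-tier test (ELISA or
    indirect fluorescent-antibody) is followed by the more specific Western
    blot only when the first tier is positive."""
    if not first_tier_positive:
        return False  # negative first tier: the second step is not performed
    return western_blot_positive  # confirmation requires both steps positive

print(two_step_serology(True, True))    # True: both steps positive
print(two_step_serology(True, False))   # False: Western blot does not confirm
print(two_step_serology(False, True))   # False: first tier negative, stop
```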
Even among those patients with a history of tick bite, the characteristic rash, and other characteristic symptoms, only about 30 percent are positive in the first weeks of infection using the CDC-recommended two-step approach. For that reason, CDC recommends that diagnostic tests be used as a confirmation only when Lyme disease is already suspected. Reported cases of Lyme disease currently account for more than 95 percent of all reported vector-borne illness in the United States. Public health departments, clinicians, and laboratories have reported over 122,651 cases since 1990. Significant risk of infection with Borrelia burgdorferi is found in only a select number of states. (See table 1.) Estimates of prevalence may be inaccurate for two reasons. First, although it is required, physicians may not report all cases of Lyme disease to CDC. Second, patients with abnormal Lyme disease symptoms may not be diagnosed as having Lyme disease. As a result, current diagnosis and reporting practices may account for only 36 percent of the actual cases. However, some research has shown that Lyme disease may be overdiagnosed in highly endemic areas. In the guidelines of the Infectious Diseases Society of America, treatment of Lyme disease ranges from 14 to 21 days of oral antibiotics for early disease without complications to a 2- to 4-week course of intravenous antibiotics, repeated once if necessary, for late-stage Lyme disease with particular manifestations. An untreated or inadequately treated infection can progress to late-stage complications. There are several different methods to protect against Lyme disease. CDC recommends that people active in endemic areas limit their exposure to tick-infested areas, spray their clothing with insect repellents, tuck in clothing, and make frequent skin checks. In addition, community prevention projects have addressed Lyme disease through reducing tick habitats and developing environmentally friendly methods of pesticide application.
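The 36 percent reporting estimate above implies a substantially larger actual caseload; a rough sketch using the cumulative reported count (both inputs are estimates, so the result is only indicative):

```python
reported_cases = 122_651   # cumulative cases reported since 1990
reporting_rate = 0.36      # estimated share of actual cases that are reported

# Implied actual caseload: reported / rate, roughly 340,000 cases.
implied_actual = reported_cases / reporting_rate
print(f"Implied actual cases since 1990: {implied_actual:,.0f}")
```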
CDC does not recommend antibiotic treatment for a tick bite to prevent infection if there are no accompanying symptoms. In December 1998, the Food and Drug Administration (FDA) approved an application to license a vaccine for Lyme disease. The vaccine requires a series of three injections to achieve the maximum preventive effect. Results from a clinical trial conducted by the manufacturer suggest that the vaccine is 50 percent effective after two doses and 78 percent effective after three doses. FDA has approved the vaccine for patients between 15 and 70 years of age, and clinical trials for children younger than 15 have begun. CDC’s Advisory Committee on Immunization Practices recommends that only individuals who are at risk in endemic areas be vaccinated. In addition, it advises physicians not to administer the vaccine to persons with a history of treatment-resistant Lyme arthritis. The duration of immunity conferred by the vaccine is not known at this time, nor are safety and efficacy beyond the manufacturer’s 20-month clinical trial. The vaccine’s manufacturer has begun postmarketing trials to answer those questions, and other pharmaceutical companies are developing second-generation Lyme disease vaccines. The Centers for Disease Control and Prevention’s (CDC) Lyme disease program, an effort of the National Center for Infectious Diseases, conducts surveillance, diagnostic research, and education. The program is housed at Fort Collins, Colorado, in the Division of Vector-Borne Infectious Diseases’ Bacterial Zoonoses Branch. The branch has four sections responsible for Lyme disease activities: Epidemiology, Molecular Biology, Diagnostic Reference Laboratory, and Lyme Disease Vector. CDC provides Lyme disease funding to state and local health departments, universities, and nonprofit foundations. 
CDC has conducted Lyme disease activities with state public health departments, academic medical centers, advocacy groups, the Food and Drug Administration (FDA), the Department of Agriculture, the National Park Service, the National Aeronautics and Space Administration, and the Council of State and Territorial Epidemiologists. In addition, CDC has entered into cooperative research and development agreements with pharmaceutical and diagnostic test manufacturers. Specifically, the mission of CDC's Lyme disease program is to develop and maintain national surveillance for Lyme disease; perform epidemiological studies and provide epidemiological assistance to local and state health departments and to national and local agencies; conduct laboratory and field research for improving diagnosis, understanding the origin and development of the disease, and developing strategies to prevent and control Lyme disease and other related tick-borne diseases; provide consultation, education, and training for the public and health professionals; and serve as a national and international Lyme disease reference and research center. In addition to the Lyme disease program, CDC maintains two other activities that relate to Lyme disease, the Vaccine Adverse Events Reporting System and the Advisory Committee on Immunization Practices. CDC and the FDA developed and implemented the Vaccine Adverse Events Reporting System in 1988 to track adverse events associated with vaccines. Patients, practitioners, and manufacturers are encouraged to report clinically significant adverse events that may be associated with vaccinations. An independent contractor, funded by CDC, is responsible for distributing and collecting forms for reporting adverse events and maintaining the database. CDC and FDA monitor the data to detect patterns in the type and severity of adverse events associated with vaccines. This information enables CDC to direct financial and technical assistance to public sector vaccine programs as needed.
The Advisory Committee on Immunization Practices, a committee of external experts, provides advice and guidance about immunization to the Secretary of Health and Human Services, the Assistant Secretary for Health, and the Director of CDC. The committee develops written recommendations, subject to the approval of the Director of CDC, for the routine administration of new vaccines to pediatric and adult populations, along with schedules regarding the appropriate periodicity, dosage, and contraindications applicable to the vaccines. The committee also reviews and reports regularly on existing immunization practices and recommends improvements in national immunization efforts. The National Institutes of Health’s (NIH) Lyme disease program seeks to better understand the etiology of the disease and to develop better means of diagnosing, treating, and preventing it. NIH institutes and centers with funding related to Lyme disease include the National Institute of Allergy and Infectious Diseases (NIAID), National Institute of Arthritis and Musculoskeletal and Skin Diseases, National Institute of Neurological Disorders and Stroke, National Eye Institute, National Institute of Child Health and Human Development, Fogarty International Center, National Institute on Aging, National Institute of Mental Health, and National Center for Research Resources. NIH designated NIAID as the lead institute for Lyme disease research in 1992. The NIH Lyme Disease Coordinating Committee, which has met annually since 1992, was created to facilitate the coordination of NIH’s varied Lyme disease-related efforts. NIAID conducts clinical research related to Lyme disease at the Clinical Studies Unit on the NIH campus in Bethesda, Maryland, and laboratory research at the Rocky Mountain Laboratories in Hamilton, Montana. The Board of Scientific Counselors, an advisory panel composed of nonfederal experts, periodically reviews NIH’s intramural research programs. 
In addition, NIH provides funding for Lyme disease through extramural grants and contracts to universities, medical schools, and research laboratories. The National Advisory Allergy and Infectious Diseases Council oversees NIAID’s extramural program. The council performs grant review, provides policy advice to NIAID, reviews NIAID programs, and develops program announcements and recommendations for proposals. The Advisory Panel on the Clinical Studies of Chronic Lyme Disease, a panel composed of nonfederal researchers and advocates involved with issues related to Lyme disease, provides guidance throughout each intramural and extramural clinical trial. The Clinical Studies Unit began a clinical trial in 1996 to better understand the natural history of chronic Lyme disease and possible causes for persisting symptoms. The trial seeks a comprehensive clinical, microbiological, and immunological assessment of patients who have suspected chronic Lyme disease despite previous therapy with intravenous antibiotics. The investigators are enrolling patients and a variety of control groups in an effort to better understand the origin and development of chronic symptoms and to study further immunologic aspects of Lyme disease. Research at NIAID’s Rocky Mountain Laboratories is focused on the molecular changes that Borrelia burgdorferi undergoes as it is transmitted from the tick. One laboratory seeks to understand variations in the proteins and genes of the organism. A second laboratory seeks to understand the roles of specific genes and develop the basic genetic tools necessary to manipulate Borrelia burgdorferi at the genetic level. A third laboratory seeks to understand the changes and adaptations of the bacterium as it is transmitted during tick feeding. NIAID has also funded clinical trials on the treatment of chronic and late-stage Lyme disease at the State University of New York at Stony Brook and the New England Medical Center.
The study at Stony Brook examines how well antibiotics work in reducing fatigue symptoms in patients with post-Lyme disease syndrome. For this study, the data have been collected and are being analyzed. The New England Medical Center study examined the safety and efficacy of two antibiotics for the treatment of patients with suspected chronic Lyme disease who may or may not test positive for Lyme disease on diagnostic tests. In November 2000, a Data Safety and Monitoring Board, an independent monitoring group of doctors and researchers, unanimously recommended that NIAID terminate the treatment component of the study because a planned interim analysis showed no statistically significant difference between the placebo and treatment groups; NIAID agreed. NIAID has extended the contract for 1.5 years, with additional funding, so that the investigators can continue to follow the study’s patients to monitor their health status and to obtain additional information that, in the future, could help to determine the underlying cause of chronic Lyme disease. The following tables provide expert recommendations and congressional appropriations committees’ recommendations made to the Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH) Lyme disease programs.
The Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH) have conducted an increasingly broad range of research and educational activities related to Lyme disease. CDC has instituted a system for the surveillance of Lyme disease, helped to standardize diagnostic testing, conducted and funded basic research on Lyme disease and on its prevention, and developed patient and practitioner educational materials. CDC has initiated most activities recommended by external reviewers and congressional appropriations committees regarding changes to its programs. NIH has conducted and funded basic research on Lyme disease and on its etiology, diagnosis, treatment, and prevention. In addition, NIH research is addressing two topics of particular interest to patient advocates--chronic Lyme disease and the occurrence of other tick-borne infections in Lyme disease patients. NIH has also responded to most expert recommendations and congressional recommendations. During the last 10 years, allocations for Lyme disease have increased slightly at CDC, and obligations for Lyme disease have increased significantly at NIH. CDC allocations for Lyme disease research and education have increased seven percent, from $6.9 million to $7.4 million in inflation-adjusted dollars from fiscal years 1991 through 2000. In contrast, the NIH increase in obligations for Lyme disease has been steady and relatively large, at 99 percent.
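The percentage figures above follow directly from the inflation-adjusted amounts; a minimal sketch of the arithmetic, using only the CDC figures quoted in the text:

```python
def percent_change(start: float, end: float) -> float:
    """Percentage change from a starting value to an ending value."""
    return (end - start) / start * 100

# CDC Lyme disease allocations, fiscal years 1991 through 2000,
# in inflation-adjusted dollars (figures from the text)
change = percent_change(6.9, 7.4)
print(round(change))  # rounds to 7, matching the "seven percent" figure
```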
Congress created FDIC in 1933 to restore and maintain public confidence in the nation’s banking system. The Financial Institutions Reform, Recovery, and Enforcement Act of 1989 sought to reform, recapitalize, and consolidate the federal deposit insurance system. It created the Bank Insurance Fund and the Savings Association Insurance Fund, which are responsible for protecting insured bank and thrift depositors, respectively, from loss due to institutional failures. The act also created the FSLIC Resolution Fund to complete the affairs of the former FSLIC and liquidate the assets and liabilities transferred from the former Resolution Trust Corporation. It also designated FDIC as the administrator of these funds. As part of this function, FDIC has an examination and supervision program to monitor the safety of deposits held in member institutions. FDIC insures deposits in excess of $3.3 trillion for about 9,400 institutions. Together the three funds have about $49.5 billion in assets. FDIC had a budget of about $1.2 billion for calendar year 2002 to support its activities in managing the three funds. For that year, it processed more than 2.6 million financial transactions. FDIC relies extensively on computerized systems to support its financial operations and store the sensitive information it collects. Its local and wide area networks interconnect these systems. To support its financial management functions, it relies on several financial systems to process and track financial transactions that include premiums paid by its member institutions and disbursements made to support operations. In addition, FDIC uses other systems that maintain personnel information for its employees, examination data for financial institutions, and legal information on closed institutions. At the time of our review, about 7,000 individuals were authorized to use FDIC’s systems.
FDIC’s acting CIO is the corporation’s key official for computer security. The objectives of our review were to assess (1) the progress FDIC had made in correcting or mitigating weaknesses reported in our calendar year 2001 financial statement audit and (2) the effectiveness of information system general controls. These information system controls also affect the security and reliability of other sensitive data, including personnel, legal, and bank examination information maintained on the same computer systems as the corporation’s financial information. Our evaluation was based on (1) our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the integrity, confidentiality, and availability of computerized data; and (2) our May 1998 report on security management best practices at leading organizations, which identifies key elements of an effective information security program. Specifically, we evaluated information system controls intended to protect data and software from unauthorized access; prevent the introduction of unauthorized changes to application and system software; provide segregation of duties involving application programming, system programming, computer operations, information security, and quality assurance; ensure recovery of computer processing operations in case of disaster or other unexpected interruption; and ensure an adequate information security management program. To evaluate these controls, we identified and reviewed pertinent FDIC security policies and procedures, and conducted tests and observations of controls in operation. In addition, we reviewed corrective actions taken by FDIC to address vulnerabilities identified in our calendar year 2001 audit. We performed our review at FDIC’s headquarters in Washington, D.C.; its computer facility in Arlington, Virginia; and FDIC’s Dallas regional office, from October 2002 through March 2003.
Our review was performed in accordance with U.S. generally accepted government auditing standards. FDIC has made progress in correcting previously identified computer security weaknesses. Of the 41 weaknesses identified in our calendar year 2001 audit, FDIC has corrected 19 and is taking action intended to resolve the 22 that remain. FDIC has addressed key access control, application software, system software, and service continuity weaknesses previously identified. Specifically, FDIC limited access to certain critical programs, software, and data; reduced the number of users with physical access to computer facilities; enhanced its review procedures for system software changes; strengthened its procedures for reviewing changes to application software; expanded tests of its disaster recovery plan; and defined the roles and responsibilities of its information security officers. In addition to responding to previously identified weaknesses, FDIC established several other computer controls to enhance its information security. For example, it enhanced procedures to periodically review user access privileges to computer programs and data to ensure that access is granted only to those who need it to perform their jobs. Likewise, FDIC strengthened its physical security controls by establishing criteria for granting access to computer center operations, and developed procedures for periodically reviewing access to ensure that it remained appropriate. Further, FDIC enhanced its system software change control process by developing procedures requiring technical reviews of all system software modifications prior to their implementation. In addition, it established a process to periodically review application software to ensure that only authorized computer program changes were being made.
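A periodic review of this kind amounts to reconciling the changes actually present in production software against approved change records. The sketch below is illustrative only; the change identifiers and data structures are invented, not FDIC's:

```python
# Reconcile changes found in production application software against
# approved change records; anything unmatched is flagged for follow-up.
# All identifiers are invented for illustration.
deployed_changes = {"CHG-101", "CHG-102", "CHG-999"}
approved_changes = {"CHG-101", "CHG-102"}

unapproved = sorted(deployed_changes - approved_changes)
print(unapproved)  # ['CHG-999'] would warrant investigation
```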
FDIC also improved its disaster recovery capabilities by establishing an alternate backup site to support its computer network and related system platforms, and by conducting periodic unannounced walk-through tests of its disaster recovery plan. The following sections summarize the results of our review. Our “Limited Official Use Only” report details specific weaknesses in information systems controls that we identified, provides our recommendations for correcting each weakness, and indicates FDIC’s planned actions or those already taken for each weakness. An evaluation of the adequacy of this action plan will be part of our future work at FDIC. Although FDIC established many policies, procedures, and controls to protect its computing resources, the corporation did not always effectively implement them to ensure the confidentiality, integrity, and availability of financial and sensitive data processed by its computers and networks. In addition to the previously reported weaknesses that have not been fully addressed, 29 new information security weaknesses were identified during this review. The weaknesses identified included instances in which FDIC did not adequately restrict mainframe access, secure its network, or establish a complete program to monitor access activities. In addition, new weaknesses in other information system controls, including physical security, application software, and service continuity, further increase the risk to FDIC’s information systems. Collectively, they place the corporation’s systems at risk of unauthorized access, which could lead to unauthorized disclosure, disruption of critical operations, and loss of assets. A basic management control objective for any organization is to protect data supporting its critical operations from unauthorized access, which could lead to improper modification, disclosure, or deletion.
Organizations can protect this critical information by granting employees the authority to read or modify only those programs and data that they need to perform their duties and by periodically reviewing access granted to ensure that it is appropriate. Effective mainframe access controls should be designed to restrict access to computer programs and data, and prevent and detect unauthorized access. These controls include access rights and permissions, system software controls, and software library management. While FDIC restricted access to many users who previously had broad access to critical programs, software, and data, instances remained in which the access granted specific users was still not appropriate. A key weakness in FDIC’s controls was that it did not adequately limit user access, as described below. Nineteen users had access to production control software that would allow them to modify software outside the formal configuration control process. This risk was further heightened because FDIC was not maintaining audit logs of software changes. Without such logs, unauthorized software changes could be made to critical financial and sensitive systems, possibly without detection. This software was especially vulnerable because it could allow an unauthorized user to bypass security controls. Further, an excessive number of users had access to 14 of 19 production job control systems we reviewed, allowing them to obtain exact details of production programs and data, which could then be used to gather information to circumvent controls. An excessive number of users had access that allowed them to read user identifications (IDs) and passwords used to transfer data among FDIC production computer systems. With these IDs and passwords, the users could gain unauthorized access to financial and sensitive corporation information, possibly without detection. FDIC did not adequately restrict users from viewing sensitive information. 
For example, about 70 users had unrestricted read access to all information that the corporation printed from its mainframe computer. This included information on bank examinations, payroll and personnel data, legal reports, vendor payments, and security monitoring information. One reason for FDIC’s user access vulnerabilities was that the corporation, while making progress, still had not fully established a process for reviewing the appropriateness of individual access privileges. Specifically, FDIC’s process did not include a comprehensive method for identifying and reviewing all access granted to any one user. Such reviews would have allowed FDIC to identify and correct inappropriate access. In response, FDIC said that it has since taken steps to restrict access to sensitive resources. Further, the corporation stated that it has improved its audit logging of user access activities, enhanced its process for identifying and reviewing access granted, and further reduced access to the minimum necessary for users to perform their job functions. Network security controls are key to ensuring that only authorized individuals gain access to sensitive and critical agency data. Effective network security controls should be established to authenticate local and remote users. These controls include a variety of tools such as user passwords, intended to authenticate authorized users who access the network from local and remote locations. In addition, network controls provide safeguards to ensure that system software is adequately configured to prevent users from bypassing network access controls or causing network failures. Since our last audit, FDIC took major steps to secure its network through enhancements to its firewall and establishment of procedures to review contractor network connections; further, it recently implemented actions to review the effectiveness of network security controls. 
Nonetheless, weaknesses in the way the corporation configured its network servers, managed certain user IDs and passwords, and provided network services have not yet been corrected. One system was using a default vendor account with broad access that would allow the user to read, copy, modify, or delete sensitive network configuration files. Information on default vendor accounts is available in vendor-supplied manuals, which are readily available to hackers. With this ability, a malicious user or intruder could seriously disable or disrupt network operations by taking control of key segments of the network or by gaining unauthorized access to critical applications and data. A network service was not configured to restrict access to sensitive network resources. As a result, anyone—including contractors—with access to the FDIC network could obtain copies or modify configuration files containing control information such as access control lists and user passwords. With the ability to read, copy, or modify these files, an intruder could disable or disrupt network operations by taking control of sensitive and critical network resources. A key network server was not adequately configured to restrict access. As a result, anyone—again, including contractors—with connectivity to the FDIC network could copy or modify files containing sensitive network information. With this level of access, an unauthorized user could control key segments of the network. Further, FDIC did not adequately secure its network against known vulnerabilities or minimize the operational impact of a potential failure in a critical network device. Failure to address known vulnerabilities increases the risk of system compromise, such as unauthorized access to and manipulation of sensitive system data, disruption of services, and denial of service. In response to our findings, FDIC’s acting CIO said that the corporation had taken steps to improve network security. 
Specifically, he said that FDIC had removed the vendor default account, reconfigured network resources to restrict access, and installed software patches to secure against known vulnerabilities. A program to monitor access activities is essential to ensuring that unauthorized attempts to access critical programs and data are detected and investigated. Such a program would include routinely reviewing user access activity and investigating failed attempts to access sensitive data and resources, as well as unusual and suspicious patterns of successful access to sensitive data and resources. To effectively monitor user access, it is critical that logs of user activity be maintained for all critical processing activities. This includes collecting and monitoring activities on all critical systems, including mainframes, network servers, and routers. A comprehensive monitoring program should include an intrusion-detection system to automatically log unusual activity, provide necessary alerts, and terminate access. While FDIC has made progress in developing systems to identify unauthorized or suspicious access activities for both its mainframe and network systems, it still has not completed a program to fully monitor such activities. As a result, reports designed to provide security staff with information on network access activities, including information on unusual or suspicious access, were not available due to technical problems in producing them. Consequently, security staff and administrators did not have the information they needed to effectively monitor the network for unauthorized or inappropriate access. Further, FDIC was not monitoring the access of certain employees and contractors with access that allowed them to modify specific sensitive system software libraries that can perform functions that circumvent all security controls. 
While these users were granted these access privileges, FDIC did not maintain audit logs of access to ensure that only authorized modifications were made to these libraries. As a result, these users could make unauthorized modifications to financial data, programs, or system files, possibly without detection. According to the acting CIO, the corporation has taken action to improve its program to monitor access activities. This includes developing and implementing new reports for monitoring network access and initiating action to fully implement its intrusion-detection system. In addition to information system access controls, other important controls necessary to ensure the confidentiality, integrity, and availability of an organization’s system and data were ineffective at FDIC. These controls include policies, procedures, and techniques that physically secure data-processing facilities and resources, prevent unauthorized changes to application software, and effectively ensure the continuation of computer processing service if an unexpected interruption occurs. Although FDIC has implemented numerous information system controls, remaining weaknesses in these areas increase the risk of unauthorized disclosure, disruption of critical operations, and loss of assets. Physical security controls should be designed to prevent vandalism and sabotage, theft, accidental or deliberate alteration or destruction of information or property, and unauthorized access to computing resources. These controls involve restricting physical access to computer resources, usually by limiting access to the buildings and rooms in which these resources are housed, and periodically reviewing access granted to ensure that it continues to be appropriate based on criteria established for granting such access. FDIC has taken several actions to strengthen its physical security, including reducing the number of staff who have access to those areas where computer resources are housed.
However, while it has established policies for granting access to its computer facilities and procedures for periodically reviewing the continued need for such access, it has not yet developed a process to ensure compliance with these policies and procedures. For example, while FDIC’s policy provides that contractor access may only be granted for up to 6 months, 24 of 126 contractors had access to FDIC’s computer center for periods exceeding 6 months, some for several years. Without a process to ensure compliance with established policies and procedures, FDIC cannot ensure that physical access to critical computer resources is adequately controlled. In response to our finding, the acting CIO has since established additional controls to ensure compliance with its physical access policies relating to the length of time access may be granted and the maintenance of authorized access request forms. Further, FDIC recently filled a position whose duties specifically include providing daily compliance monitoring and oversight to ensure that physical access policies and procedures are properly followed. Standard application software change control practices prescribe that only authorized, fully tested, and reviewed changes should be placed in operation. Further, these practices provide a process for reviewing all software modifications made. This should include reviews of changes made to software used to link applications to computer data and programs needed to support their operations. While FDIC has implemented a procedure to review application software changes for evidence of unauthorized code, fraud, or other inappropriate actions, the procedure does not include a review of other types of changes, such as those made to software used to facilitate access to software files and data. As a result, unauthorized changes could be made that alter computer program logic.
In response, FDIC has expanded its application software change process to include reviews of other software modifications, including those that facilitate access to files and data. Service continuity controls should be designed to ensure that when unexpected events occur, critical operations continue without interruption or are promptly resumed, and critical and sensitive data are protected. An essential element is up-to-date, detailed, and fully tested service and business continuity plans. To be effective, these plans should be understood by all key staff and should include surprise testing. FDIC has acted to enhance its service continuity program. For example, it (1) updated and conducted tests of its service continuity plan, (2) completed business continuity plans for all its facilities and conducted tests of these plans, and (3) established an alternate backup site to support its network and other computing resources. However, FDIC has not yet performed unannounced testing of its business continuity plan. Such tests are more realistic than announced tests and more accurately measure the readiness of staff for emergency situations. Further, FDIC had not ensured that the emergency personnel lists included in its business continuity plan are current. We identified 66 FDIC employees whose names were in the emergency personnel list but who had separated from FDIC, including 13 staff listed as key emergency team members. Without current emergency personnel lists, FDIC risks not being able to restore its critical business operations in a timely manner. FDIC has since established new procedures to ensure that emergency personnel lists remain current. FDIC officials said that they would incorporate unannounced testing of the business continuity plan into the 2003 operating plan, and would conduct these unannounced tests by December 31 of this year.
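Keeping the emergency personnel lists current, as the new procedures require, reduces to reconciling the list against the active-employee roster. A hypothetical sketch of that check (all names invented):

```python
# Flag emergency-list members who no longer appear on the active roster,
# the condition behind the 66 separated employees the audit found.
# Names are invented for illustration.
emergency_list = {"alice", "bob", "carol"}
active_employees = {"alice", "carol", "dan"}

separated = sorted(emergency_list - active_employees)
print(separated)  # ['bob'] should be removed or replaced on the list
```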
The primary reason for FDIC’s continuing weaknesses in information system controls is that it has not yet fully developed and implemented a comprehensive corporate program to manage computer security. As described in our May 1998 study of security management best practices, a comprehensive computer security management program requires the following five elements, all essential to ensuring that information system controls work effectively on a continuing basis: a central security management structure with clearly defined roles and responsibilities; appropriate policies, procedures, and technical standards; security awareness; periodic risk assessment; and an ongoing program of testing and evaluation of the effectiveness of policies and controls. We previously recommended to FDIC that it fully develop and implement a comprehensive security management program that includes each of these elements. FDIC has made progress in implementing a security management program. Specifically, it (1) established a central security management structure; (2) implemented security policies, procedures, and technical standards; and (3) enhanced security awareness training. However, the steps taken to address periodic risk assessment and ongoing testing and evaluation of policies and controls have not yet been sufficient to ensure continuing success. Central security management structure. FDIC has established a central security function and has appointed information security managers for each of its divisions, with defined roles and responsibilities. Further, it has provided guidance to ensure that security managers coordinate with the central security function on security-related issues. It has also developed the support of divisional senior management for the central security function. Appropriate policies, procedures, and technical standards. FDIC has updated its security policies and procedures to cover all aspects of the organization’s interconnected environment and all computing platforms.
It has also established technical security standards for its mainframe and network systems and security software. Security awareness. Computer attacks and security breakdowns often occur because computer users fail to take appropriate security measures. FDIC has enhanced its security awareness program, which all employees and contractors are required to complete annually. It has also developed specialized security awareness training to address the specific needs of its security managers. Periodic risk assessment. Regular assessments assist management in making decisions on necessary controls by helping to ensure that security resources are effectively distributed to minimize potential loss. And by increasing awareness of risks, these assessments generate support for the adopted policies and controls, which helps ensure that the policies and controls operate as intended. Further, Office of Management and Budget Circular A-130, appendix III, prescribes that risk be assessed when significant changes are made to a system, but at least every 3 years. FDIC has not fully developed a framework for assessing and managing risk on a continuing basis. While it has taken some action, including developing a framework for assessing risk when significant changes are made to computer systems and providing tools for its security managers to use in conducting risk assessments, it has not developed a process for conducting these assessments. Our study of risk assessment best practices found that a process for performing such assessments should specify (1) how the assessments should be initiated and conducted, (2) who should participate, (3) how disagreements should be resolved, (4) what approvals are needed, and (5) how these assessments should be documented and maintained. In response, FDIC’s acting CIO said that the corporation is taking steps to develop risk assessment guidance. Testing and evaluation.
A program that assesses the effectiveness of policies and controls includes processes for monitoring compliance with established information system control policies and procedures and testing the effectiveness of those controls. During the past year, FDIC has taken steps to establish such a program of testing and evaluation. Specifically, it has established a self-assessment program to evaluate information system controls and has implemented a program to monitor compliance with established policies and procedures that includes performing periodic reviews of system settings and tests of user passwords. Nonetheless, FDIC’s program does not cover all critical evaluation areas. Missing is an ongoing program that targets the key control areas of physical and logical access, segregation of duties, system and application software, and service continuity. In response, FDIC’s acting CIO said that the corporation is taking steps to establish an oversight program to cover its control environment that will include steps to assess areas such as access controls, segregation of duties, system and application software, and service continuity. Further, FDIC plans to address each of these areas as part of its evolving self-assessment process. Until a comprehensive program to monitor and test each of these control areas is in place, FDIC will not have the oversight needed to ensure that many of the same type of information system control weaknesses previously identified are not repeated. An effective ongoing comprehensive program to monitor compliance with established procedures can be used to identify and correct information security weaknesses, such as those discussed in this report. For example, a comprehensive process to review all access authority granted to each user to ensure that access was limited to that needed to complete job responsibilities could identify inappropriate access authority granted to users. 
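The comprehensive per-user review described above can be sketched as aggregating every grant a user holds across systems and flagging grants outside the user's approved set. This is an illustration only; the systems, users, and privileges are invented, not FDIC's:

```python
# Aggregate each user's grants across all systems, then flag any grant
# not in the user's approved set. Data are invented for illustration.
from collections import defaultdict

grants = [  # (system, user, privilege) records from each platform
    ("mainframe", "jdoe", "read-production-data"),
    ("network", "jdoe", "modify-config"),
    ("mainframe", "asmith", "read-production-data"),
]
approved = {  # privileges each user needs to perform their job
    "jdoe": {"read-production-data"},
    "asmith": {"read-production-data"},
}

by_user = defaultdict(set)
for system, user, privilege in grants:
    by_user[user].add(privilege)

excess = {
    user: sorted(privs - approved.get(user, set()))
    for user, privs in by_user.items()
    if privs - approved.get(user, set())
}
print(excess)  # {'jdoe': ['modify-config']} flags the inappropriate grant
```

Reviewing the aggregate view per user, rather than one system at a time, is what lets such a process catch access that is individually plausible but inappropriate in combination.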
A comprehensive program to regularly test information system controls can be used to detect network security weaknesses. For example, our technical reviews of network servers identified default system passwords in use that are readily known to hackers and could be used by them to gain the access needed to exploit the network and launch an attack on FDIC systems. Appropriate technical reviews of the network servers and routers can identify these types of exposures. FDIC has made progress in correcting information system control weaknesses and implementing controls, including limiting and reducing access, altering software change procedures, expanding testing of disaster recovery plans, and defining the roles and responsibilities of information security officers. Nonetheless, continuing and newly identified security weaknesses exist. FDIC has not adequately restricted mainframe access, sufficiently secured its network, or completed a program for fully monitoring access activity. Weaknesses in physical security, application software, and service continuity increase the level of risk. The effect of these weaknesses—including prior and current year—further increases the risk of unauthorized disclosure of critical financial and sensitive personnel and bank examination information, disruption of critical financial operations, and loss of assets. Implementation of FDIC’s plan to correct these weaknesses is essential to establish an effective information system control environment. The primary reason for FDIC’s continuing weaknesses in information system controls is that it has not yet been able to fully develop and implement a comprehensive program to manage computer security. 
While it has made progress in the past year in establishing key elements of this program—including establishing a security management structure, issuing security policies and procedures, and promoting security awareness—its systems will remain at heightened risk until FDIC establishes a process for assessing and managing risks on a continuing basis and fully implements a comprehensive, ongoing program of testing and evaluation to ensure policies and controls are appropriate and effective. Until FDIC takes steps to correct or mitigate its information system control weaknesses and fully implements a computer security management program, FDIC will have limited assurance that its financial and sensitive information is adequately protected from inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction. To establish an effective information system control environment, in addition to completing actions to resolve prior year weaknesses that remain open, we recommend that the Chairman instruct the acting CIO, as the corporation’s key official for computer security, to ensure that the following actions are completed. Correct the 29 information system control weaknesses related to mainframe access, network security, access monitoring, physical access, application software, and service continuity identified in our current (calendar year 2002) audit. We are also issuing a report designated for “Limited Official Use Only,” which describes in more detail the computer security weaknesses identified and offers specific recommendations for correcting them. Fully develop and implement a computer security management program. Specifically, this would include (1) developing and implementing a process for performing risk assessments and (2) establishing an effective ongoing program of tests and evaluations to ensure that policies and controls are appropriate and effective.
In providing written comments on a draft of this report, FDIC’s Chief Financial Officer (CFO) agreed with our recommendations. His comments are reprinted in appendix I of this report. Specifically, FDIC plans to correct the information systems control weaknesses identified and fully develop and implement a computer security management program by December 31, 2003. According to the CFO, significant progress has already been made in addressing the identified weaknesses. We are sending copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member of the House Committee on Financial Services; members of the FDIC Audit Committee; officials in FDIC’s divisions of information resources management, administration, and finance; and the FDIC inspector general. We will also make copies available to other parties upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-3317 or David W. Irvin, Assistant Director, at (214) 777-5716. We can also be reached by e-mail at [email protected] and [email protected], respectively. Key contributors to this report are listed in appendix II. In addition to the person named above, Edward Alexander, Gerald Barnes, Angela Bell, Nicole Carpenter, Lon Chin, Debra Conner, Anh Dang, Kristi Dorsey, Denise Fitzpatrick, David Hayes, Jeffrey Knott, Harold Lewis, Duc Ngo, Eugene Stevens, Rosanna Villa, Charles Vrabel, and Chris Warweg made key contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people.
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to GAO Mailing Lists” under the “Order GAO Products” heading.
Effective controls over information systems are essential to ensuring the protection of financial and personnel information and the security and reliability of bank examination data maintained by the Federal Deposit Insurance Corporation (FDIC). As part of GAO's 2002 financial statement audits of the three FDIC funds, we assessed (1) the corporation's progress in addressing computer security weaknesses found in GAO's 2001 audit, and (2) the effectiveness of FDIC's controls. FDIC has made progress in correcting information system controls since GAO's 2001 review. Of the 41 weaknesses identified that year, FDIC has corrected or has specific action plans to correct all of them. GAO's 2002 audit nonetheless identified 29 new computer security weaknesses. These weaknesses reduce the effectiveness of FDIC's controls to safeguard critical financial and other sensitive information. Based on our review, mainframe access was not sufficiently restricted, network security was inadequate, and a program to fully monitor access activities was not implemented. Additionally, weaknesses in areas including physical security, application software, and service continuity further increased the risk to FDIC's computing environment. The primary reason for these continuing weaknesses is that FDIC has not yet completed development and implementation of a comprehensive program to manage computer security across the organization. FDIC has, among other things, established a security management structure, but still has not fully implemented a process for assessing and managing risk on a continuing basis or an ongoing program of testing and evaluating controls. The corporation's acting chief information officer has agreed to complete actions intended to address GAO's outstanding recommendations by December 31 of this year.
For years, auditors have reported long-standing weaknesses in DOD’s ability to promptly pay its bills and accurately account for and record its disbursements. Numerous audit reports issued by us and by the DOD Inspector General have cited deficiencies in management oversight, a weak internal control environment, flawed financial management systems, complex payment processes, delinquent and inaccurate commercial and vendor payments, and lax management of DOD’s travel card programs. Those deficiencies have resulted in billions of dollars in unrecorded or improperly recorded disbursements, over- and underpayments or late payments to contractors, and fraudulent or unpaid travel card transactions. DOD’s disbursement processes are complex and error-prone. Although DFAS is responsible for providing accounting services for DOD, military service and other defense agency personnel play a key role in DOD’s disbursement process. In general, military service and defense agency personnel obligate funds for the procurement of goods and services, receive those goods and services, and forward obligation information and receiving reports to DFAS. Separate DFAS disbursing offices and accounting offices then pay the bills and match the payments to obligation information. Several military services and DOD agencies can be involved in a single disbursement and each has differing financial policies, processes, and stand-alone, nonstandard systems. As a result, millions of disbursement transactions must be keyed and rekeyed into the vast number of systems involved in any given DOD business process. Also, transactions must be recorded using an account coding structure that can exceed 75 digits, and this coding structure often differs—in terms of the type, quantity, and format of data required—by military service.
DFAS’s ability to match disbursements to obligation records is complicated by the fact that DOD’s numerous financial systems may contain inconsistent or missing information about the same transaction. Input errors by DFAS or service personnel and erroneous or missing obligation documents are two of the major causes of inconsistent information. For calculating and reporting performance metrics related to payment recording errors, officials from the Comptroller’s office included the following categories.

Unmatched disbursements—Payments that were made by a DFAS disbursing office and received by a DFAS accounting office but have not yet been matched to the proper obligation.

Negative unliquidated obligations—Payments that have been matched to and recorded against the cited obligations but which exceed the amount of those obligations.

Intransits—Payments that have not yet been received by the DFAS accounting office for recording and matching against the corresponding obligation.

Suspense account transactions—Payments that cannot be properly recorded because of errors or missing information (e.g., transactions that fail system edit controls because they lack proper account coding) and are therefore temporarily put in a holding account until corrections can be made.

For DOD to know how much it has spent and/or how much is still available for needed items, all transactions must be promptly and properly recorded. However, we reported as early as 1990 that DOD was unable to fully identify and resolve substantial amounts of payment recording errors. We also stated that DOD’s early reporting of these errors significantly understated the problems. For example, DFAS excluded $14.8 billion of intransits from its 1993 benchmark against which it measured and reported its progress in reducing recording problems in later years. In addition, DOD excluded suspense account transactions from its reporting of payment recording errors until as late as 1999.
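The four error categories above are mutually exclusive ways a payment can fail to be properly recorded, and the distinction can be sketched as a simple classifier. This is an illustration only; the record fields and the ordering of the checks are assumptions, not an actual DFAS data layout or business rule.

```python
# Illustrative classification of a payment transaction into the four
# payment-recording-error categories the Comptroller's office used.
# Field names are hypothetical.

def classify(payment):
    """Assign a payment record to a payment-recording-error category."""
    if payment.get("suspense"):
        # Held in a suspense account pending correction of errors
        # or missing information (e.g., failed account-coding edits).
        return "suspense account transaction"
    if not payment.get("received_by_accounting"):
        # Not yet received by the DFAS accounting office.
        return "intransit"
    if payment.get("matched_obligation") is None:
        # Received but not yet matched to the proper obligation.
        return "unmatched disbursement"
    if payment["amount"] > payment["matched_obligation"]:
        # Matched, but the payment exceeds the cited obligation.
        return "negative unliquidated obligation"
    return "properly recorded"

example = {"amount": 120_000, "received_by_accounting": True,
           "matched_obligation": 100_000, "suspense": False}
# classify(example) yields "negative unliquidated obligation"
```

The sketch also shows why complete reporting matters: a transaction omitted from any one of these buckets simply disappears from the error totals.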
Finally, when negative unliquidated obligations, intransits, and suspense account transactions were reported, they were reported using net rather than absolute values. DFAS has overall responsibility for the payment of invoices related to goods and services supplied by commercial vendors. As part of a reorganization effort in April 2001, DFAS separated its commercial payment services into two efforts—contract pay and vendor pay. Contract pay handles invoices for formal, long-term contract instruments that are typically administered by the Defense Contract Management Agency (DCMA). These contracts tend to cover complex, multiyear purchases with high dollar values, such as major weapon systems. Payments for contracts are made from a single DFAS system— Mechanization of Contract Administration Service (MOCAS). For fiscal year 2001, DFAS disbursed about $78 billion for over 300,000 contracts managed in MOCAS. The vendor pay product line handles invoices for contracts not administered by DCMA, plus miscellaneous noncontractual payments such as utilities, uniforms/clothing, fuels, and food. Vendor pay is handled by 15 different systems throughout DFAS and, annually, DFAS personnel pay nearly 10 million vendor invoices in excess of $70 billion. In general, DOD makes vendor payments only after matching (1) a signed contractual document, such as a purchase order, (2) an obligation, (3) an invoice, and (4) a receiving report. If any one of these components is missing, such as an obligation not being entered into the payment system, payment of the invoice will be delayed. According to DOD officials, approximately 80 percent of payment delinquencies are due to the delayed receipt of receiving reports by DFAS from the military service activities. DOD implemented the current travel card program in November 1998, through a DOD task order with Bank of America. This was in response to the Travel and Transportation Reform Act of 1998 (P.L. 
105-264), which modified the existing DOD Travel Card Program by mandating that all government personnel must use the government travel card to pay official travel costs (for example, hotels, rental cars, and airfare) unless specifically exempted. The travel card can also be used for meals and incidental expenses or to obtain cash from an automatic teller machine. The intent of the travel card program was to provide increased convenience to the traveler and lower the government’s cost of travel by reducing the need for cash advances to the traveler and the administrative workload associated with processing/reconciling travel advances. DOD’s travel card program, which is serviced through Bank of America, includes both individually billed accounts and centrally billed accounts. When the travel card is submitted to a merchant, the merchant will process the charge through its banking institution, which in turn charges Bank of America. At the end of each banking cycle (once each month), Bank of America prepares a billing statement that is mailed to the cardholder (or account holder) for the amounts charged to the card. The statement also reflects all payments and credits made to the account. For both individual and centrally billed accounts, Bank of America requires that the cardholder make payment on the account in full within 30 days of the statement closing date. If the cardholder—individual or agency—does not pay the monthly billing statement in full and does not dispute the charges within 60 days of the statement closing date, the account is considered delinquent. For individually billed accounts, within 5 business days of return from travel, the cardholder is required to submit a travel voucher claiming legitimate and allowable expenses, which must be reviewed and approved by a supervisor. DOD then has 30 days in which to make reimbursement. 
Although DOD, like other agencies, relies on its employees to promptly pay their individually billed accounts, DOD does have some tools to monitor travel card activity and related delinquencies, including Bank of America’s Web-based Electronic Account Government Ledger System (EAGLS). Using EAGLS, supervisors can obtain reports on their cardholders’ transaction activity and related payment histories. For the centrally billed accounts, the travel office at each military installation or defense agency must first reconcile the charges shown on the centrally billed travel charge card account with the office’s internal records of transportation requests. After reconciliation has been completed, the voucher is sent to DFAS for payment. Because the travel card program is fairly new, DOD does not have a long history of reporting statistics for delinquencies. However, in our previous reports and testimonies, we have reported that DOD’s individually billed delinquency rate is higher than that of other federal agencies. As of September 2002, DOD’s delinquency rate was approximately 7.3 percent, about 3 percentage points higher than the rate at other federal agencies. Among the military services, however, the Air Force had the lowest delinquency rate. As of September 2002, the Air Force delinquency rate was 4.8 percent, significantly lower than the rest of DOD. Even though the Air Force had lower numbers of delinquent accounts, we found that control environment weaknesses and breakdowns in key controls were departmentwide and that these deficiencies led to instances of potential fraud and abuse with the use of travel cards in all the military services. In 1998, DFAS developed its Performance Contract to focus on continued achievement of its mission to provide responsive, professional finance and accounting services to DOD. As part of this contract with DOD, DFAS defined its performance objectives and identified specific performance measurement indicators.
DFAS managers—and sometimes staff—are rated and rewarded based on their ability to reach annual reduction goals for each indicator. Performance metrics are now calculated monthly and the DFAS Director and the DOD Comptroller regularly review the results. Section 1008 of the National Defense Authorization Act for Fiscal Year 1998 (P.L. 105-85) directed the Secretary of Defense to submit a biennial strategic plan for the improvement of financial management to the Congress. In conjunction with the plan, the DOD Comptroller decided to develop a performance measurement system—a set of departmentwide metrics that will provide clear-cut goals for financial managers to monitor their progress in achieving reform. To begin this effort, the Comptroller adopted many of the DFAS performance measurement indicators because the DFAS metrics program had been underway for some time and was reporting successes. For payment recording errors and commercial payment backlogs in particular, the Comptroller’s metrics used information gathered and tracked by DFAS for its performance management contract. The metrics cited in the Comptroller’s testimony represent only a few of the financial management performance metrics developed to date. From a comprehensive set, the detailed metrics will be rolled up into “dashboard” metrics that will provide the Secretary of Defense and the Congress with a quick measure of DOD’s status in relation to critical financial management goals. This effort is part of an even larger effort by DOD to develop programmatic metrics for all of its operations. In general, the definitions and methodologies for gathering the data used by DOD Comptroller officials to calculate the cited improvement percentages at the ending measurement date were either consistent with or better than those used at the beginning measurement date or for prior reporting on payment recording errors, commercial payment backlogs, and travel card payment delinquencies. 
We did find that the reported metrics overstated the rate of improvement in some areas because Comptroller officials included transactions that DFAS would not consider to be payment errors or because they chose an inappropriate comparison to measure travel card delinquencies. However, recalculation of the metrics after correcting for these factors still showed positive—although less dramatic—improvement trends. DOD has gradually improved its reporting of payment recording errors over the years. DOD is now including all known categories of payment errors— unmatched disbursements, negative unliquidated obligations, intransits, and suspense account transactions—in its definition and, except in the case of intransits, is using absolute rather than net amounts in its calculations. However, the reporting of payment recording errors may not be complete. For example, work that we have performed on closed DOD accounts and on unliquidated obligations indicates that recording errors are not always identified or resolved appropriately. DFAS agrees that to properly manage and improve its payment processes, it must have a complete universe of payment recording errors. Therefore, DFAS personnel are currently working to determine whether the error categories identified to date contain all of the relevant transactions and whether other error categories exist. While the same basic methodologies were used for calculating the cited metrics at the beginning and ending measurement dates, Comptroller officials overstated DOD’s improvement percentages because the October 2000 calculation included transactions that did not meet the DFAS criteria for being considered payment errors while the October 2001 calculation did not include them. 
First, the October 2000 calculation for payment recording errors included all transactions that were being held in DFAS suspense accounts; however, DFAS uses certain suspense accounts to record collection transactions, such as accrued payroll taxes and receipts for the sale of military property, that are held temporarily before being distributed to the proper government agency or DOD entity. The transactions in these accounts, which DFAS labels as “exempt suspense accounts,” do not represent payment recording errors. In fiscal year 2001, DFAS Cleveland changed its practice of charging payroll taxes to suspense accounts and began appropriately accruing taxes in an accrued payroll tax account. As a result, payment recording errors as calculated by Comptroller officials at October 2001 were reduced by an estimated $7.5 billion—the amount of DFAS Cleveland’s accrued payroll taxes—even though payment processes were not improved at all. Second, in fiscal year 2001, DFAS Indianapolis corrected a reporting error by a defense agency that had been double-counting transactions in its suspense accounts. This resulted in an estimated $1.1 billion reduction from amounts reported in October 2000, even though no payment recording errors were corrected or resolved. In addition, Comptroller officials measured intransits using net rather than absolute values and did not adopt DFAS criteria for aging intransit and suspense account transactions. These practices affected the balances used to calculate the metrics at both the beginning and ending measurement dates. First, net rather than absolute values were used to calculate intransits at October 2000 and October 2001, which understated both balances by approximately $4 billion. When net amounts are reported, collections, reimbursements, and adjustments are offset against disbursements, thus reducing the balance of intransit transactions. 
Second, the reported metrics included all intransit and suspense account transactions at October 2000 and October 2001 regardless of their age. However, DOD allows 60 days to 180 days for the normal processing of various payment transactions because of systems limitations and the complexity of the department’s processes and, in line with these criteria, DFAS’s metrics related to payment errors only consider aged intransit and suspense account transactions. By not using DFAS’s criteria for aged intransit and suspense account transactions, the Comptroller officials overstated the balances of payment recording errors by approximately $6 billion at the beginning and $5 billion at the ending measurement dates. Figure 1 illustrates the effect on improvement rates of (1) eliminating exempt suspense accounts and double counting, (2) using DFAS’s criteria for aged intransits and suspense amounts, and (3) using absolute rather than net amounts for intransits. Our recalculation shows an overall 46 percent reduction in payment recording errors between October 2000 and October 2001 rather than the 57 percent reduction reported by the Comptroller; however, the reductions are still significant and the trend is still overwhelmingly positive. Between October 2001 and September 2002, DOD continued to report that it had reduced payment recording errors. Comptroller officials calculated a 26 percent reduction during that period while our recalculation shows a 22 percent reduction. The metrics for commercial payment backlogs (delinquent unpaid invoices) at April 2001 and October 2001 were calculated using consistent definitions and methodologies. An invoice was considered delinquent if payment was not made within the time frame established by the contract terms (e.g., by the 15th day after the invoice date) or, if no time frame was specified, on or before the 30th day after a proper invoice was received. 
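The two measurement choices discussed above, netting collections against disbursements versus using absolute values, and counting all transactions versus only aged ones, can be illustrated with a few lines of arithmetic. The amounts and ages below are hypothetical, chosen only to show how each choice moves the reported balance.

```python
# Illustrative arithmetic for the intransit measurement choices
# discussed above. Amounts (in $ billions) and ages are hypothetical.

intransits = [
    {"amount": 9.0,  "age_days": 200},  # disbursement
    {"amount": -5.0, "age_days": 45},   # offsetting collection/adjustment
    {"amount": 2.0,  "age_days": 90},   # disbursement
]

# Net value offsets collections, reimbursements, and adjustments
# against disbursements, understating the unresolved balance ...
net_value = sum(t["amount"] for t in intransits)            # 6.0

# ... while the absolute value counts every unrecorded transaction.
absolute_value = sum(abs(t["amount"]) for t in intransits)  # 16.0

# DFAS's own metrics count only transactions older than the normal
# processing window (60 to 180 days depending on transaction type);
# 180 days is used here purely for illustration.
AGE_THRESHOLD_DAYS = 180
aged_absolute = sum(abs(t["amount"]) for t in intransits
                    if t["age_days"] > AGE_THRESHOLD_DAYS)  # 9.0
```

As the sketch shows, the three measures can differ substantially for the same underlying transactions, which is why consistent definitions at the beginning and ending measurement dates matter for the reported improvement rate.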
DFAS reported information on delinquent invoices to Comptroller officials monthly using standardized input sheets. The total backlog percentages were then calculated by dividing the number of delinquent invoices outstanding by the total number of invoices on hand. According to the DOD Comptroller’s metrics, delinquent invoices for vendor pay decreased by 41 percent from April 2001 through October 2001 while delinquent invoices for contract pay decreased by 32 percent during that same period. Because DFAS officials stated that the decrease cited in the Comptroller’s metrics was primarily due to intensive focus placed on decreasing the backlog of delinquent vendor invoices, our review concentrated on vendor pay issues. For the travel card metrics, consistent definitions and methodologies were used to gather the data and calculate the improvement percentages cited by the DOD Comptroller for January 2001 and December 2001. Travel card payments were considered delinquent if they were not paid within 60 days of the monthly statement closing date. Even though the terms of the travel cardholder’s agreement with Bank of America requires payment of the statement within 30 days of the statement closing date, it is industry practice to allow 60 days before the invoice is considered delinquent and interest is charged. Comptroller officials used a standard industry practice to calculate the travel card delinquency rates— the total dollar amount outstanding for 60 days or more was divided by the total balance outstanding. While the definitions and methodology were consistent with standard practices, the metrics comparison of delinquencies for individually billed accounts in January to those in December could be misleading. As our recent work shows, individually billed travel card delinquencies have been cyclical, with the highest delinquencies occurring in January and February. 
Therefore, the most useful metrics would compare same month to same month, for example, January to January or December to December. If the Comptroller officials had compared individual travel card delinquencies at January 2001 to those at January 2002, the reported decrease would have been 16 percent as opposed to 34 percent. DFAS only provided us with internally generated summary-level data that reconciled to the totals reported for payment recording errors and commercial pay backlogs. DFAS did not provide us with detailed transaction-level data that supported those metrics. As a result, we were unable to test whether (1) all payment recording errors and delinquent commercial payments were properly included in the metrics and (2) the actions taken to resolve or correct payment recording errors were appropriate. For individual and centrally billed travel card delinquencies, we were able to obtain independent verification from a source outside DOD that supported the Comptroller’s metrics. Although we could not audit the reported metrics for all of the measured areas, we verified that DFAS and other DOD organizations have made numerous policy, procedure, and systems changes that would support an overall trend toward improved performance. For payment recording errors and commercial payment backlogs, perhaps the most significant change has been DOD’s inclusion of performance measures in its contracts with DFAS. The performance contract and an accompanying data dictionary provide specific, measurable reduction goals, which DFAS management— and in some cases staff—are held accountable for reaching. The resulting focus has fostered innovative process and systems improvements as well as better communication among the parties involved in preventing or resolving these problems. For example, DFAS holds monthly videoconferences with its centers and field sites to discuss progress and any impediments to reaching that period’s goals. 
In general, DFAS centers did not maintain history files of all the transactions that were not promptly matched with obligations, created negative unliquidated obligations, were in transit longer than allowable, or were in suspense accounts during the period October 2000 through October 2001—information that is necessary to verify the completeness and accuracy of the reported metrics. DFAS officials explained that the detailed data supporting the reported monthly totals are compiled by hundreds of DFAS field sites using numerous accounting systems and there is no specific requirement for the field sites to save the data. While some DFAS officials believe that it would be possible to recreate transaction-level detail to support month-end totals, the task would be extremely onerous and time consuming. Although we were unable to verify through audit procedures the accuracy of the reductions reported by the Comptroller, we did reconcile summary-level information provided by the DFAS centers to the metric amounts. We also verified that DFAS has made numerous policy and systems improvements that support a continuing trend of reductions in payment recording errors as illustrated by the metrics in figure 2. DFAS has been working to reduce payment recording errors for more than a decade. In the late 1990s, DFAS consolidated most of its disbursing and accounting functions from 300 defense accounting offices into 5 centers, in large part to help streamline the payment recording process. DFAS has also been working with other DOD components to consolidate or replace about 250 outdated and nonintegrated financial and accounting systems. While the systems effort will take many years and must be accomplished within DOD’s overall plan for systems development and integration, DFAS has made, and continues to make, improvements in the policies and systems tools available to DFAS personnel for preventing and correcting payment recording errors.
Since October 2000, DFAS has made several policy changes that have affected the payment recording process. In January 2001, DOD revised its official guidance to clarify and strengthen policies related to the prompt (1) recording of disbursements and obligations and (2) resolution of payment recording errors. The revision gave DFAS the authority to record obligations in order to resolve individual unmatched disbursements, negative unliquidated obligations, and certain suspense account transactions when the military services or DOD components had not provided DFAS with accurate obligation information within specified time frames. DFAS also expanded its prevalidation policy, which it claims has been key to reducing payment errors associated with commercial contracts. Prevalidation requires that DFAS personnel ascertain that there is a valid obligation recorded in the accounting records before making a payment. Between November 2000 and October 2001, DFAS lowered the dollar threshold amount for transactions requiring prevalidation from $100,000 to $25,000. DFAS developed new systems tools for communicating accounting information among its centers and field locations that have reduced the amount of time DFAS personnel need to match disbursements to obligations. For example, since the late 1990s DFAS has implemented the following.

Electronic data access capability, which provides web access to contract, billing, and other documents pertinent to the payment recording process. Electronic access to these documents enables users to obtain information more quickly than in the past, when many documents were stored in hard-copy format.

Phase 1 of the Defense Cash Accountability System (DCAS), which provides a standardized, electronic means for DFAS centers to report expenditure data for transactions involving more than one military service (cross-disbursements). Prior to DCAS, the centers had different systems and formats for reporting this information to one another and to Treasury, a situation that increased the complexity of recording and matching cross-disbursements. According to DFAS officials, DCAS reduced the cross-disbursement cycle time from 60 days to 10 days.

The Standard Contract Reconciliation Tool (SCRT), which provides DFAS personnel a consolidated database for researching commercial contract records. Prior to SCRT, locating and accessing these records was difficult due to the variety of accounting, contracting, and entitlement systems involved.

DFAS centers have also developed individual applications that have improved payment processes. For example, DFAS Indianapolis implemented an Access “Wizard” application to automate the process of matching intragovernmental expenditure transactions to obligation records. The program also enables center staff to identify transactions that have not been processed within 30 days so they can follow up with field accounting personnel. DFAS was unable to provide detailed transaction-level data that supported the metrics related to vendor payment backlogs—the most significant contributor to the reductions. DFAS only maintained summary-level data that were generated by the 23 DFAS field sites. Using standard definitions and standard summary spreadsheets, DFAS personnel collected the summary information monthly through data calls to the more than 15 different systems that track DOD vendor pay backlog information. As a result, we were only able to confirm that the summary information provided by DFAS reconciled to the amounts reported by the Comptroller. We were unable to verify by audit the accuracy or completeness of that data. DFAS management has focused on reducing commercial payment backlogs since fiscal year 2000 and this focus is continuing through the present.
According to its performance contracts, DFAS's goal was to reduce the backlog by 15 percent per year beginning in fiscal year 2000 from a baseline of 48,000 delinquent invoices. In April 2001, DFAS centralized operational control of contract pay and vendor pay under one executive, who was given ultimate responsibility for meeting these performance goals. DFAS also made site-specific procedural changes to reduce the backlog of vendor payments. These included: hiring temporary contract and permanent staff at key sites; forecasting when civilian employees in Europe would be taking vacation and then staggering vacation leave and/or hiring temporary help (e.g., in Germany, every civilian employee has 6 weeks of annual leave, which is usually taken during the summer); and forming partnerships with the military services and defense agencies to improve their processing time for receiving reports, since DFAS must match the receiving report to the invoice before payment can be made. DFAS credits these and other changes for the continued reduction of the backlog of delinquent invoices. Figure 3 below illustrates the trend in the reduction of outstanding delinquent vendor invoices compared to the total number of invoices on hand. We were able to verify the reductions cited by the Comptroller in individual and centrally billed travel card delinquencies. We obtained travel card delinquency information from an independent source, the General Services Administration (GSA), that supported the Comptroller's metrics. GSA receives information from individual travel card vendors, such as Bank of America, and prepares a monthly summary report for DOD that documents individual and centrally billed travel card delinquencies by military service or defense agency. We compared the GSA data to the cited metrics and verified that the reported reductions in travel card delinquencies were accurate.
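The 15-percent-per-year backlog goal described above compounds against the 48,000-invoice baseline. As a rough illustration (the function name and the three-year horizon are ours; only the baseline and the annual rate come from the report):

```python
def backlog_targets(baseline, annual_reduction, years):
    """Year-end backlog targets under a fixed annual percentage reduction."""
    targets = []
    level = baseline
    for _ in range(years):
        level *= (1 - annual_reduction)  # each year keeps 85% of the prior backlog
        targets.append(round(level))
    return targets

# DFAS goal: 15 percent per year from a baseline of 48,000 delinquent invoices
print(backlog_targets(48_000, 0.15, 3))  # → [40800, 34680, 29478]
```

Note that a fixed percentage reduction yields smaller absolute cuts each year, which is one reason sustained management attention matters more in later years.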
As with the other problem areas, DOD credits the decrease in travel card delinquency rates in both individual and centrally billed accounts primarily to increased management attention. For the centrally billed accounts, DOD has attributed the initial high delinquency rates to problems in transferring the travel card contract from American Express to Bank of America. When Bank of America was given the contract, its on-line travel information system, EAGLS, was not fully operational and therefore was unable to accurately process all of the travel data being transferred by American Express. Because EAGLS contained incorrect account numbers, invoice information, and billing addresses, DOD agency program coordinators did not have the information necessary to determine which accounts were delinquent, in suspense, or canceled. While DOD and Bank of America officials were working jointly to identify and resolve the problems, centrally billed invoices became backlogged. Once the problems were resolved, DOD was able to reduce the backlog. As of December 31, 2002, DOD’s centrally billed delinquency rate was 1.5 percent, well below fiscal year 2002’s proposed goal of 3.0 percent and equal to the delinquency rate for other federal agencies. Figure 4 below shows the centrally billed delinquency rates from January 2001 through December 2002. For individual travel cards, our recent work also supports the improved delinquency rates being reported by DOD. During the past year, we reported on the travel card programs for all three military services. In general, we found that the military services, in particular the Air Force, have given delinquencies greater attention and have used travel card audits to identify problems and needed corrective actions. We reported that all of the services are now holding commanders responsible for managing the delinquency rates of their subordinates. 
For example, Air Force management holds monthly command meetings where individual travel card delinquencies are monitored and briefed. The individual services have also implemented new programs to help reduce delinquencies, including the following. In January 2003, the Army established two goals of not more than 4.5 percent of dollars delinquent and not more than 3 percent of accounts delinquent. The Navy has established a similar goal of no more than 4 percent delinquent accounts. The Air Force is providing financial training to all inductees that includes developing a personal budget plan, balancing a checkbook, preparing a tax return, and understanding financial responsibility. The training also covers the disciplinary actions and other consequences of financial irresponsibility by service members. The Navy has developed a three-pronged approach to address travel card issues: (1) provide clear procedural guidance to agency program coordinators (APCs) and travelers that is available on the Internet, (2) provide regular training to APCs, and (3) enforce the proper use and oversight of the travel card by using data mining to identify problem areas and abuses. In January 2003, the Army issued two directives to its major commanders, which address a range of policy requirements, including: (1) training for APCs and cardholders, (2) monthly review of cardholder transactions, (3) exempting and/or discouraging the use of the card for en route travel expenses associated with deployments, and (4) prohibiting use of the card for travel expenses associated with permanent change of station moves. In addition, DOD has implemented a number of departmentwide programs to improve the individually billed travel card program. Beginning in November 2001, DOD began a salary and military retiree pay offset program for delinquencies—similar to wage garnishment.
In March 2002, the Comptroller created a Credit Card Task Force to address management issues related to the purchase and individually billed travel card programs. On July 19, 2002, the DOD Comptroller directed the cancellation of (1) inactive travel charge card accounts, (2) active travel card accounts not used in the previous 12 months, and (3) travel card accounts for which the bank cannot identify the cardholders’ organization. DOD is also encouraging individual cardholders to elect to have all or part of their travel reimbursement sent directly by DFAS to Bank of America—a payment method that is standard practice for many private sector employers. The Congress has recently addressed this issue in section 1008(a) and (b) of the National Defense Authorization Act for Fiscal Year 2003, which provides the Secretary of Defense the authority to require use of this payment method. According to DOD, about 32 percent of its individually billed cardholders elected this payment option for fiscal year 2002. As a result of these and other actions, DOD has been able to sustain reduced delinquency rates between October 2002 and December 2002, as illustrated in figure 5 below. However, DOD still needs to do more to address the underlying causes of the problems with its travel card program. In a recent testimony, we concluded that actions to implement additional “front-end” or preventative controls are critical if DOD is to effectively address the high delinquency rates and charge-offs, as well as potentially fraudulent and abusive activity. As a result of our work on travel cards, the Congress included a provision in the Department of Defense Appropriations Act for Fiscal Year 2003 requiring the Secretary of Defense to evaluate whether an individual is creditworthy before authorizing the issuance of any government charge card. If this requirement is effectively implemented, DOD should continue to improve delinquency rates and reduce potential fraud and abuse. 
The metrics that the DOD Comptroller highlighted in the March 2002 hearing relate to areas that have received considerable congressional and audit attention. As discussed earlier, the metrics program increased management focus on these problem areas and led to improvements in policies, processes, and—in a limited way—systems. While some of the cited metrics could be effective indicators of short-term financial management progress, assuming they could be verified, others are not necessarily good indicators, particularly if taken alone. In addition, continued financial management progress will require additional actions. For example, the military services and other defense agencies are key contributors to preventing and resolving payment recording errors and commercial payment delinquencies but they do not have the same incentives to improve their performance in these areas. Also, because DFAS lacks modern, integrated financial management systems, preventing and resolving payment delinquencies and errors require intensive effort day after day by DFAS and other DOD organizations, which could be difficult to sustain. The cited metrics for individual travel card delinquencies and payment recording errors could be effective indicators of financial management improvement. For payment recording errors, continuing reductions would indicate better controls over obligation, disbursement, and collection processes and that, as a result, DOD is less prone to fraud, waste, or abuse of appropriated funds. Monitoring the delinquency rates for individual travel card payments would provide DOD with an early indication that employees may be abusing their cards (i.e., using the cards for personal purchases) or having credit problems. However, improved delinquency rates do not necessarily indicate improved financial management of centrally billed travel cards or commercial payments. 
In fact, by placing too much emphasis on paying bills promptly, DOD staff may be tempted to shortcut important internal control mechanisms that are meant to ensure that the goods and services being paid for were properly authorized and actually received. We and DOD auditors have issued several reports on the improper use of individually billed travel cards at DOD and on over- and underpayments to DOD contractors but are just beginning work to identify and evaluate the adequacy of DOD policies, procedures, and controls related to purchases from vendors and centrally billed travel cards. As a result of these audits, we will likely recommend additional metrics related to program performance and internal controls for monitoring performance in these areas. Measures such as the ones discussed in this report may be useful in the short term but may not be appropriate once DFAS has reengineered its business processes and modernized its systems. As DFAS and the military services develop integrated and/or interfaced financial management systems, many of the problems related to transaction recording errors should be eliminated. Based on the recent work we performed for your committee related to DOD’s enterprise architecture, however, these new systems are years away from implementation. Because DFAS lacks modern, integrated financial management systems, preventing and resolving payment delinquencies and errors require intensive effort day after day by DFAS and military service staff. As a result, DFAS has indicated that much of the reported progress to date is sustainable only if its workload is not significantly increased or its staffing significantly decreased. Until new systems and reengineered processes are in place, DOD can take a number of steps to help maintain improvements in these areas. First, continued leadership and focus by top management will be a major factor in the sustainability of progress made to date. 
Second, because DFAS alone cannot resolve DOD's payment recording problems or payment delinquencies, integrated metrics programs across DOD will be important. As noted earlier in this report, while the military services and other defense agencies play key roles in obligating DOD funds, preparing obligation documents, receiving and preparing billing documents, preparing receiving reports, and recording transaction information into accounting systems, these organizations do not currently have complementary metrics programs. Thus, the military services and defense agencies are not measured on the accuracy and timeliness of their payment processes even though their assistance is necessary for DFAS to make improvements and resolve problems. For example, commercial payment backlogs were largely due to the military services' failure to provide receiving reports to DFAS, yet service delays were not being measured. DOD is currently developing a departmentwide, balanced program of metrics that is intended to align with its strategic goals, focus on results, and achieve auditable reports. As contemplated, DFAS, the military services, and other defense agencies will all be supporting players in this program. From the individual performance measurement programs of the military services, defense agencies, and DFAS, certain metrics will be selected and reported to the top levels of DOD management for evaluation and comparison. In this scenario, it is important that DOD properly and consistently calculate and report the selected metrics and that the military services, other agencies, and DFAS develop integrated metrics programs to assist in identifying, measuring, and resolving crosscutting issues. As the cited metrics demonstrate, DOD can make meaningful, short-term progress toward better financial management while waiting for long-term solutions, such as integrated financial systems.
Leadership, real incentives, and accountability—hallmarks of a good performance measurement program—have brought about improvements in DFAS policies and processes. The cited metrics are also serving as important building blocks for DOD's current efforts to develop a departmentwide performance measurement system for financial management. However, before the payment recording error and commercial payment backlog metrics can be relied upon for decision-making purposes, they must be properly defined, correctly measured, and linked to the goals and performance measures of other relevant DOD organizations. In addition, because the reported improvements depend heavily on the day-to-day effort of DFAS staff, sustaining the progress may be difficult if DFAS has significant workload increases or staff decreases. DOD systems do not provide the transaction-level support needed to verify the accuracy and completeness of many of its selected metrics. However, because DOD is currently working on developing an enterprisewide system architecture to guide its future systems development and implementation strategies, we are not making any recommendations in this report related to improving the underlying business systems. We did identify several steps that DOD could take now to improve the reported metrics. We are recommending that the DOD Comptroller:

- use definitions and criteria that are consistent with DFAS definitions and criteria when calculating and reporting metrics related to payment recording errors;

- measure improvements in individually billed travel card delinquencies by using same-month-to-same-month comparisons; and

- work with the military service Assistant Secretaries for Financial Management to develop performance measures for the military services and other defense agencies in areas for which there is shared responsibility, in order to complement the DFAS metrics program.
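The same-month-to-same-month comparison recommended above guards against seasonal swings in travel activity distorting the trend. A minimal sketch, using entirely hypothetical monthly delinquency rates:

```python
# Hypothetical monthly delinquency rates (percent), keyed by (year, month);
# these figures are illustrative and do not come from the report.
rates = {
    (2001, 1): 18.0, (2001, 7): 10.0, (2001, 12): 9.0,
    (2002, 1): 12.0,
}

def change(curr, prev):
    """Percentage-point change between two periods (negative = improvement)."""
    return rates[curr] - rates[prev]

# An adjacent-month comparison mixes the seasonal January spike into the trend...
print(change((2002, 1), (2001, 12)))  # → 3.0 (looks like a worsening)
# ...while a same-month comparison isolates the year-over-year improvement.
print(change((2002, 1), (2001, 1)))   # → -6.0 (the underlying trend)
```

The same data thus support opposite conclusions depending on the comparison chosen, which is why the recommendation specifies same-month baselines.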
In written comments on a draft of this report (see appendix II), the Under Secretary of Defense (Comptroller) stated that the department concurred with our recommendations and described actions to address them. The department also provided several technical comments, which we have incorporated in the report as appropriate. We are sending copies of this report to other interested congressional committees; the Secretary of Defense; the Under Secretary of Defense (Comptroller); the Director, Defense Finance and Accounting Service; and the Assistant Secretaries for Financial Management (Comptroller) for the Army, the Navy, and the Air Force. Copies will be made available to others upon request. Please contact me at (202) 512-9505 or [email protected] if you or your staff have any questions about this report. Other GAO contacts and key contributors to this report are listed in appendix III. As requested by the Chairman and Ranking Minority Member of the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services, we undertook an assessment of the consistency, accuracy, and effectiveness of certain DOD-reported metrics related to payment recording errors, commercial payment backlogs, and delinquent travel card payments. Specifically, our objectives were to determine whether (1) the cited performance measures were applied and calculated in a manner consistent with previous reporting on payment delinquencies and recording errors, (2) the cited improvement data were properly supported and represent real improvements in performance, and (3) the metrics are effective indicators of short-term financial management progress. To complete this work, we visited DOD Comptroller offices and DFAS centers in Arlington, Cleveland, Columbus, Indianapolis, and Denver where we did the following. 
- Gathered, analyzed, and compared information on how payment recording errors, commercial payment backlogs, and travel card delinquencies were defined, calculated, and reported both in the past and for the cited metrics.

- Reviewed GAO, DOD IG, and other service auditors' reports for the past 10 years.

- Reviewed DOD consolidated financial statement reporting of payment recording errors over the last 10 years.

- Reviewed DOD policy for maintaining financial control over disbursement, collection, and adjustment transactions. This policy specifically describes the requirements for researching and correcting payment recording errors.

- Obtained and analyzed the underlying summary spreadsheets from DFAS that were the information source for the Comptroller officials' calculations for payment recording errors and commercial pay backlogs. DFAS gathers this information monthly through data calls from numerous systems used to process and account for payments. Although we requested the underlying detailed transaction-level data supporting the spreadsheets so that we could perform audit tests, we were unable to obtain the detail-level data.

- Obtained and analyzed the underlying summary spreadsheets from DFAS that were the information source for the Comptroller officials' calculations for travel card delinquencies.

- Obtained independent summary data for travel card delinquencies from GSA and compared amounts to Comptroller-reported metrics.

- Interviewed center personnel about process and system improvements and gathered and analyzed relevant output that demonstrated the results of those changes. Our review of new systems tools and purported systems improvements was limited: we did not validate whether systems changes followed appropriate requirements or whether they resulted in the production of reliable financial information.

- Obtained explanations from officials from the Office of the Secretary of Defense regarding the metrics program and assessed whether the cited metrics are effective indicators of short-term financial management progress.

The data in this report are based on DFAS records. With the exception of travel card delinquency rates, we were unable to independently verify or audit the accuracy of these data. We performed our work from June 2002 to February 2003 in accordance with U.S. generally accepted government auditing standards. We received written comments on a draft of this report from the Under Secretary of Defense (Comptroller). These comments are presented and evaluated in the "Agency Comments and Our Evaluation" section and are reprinted in appendix II. We considered technical comments from the department and incorporated them as appropriate but did not reprint them. Staff making key contributions to this report were Rathi Bose, Steve Donahue, Diane Handley, Fred Jimenez, and Carolyn Voltz.
The Department of Defense (DOD) has historically been unable to accurately account for and record its disbursements. In March 2002, the DOD Comptroller cited metrics that showed dramatic reductions in payment recording errors (57 percent between October 2000 and October 2001), backlogs of commercial payments (41 percent between April and October 2001), and travel card payment delinquencies (34 percent for those individually billed and 86 percent for those centrally billed between January and December 2001). As a result, the Congress asked us to determine whether the cited reductions were (1) calculated using consistent definitions and methodologies, (2) properly supported, and (3) effective indicators of short-term financial management progress. The DOD Comptroller's metrics showing significant reductions in payment recording errors and in commercial and travel card payment delinquencies were, in general, based on definitions and methodologies that were either consistent with or better than those used for prior reporting on these issues. Although the methodology used to calculate two of the cited measures resulted in overstating the rates of improvement, our recalculation after correcting for the methodology errors still showed positive--although less dramatic--improvement trends. While we were able to verify the reductions in travel card delinquencies because the underlying data were available from an independent source, we could not verify the accuracy of the specific improvement percentages reported for payment recording errors and commercial payment delinquencies. DOD's archaic and nonintegrated systems either do not contain the transaction-level detail to support the completeness and accuracy of the metrics or they make it extremely onerous and time consuming for the staff to gather and reconcile the needed detail. 
However, we were able to verify that DOD has made numerous policy, procedure, and systems changes that support an overall trend toward improved performance in these areas. If they could be verified, some of the cited metrics could be effective indicators of short-term financial management progress. However, if considered alone, delinquency rates are not necessarily good indicators for centrally billed travel cards or commercial payments. Placing too much emphasis on paying bills promptly may tempt DOD staff to bypass important internal controls meant to ensure that the goods and services being paid for were properly authorized and actually received. Despite shortcomings, the cited metrics have focused DOD's attention on highly visible financial management problems. As shown below, recent metrics issued by the DOD Comptroller indicate continuing improvements.
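The improvement figures in these metrics follow the standard percent-reduction calculation. A minimal sketch (the dollar amounts below are hypothetical, since the underlying totals were not available for audit):

```python
def percent_reduction(before, after):
    """Percent reduction from a baseline value (positive = improvement)."""
    return 100.0 * (before - after) / before

# Hypothetical example: monthly payment recording errors falling from
# $2.00 billion to $0.86 billion would be reported as a 57 percent reduction.
print(round(percent_reduction(2.00, 0.86)))  # → 57
```

Because the percentage depends on both endpoints, verifying such a metric requires auditable detail behind both the baseline and the ending value, which is the gap the report identifies.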
Mercury enters the environment in various ways, such as through volcanic activity, coal combustion, and chemical manufacturing. As a toxic element, mercury poses ecological threats when it enters water bodies, where small aquatic organisms convert it into its highly toxic form—methylmercury. This form of mercury may then migrate up the food chain as predator species consume the smaller organisms. Fish contaminated with methylmercury may pose health threats to people who rely on fish as part of their diet. Mercury can harm fetuses and cause neurological disorders in children, resulting in, among other things, impaired cognitive abilities. The Food and Drug Administration and EPA recommend that expectant or nursing mothers and young children avoid eating swordfish, king mackerel, shark, and tilefish and limit consumption of other potentially contaminated fish. These agencies also recommend checking local advisories about recreationally caught freshwater and saltwater fish. In recent years, most states have issued advisories informing the public that concentrations of mercury have been found in local fish at levels of public health concern. Coal-fired power plants burn at least one of three primary coal types—bituminous, subbituminous, and lignite—and some plants burn a blend of these coals. Of all coal burned by power plants in the United States in 2004, DOE estimates that about 46 percent was bituminous, 46 percent was subbituminous, and 8 percent was lignite. The amount of mercury in coal and the relative ease of its removal depend on a number of factors, including the geographic location where it was mined and the chemical variation within and among coal types. Coal combustion releases mercury in oxidized, elemental, or particulate-bound form.
Oxidized mercury is more prevalent in the flue gas from bituminous coal combustion, and it is relatively easy to capture using some sulfur dioxide controls, such as wet scrubbers. Elemental mercury, more prevalent in the flue gas from combustion of lignite and subbituminous coal, is more difficult to capture with existing pollution controls. Particulate-bound mercury is relatively easy to capture in particulate matter control devices. In addition to mercury, coal combustion releases other harmful air pollutants, including sulfur dioxide and nitrogen oxides. EPA has regulated these pollutants since 1995 and 1996, respectively, through its program intended to control acid rain. Figure 1 shows various pollution controls that may be used at coal-fired power plants: selective catalytic reduction to control nitrogen oxides, wet or dry scrubbers to reduce sulfur dioxide, electrostatic precipitators and fabric filters to control particulate matter, and sorbent injection to reduce mercury emissions. From 2000 to 2009, DOE’s National Energy Technology Lab conducted field tests at operating power plants with different boiler configurations to develop mercury-specific control technologies capable of achieving high mercury emission reductions at the diverse fleet of U.S. coal-fired power plants. As a result, DOE now has comprehensive information on the effectiveness of sorbent injection systems using all coal types at a wide variety of boiler configurations. Most of these tests were designed to achieve mercury reductions of 50 to 70 percent while decreasing mercury reduction costs—primarily the cost of the sorbent. Thus, the results from the DOE test program may understate the mercury reductions that can be achieved by sorbent injection systems to some extent. 
For example, while a number of short-term tests achieved mercury reductions in excess of 90 percent, the amount of sorbent injection that achieved the reductions was often decreased during long-term tests to determine the minimum cost of achieving, on average, 70 percent mercury emission reductions. Under its mercury testing program, DOE initially tested the effectiveness of untreated carbon sorbents. On the basis of these results, we reported in 2005 that sorbent injection systems showed promising results but that they were not effective when used at boilers burning lignite and subbituminous coals. DOE went on to test the effectiveness of chemically treated sorbents—which can help convert the more difficult-to-capture mercury common in lignite and subbituminous coals to a more easily captured form—and achieved high mercury reduction across all coal types. Finally, DOE continued to test sorbent injection systems and to assess solutions to impacts on plant devices, structures, or operations that may result from operating these systems—called "balance-of-plant impacts." In 2008, DOE reported that the high performance observed during many of its field tests at a variety of configurations has given coal-fired power plant operators the confidence to begin deploying these technologies. Bills have been introduced in the prior and current Congress addressing mercury emissions from power plants. The bills have proposed specific limits on mercury emissions, such as not less than 90 percent reductions, and some have specified time frames for EPA to promulgate a MACT regulation limiting mercury emissions from power plants. For example, a bill introduced in this Congress would require EPA to promulgate a MACT standard for mercury from coal-fired power plants within a year of the bill's enactment.
In addition, some bills introduced the past few years—termed multipollutant bills—would have regulated sulfur dioxide, nitrogen oxides, and carbon dioxide emissions, in addition to mercury, from coal-fired power plants. Most would have required a 90 percent reduction—or similarly stringent limit—of mercury emissions, with the compliance deadlines varying from 2011 to 2015. One such bill currently before Congress would prohibit existing coal-fired power plants from exceeding an emission limit of 0.6 pounds of mercury per trillion British thermal units (BTUs) of heat input—a standard measure of the mercury content in coal, equivalent to approximately a 90 percent reduction—by January 2013. The managers of 14 coal-fired power plants reported to us that they currently operate sorbent injection systems on 25 boilers to meet the mercury emission reduction requirements of 4 states and several consent decrees and construction permits. Preliminary data show that these boilers have achieved, on average, reductions in mercury emissions of about 90 percent. Of note, all 25 boilers currently operating sorbent injection systems have met or surpassed their relevant regulatory mercury requirements, according to plant managers. For example: A 164 megawatt bituminous-fired boiler, built in the 1960s and operating a cold-side electrostatic precipitator and wet scrubber, exceeds its 90 percent reduction requirement—achieving more than 95 percent mercury emission reductions using chemically treated carbon sorbent. A 400 megawatt subbituminous-fired boiler, built in the 1960s and operating a cold-side electrostatic precipitator and a fabric filter, achieves a 99 percent mercury reduction using untreated carbon sorbent, exceeding its 90 percent reduction regulatory requirement.
A recently constructed 600 megawatt subbituminous-fired boiler operating a fabric filter, dry scrubber, and selective catalytic reduction system achieves an 85 percent mercury emission reduction using chemically treated carbon sorbent, exceeding its 83 percent reduction regulatory requirement. While mercury emissions reductions achieved with sorbent injection on a particular boiler configuration do not guarantee similar results at other boilers with the same configuration, the reductions achieved in deployments and tests provide important information for plant managers who must make decisions about pollution controls to reduce mercury emissions as more states’ mercury regulations become effective and as EPA develops its national mercury regulation. The sorbent injection systems currently used at power plants to reduce mercury emissions are operating on boiler configurations that are used at 57 percent of U.S. coal- fired power boilers. Further, when the results of 50 tests of sorbent injection systems at power plants conducted primarily as part of DOE’s or EPRI’s mercury control research and development programs are factored in, mercury reductions of at least 90 percent have been achieved at boiler configurations used at nearly three-fourths of coal-fired power boilers nationally. Some boiler configurations tested in the DOE program that are not yet included in commercial deployments follow: A 360 megawatt subbituminous-fired boiler with a fabric filter and a dry scrubber using a chemically treated carbon sorbent achieved a 93 percent mercury reduction. A 220 megawatt boiler burning lignite, equipped with a cold-side electrostatic precipitator, increased mercury reduction from 58 percent to 90 percent by changing from a combination of untreated carbon sorbent and a boiler additive to a chemically treated carbon sorbent. 
A 565 megawatt subbituminous-fired boiler with a fabric filter achieved mercury reductions ranging from 95 percent to 98 percent by varying the amount of chemically treated carbon sorbent injected into the system. As these examples of deployed and tested injection systems show, plants are using chemically treated sorbents and sorbent enhancement additives, as well as untreated sorbents. The DOE program initially used untreated sorbents, but during the past 6 years, the focus shifted to chemically treated sorbents and enhancement additives that were being developed. These more recent tests showed that using chemically treated sorbents and enhancement additives could achieve substantial mercury reductions for coal types that had not achieved these results in earlier tests with untreated sorbents. For example, injecting untreated sorbent reduced mercury by an average of 55 percent during a 2003 DOE test at a subbituminous-fired boiler. Recent tests using chemically treated sorbents and enhancement additives, however, have resulted in average mercury reductions of 90 percent for boilers using subbituminous coals. Similarly, recent tests on boilers using lignite reduced mercury emissions by roughly 80 percent, on average. The examples of substantial mercury reductions highlighted above also show that sorbent injection can be successful with both types of air pollution control devices that power plants use to reduce emissions of particulate matter. Specifically, regulated coal-fired power plants typically use either electrostatic precipitators or fabric filters for particulate matter control. The use of fabric filters—which are more effective at mercury emission reductions than electrostatic precipitators—at coal-fired power plants to reduce emissions of particulate matter and other pollutants is increasing, but currently less than 20 percent have them. 
Plant officials told us that they chose to install fabric filters along with 10 of the sorbent injection systems currently deployed to assist with mercury control—but that some of the fabric filters were installed primarily to comply with other air pollution control requirements. One plant manager, for example, told us that the fabric filter installed at the plant helps the sorbent injection system achieve higher levels of mercury emission reductions but that the driving force behind the fabric filter installation was to comply with particulate matter emission limits. Further, as another plant manager noted, fabric filters may provide additional benefits by limiting emissions of acid gases and trace metals, as well as by preserving fly ash—fine powder resulting from coal combustion—for sale for reuse. The successful deployments of sorbent injection technologies at power plants occurred around the time DOE concluded, on the basis of its tests, that these technologies were ready for commercial deployment. Funding for the DOE testing program has been eliminated. Regarding deployments to meet state requirements that will become effective in the near future, the Institute of Clean Air Companies reported that power plants had 121 sorbent injection systems on order as of February 2009. Importantly, mercury control technologies will not have to be installed on a number of coal-fired boilers to meet mercury emission reduction requirements because they already achieve high mercury reductions from their existing pollution control devices. EPA data indicate that about one-fourth of the industry may be currently achieving mercury reductions of 90 percent or more as a co-benefit of other pollution control devices. We found that of the 36 boilers currently subject to mercury regulation, 11 are relying on existing pollution controls to meet their mercury reduction requirements.
One plant manager told us their plant achieves 95 percent mercury reduction with a fabric filter for particulate matter control, a scrubber for sulfur dioxide control, and a selective catalytic reduction system for nitrogen oxides control. Other plants may also be able to achieve high mercury reduction with their existing pollution control devices. For example, according to EPA data, a bituminous-fired boiler with a fabric filter may reduce mercury emissions by more than 90 percent. While sorbent injection technology has been shown to be effective with all coal types and on boiler configurations at more than three-fourths of U.S. coal-fired power plants, DOE tests show that some plants may not be able to achieve mercury reductions of 90 percent or more with sorbent injection systems alone. For example:

- Sulfur trioxide—which can form under certain operating conditions or from using high sulfur bituminous coal—may limit mercury reductions because it prevents mercury from binding to carbon sorbents.

- Hot-side electrostatic precipitators reduce the effectiveness of sorbent injection systems. Installed on 6 percent of boilers nationwide, these particulate matter control devices operate at very high temperatures, which reduces the ability of mercury to bind to sorbents and be collected in the devices.

- Lignite, used by roughly 3 percent of boilers nationwide, has relatively high levels of elemental mercury—the most difficult form to capture. Lignite is found primarily in North Dakota and the Gulf Coast, the latter called Texas lignite. Mercury reduction using chemically treated sorbents and sorbent enhancement additives on North Dakota lignite has averaged about 75 percent—less than reductions using bituminous and subbituminous coals. Less is known about Texas lignite because few tests have been performed using it. However, a recent test at a plant burning Texas lignite achieved an 83 percent mercury reduction.
Boilers that may not be able to achieve 90 percent emissions reductions with sorbent injection alone, and some promising solutions to the challenges they pose, are discussed in appendix I. Further, EPRI is continuing research on mercury controls at power plants that should help to address these challenges. In some cases, however, plants may need to pursue a strategy other than sorbent injection to achieve high mercury reductions. For example, officials at one plant decided to install a sulfur dioxide scrubber—designed to reduce both mercury and sulfur dioxide—after sorbent injection was found to be ineffective. This approach may become more typical as power plants comply with the Clean Air Interstate Rule and court-ordered revisions to it, which EPA is currently developing, and as some plants add air pollution control technologies required under consent decrees. EPA air strategies group officials told us that many power plants will be installing devices—fabric filters, scrubbers, and selective catalytic reduction systems—that are typically associated with high levels of mercury reduction, which will likely reduce the number of plants requiring alternative strategies for mercury control. Finally, mercury controls have been tested on about 90 percent of the boiler configurations at coal-fired power plants. The remaining 10 percent include several with devices, such as selective catalytic reduction devices for nitrogen oxides control and wet scrubbers for sulfur dioxide control, which are often associated with high levels of mercury emission reductions. The cost to meet current regulatory requirements for mercury reductions has varied depending in large part on decisions regarding compliance with other pollution reduction requirements.
For example, while sorbent injection systems alone have been installed on most boilers that must meet mercury reduction requirements—at a fraction of the cost of other pollution control devices—fabric filters have also been installed on some boilers to assist in mercury capture or to comply with particulate matter requirements, according to plant officials we interviewed. The costs of purchasing and installing sorbent injection systems and monitoring equipment have averaged about $3.6 million for the 14 coal-fired boilers that use sorbent injection systems alone to reduce mercury emissions (see table 1). For these boilers, the cost ranged from $1.2 to $6.2 million. By comparison, on the basis of EPA estimates, the average cost to purchase and install a wet scrubber for sulfur dioxide control, absent monitoring system costs, is $86.4 million per boiler—the estimates range from $32.6 to $137.1 million. EPA’s estimate of the average cost to purchase and install a selective catalytic reduction device to control nitrogen oxides is $66.1 million, ranging from $12.7 to $127.1 million. Capital costs can increase significantly if fabric filters are also purchased to assist in mercury emission reductions or as part of broader emission reduction requirements. For example, plants installed fabric filters at another 10 boilers for these purposes. On the five boilers where plant officials reported also installing a fabric filter specifically designed to assist the sorbent injection system in mercury emission reductions, the average reported capital cost for both the sorbent injection system and fabric filter was $15.8 million per boiler—the costs ranged from $12.7 million to $24.5 million. Importantly, these boilers have uncommon configurations—ones that, as discussed earlier, DOE tests showed would need additional control devices to achieve high mercury reductions.
Table 1 shows the per-boiler capital costs of sorbent injection systems depending on whether fabric filters are also installed primarily to reduce mercury emissions. For the five boilers where plant officials reported installing fabric filters along with sorbent injection systems largely to comply with requirements to control other forms of air pollution, the average reported capital cost for both the sorbent injection system and fabric filter was $105.9 million per boiler, ranging from $38.2 million to $156.2 million per boiler. We did not determine what portion of these costs would appropriately be allocated to the cost of reducing mercury emissions. Decisions to purchase such fabric filters will likely be driven by the broader regulatory landscape affecting plants in the near future, such as requirements for particulate matter, sulfur dioxide, and nitrogen oxides reductions, as well as EPA’s upcoming MACT regulation for coal-fired power plants that, according to EPA officials, will regulate mercury as well as other air toxics emitted from these plants. Regarding operating costs, plant managers said that annual operating costs associated with sorbent injection systems consist almost entirely of the cost of the sorbent itself. In operating sorbent injection systems, sorbent is injected continuously into the boiler exhaust gas to bind to mercury passing through the gas. The rate of injection is related to, among other things, the level of mercury emission reduction required to meet regulatory requirements and to the amount of mercury in the coal used. For the 18 boilers with sorbent injection systems for which power plants provided sorbent cost data, the average annualized cost of sorbent was $674,000. Plant engineers often adjust the injection rate of the sorbent to capture more or less mercury—the more sorbent in the exhaust gas, for example, the higher the likelihood that more mercury will bind to it.
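Because plant managers report that operating cost is essentially the cost of the sorbent, an annual sorbent bill can be roughed out from the injection rate and the volume of flue gas treated. The sketch below is purely illustrative: the flue gas flow, capacity factor, and sorbent price are assumed values, not figures reported by any of the plants we reviewed.

```python
# Rough annualized sorbent cost. All parameter values below are
# illustrative assumptions, not data from the plants reviewed.

def annual_sorbent_cost(injection_rate_lb_per_mmacf, flue_gas_flow_acfm,
                        capacity_factor, sorbent_price_per_lb):
    minutes_per_year = 365 * 24 * 60
    # Flue gas treated per year, in millions of actual cubic feet (MMacf).
    gas_volume_mmacf = (flue_gas_flow_acfm * minutes_per_year
                        * capacity_factor) / 1e6
    sorbent_lb = injection_rate_lb_per_mmacf * gas_volume_mmacf
    return sorbent_lb * sorbent_price_per_lb

# Assumed: 2 lb/MMacf injection rate, 600,000 acfm of flue gas,
# 80 percent capacity factor, $1 per pound of sorbent.
cost = annual_sorbent_cost(2.0, 600_000, 0.80, 1.00)
```

With these assumed inputs the estimate lands near $500,000 per year, the same order of magnitude as the $674,000 average annualized sorbent cost reported for the 18 boilers above.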
Some plant managers told us that they have recently been able to decrease their sorbent injection rates, thereby reducing costs, while still complying with relevant requirements. Specifically, a recently constructed plant burning subbituminous coal successfully used sorbent enhancement additives to considerably reduce its rate of sorbent injection—resulting in significant savings in operating costs when compared with its original expectations. Plant managers at other plants reported that they have injected sorbent at relatively higher rates because of regulatory requirements that mandate a specific injection rate. One state’s consent decree, for example, requires plants to operate their sorbent injection systems at an injection rate of 5 pounds per million actual cubic feet. Among the 19 boilers for which plant managers provided operating data, the average injection rate was 4 pounds per million actual cubic feet; rates ranged from 0.5 to 11.0 pounds per million actual cubic feet. For those plants that installed a sorbent injection system alone—at an average cost of $3.6 million—to meet mercury emissions requirements, the cost to purchase, install, and operate sorbent injection and monitoring systems represents 0.12 cents per kilowatt hour, or a potential 97 cent increase in the average residential consumer’s monthly electricity bill. How, when, and to what extent consumers’ electric bills will reflect the capital and operating costs power companies incur for mercury controls depends in large measure on market conditions and the regulatory framework in which the plants operate. Power companies in the United States are generally divided into two broad categories: (1) those that operate in traditionally regulated jurisdictions where cost-based rate setting still applies (rate-regulated) and (2) those that operate in jurisdictions where companies compete to sell electricity at prices that are largely determined by supply and demand (deregulated). 
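The 0.12-cents-per-kilowatt-hour figure above can be approximated by annualizing the average capital cost and spreading it, together with operating costs, over a boiler's annual generation. This is a back-of-the-envelope sketch, not GAO's methodology; the 15-year recovery period, 7 percent rate, 150 megawatt boiler size, 70 percent capacity factor, and 800 kWh of monthly residential usage are all assumptions chosen for illustration.

```python
# Back-of-the-envelope mercury-control cost per kilowatt-hour.
# Boiler size, capacity factor, financing terms, and household usage
# are assumed for illustration only.

def annualize(capital, rate=0.07, years=15):
    # Standard capital recovery factor.
    return capital * rate / (1 - (1 + rate) ** -years)

capital = 3_600_000        # average reported capital cost ($)
sorbent = 674_000          # average reported annual sorbent cost ($)
annual_cost = annualize(capital) + sorbent

# Annual generation: 150 MW (150,000 kW) at a 70 percent capacity factor.
generation_kwh = 150_000 * 8760 * 0.70
cents_per_kwh = annual_cost / generation_kwh * 100

# Monthly bill impact for an assumed 800 kWh/month household.
monthly_increase_cents = cents_per_kwh * 800
```

Under these assumptions the result is roughly 0.12 cents per kilowatt-hour and a monthly increase on the order of 90 cents, consistent with the figures cited above.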
Rate-regulated power companies are generally allowed by regulators to set rates that will recover allowable costs, including a return on invested capital. Minnesota, for example, passed a law in 2006 allowing power companies to seek regulatory approval for recovering the cost of anticipated state-required reductions in mercury emissions in advance of the regulatory schedule for rate increase requests. One utility in the state submitted a plan for the installation of sorbent injection systems to reduce mercury emissions at two of its plants at a cost of $4.4 million and $4.5 million, respectively, estimating a rate increase of 6 to 10 cents per month for customers of both plants. For power companies operating in competitive markets where wholesale electricity prices are not regulated, prices are largely determined by supply and demand. Generally speaking, market pricing does not guarantee full cost recovery to suppliers, especially in the short run. Of the 25 boilers using sorbent injection systems to comply with a requirement to control mercury emissions, 21 are in jurisdictions where full cost recovery is not guaranteed through regulated rates. In addition to the costs discussed above, some plant managers told us they have incurred costs associated with balance-of-plant impacts. The issue of particular concern relates to fly ash—fine particulate ash resulting from coal combustion that some power plants sell for commercial uses, including concrete production, or donate for beneficial purposes, such as backfill. According to DOE, about 30 percent of the fly ash generated by coal-fired power plants was sold in 2005; 216 plants sold some portion of their fly ash. Most sorbents increase the carbon content of fly ash, which may render it unsuitable for some commercial uses.
Specifically, some plant managers told us that they have incurred additional costs because of lost fly ash sales and additional costs to store fly ash that was previously either sold or donated for beneficial re-use. For the eight boilers with installed sorbent injection systems to meet mercury emissions requirements for which plants reported actual or estimated fly-ash related costs, the average net cost reported by plants was $1.1 million per year. Advances in sorbent technologies that have reduced costs at some plants also offer the potential to preserve the market value of fly ash. For example, at least one manufacturer offers a concrete-friendly sorbent to help preserve fly ash sales—thus reducing potential fly ash storage and disposal costs. Additionally, a recently constructed plant burning subbituminous coal reported that it had successfully used sorbent enhancement additives to reduce its rate of sorbent injection from 2 pounds to less than one-half pound per million actual cubic feet—resulting in significant savings in operating costs and enabling it to preserve the quality of its fly ash for reuse. Other potential advances include refining sorbents through milling and changing the sorbent injection sites. Specifically, in testing, milling of sorbents has, for some configurations, improved their efficiency in reducing mercury emissions—that is, reduced the amount of sorbent needed—and also helped minimize negative impact on fly ash re-use. Also, in testing, some vendors have found that injecting sorbents on the hot side of air preheaters can decrease the amount of sorbent needed to achieve desired levels of mercury control. Some plant managers reported other balance-of-plant impacts associated with sorbent injection systems, such as ductwork corrosion and small fires in the particulate matter control devices. Plant engineers told us these issues were generally minor and have been resolved. 
For example, two plants experienced corrosion in the ductwork following the installation of their sorbent injection systems. One plant manager resolved the problem by purchasing replacement parts at a cost of $4,500. The other plant manager told us the corrosion problem remains unresolved but that it is primarily a minor engineering challenge not impacting plant operations. Four plant managers reported fires in the particulate matter control devices; plant engineers have generally solved this problem by emptying the ash from the collection devices more frequently. Overall, despite minor balance-of-plant impacts, most plant managers said that the sorbent injection systems at their plants are more effective than they originally expected. EPA’s decisions on key regulatory issues will impact the overall stringency of its mercury emissions limit. Specifically, the data EPA decides to use will affect (1) the mercury emission reductions calculated for “best performers,” from which a proposed emission limit is derived, (2) whether EPA will establish varying standards for the three coal types, and (3) how EPA’s standard will take into account varying operating conditions. Each of these issues could affect the stringency of the MACT standard the agency proposes. In addition, the format of the standard—whether it limits the mercury content of coal being burned (an input standard) or of emissions from the stack (an output standard)—may affect the stringency of the MACT standard the agency proposes. Finally, the vacatur of the Clean Air Mercury Rule has delayed for a number of years the continuous emissions monitoring that would have started in 2009 at most coal-fired power plants. Consequently, data on mercury emissions from coal-fired power plants and the resolution of some technical issues with monitoring systems have both been delayed. 
Obtaining data on mercury emissions and identifying the “best performers”—defined as the 12 percent of coal-fired power plant boilers with the lowest mercury emissions—is a critical initial step in the development of a MACT standard for mercury. EPA may set one standard for all power plants, or it may establish subcategories to distinguish among classes, types, and sizes of plants. For example, in its 2004 proposed mercury MACT, EPA established subcategories for the types of coal most commonly used by power plants. Once the average mercury emissions of the best performers are established for power plants—or for subcategories of power plants—EPA accounts for variability in the emissions of the best performers in its MACT standard(s). EPA’s method for accounting for variability has generally resulted in MACT standards that are less stringent than the average emission reductions achieved by the best performers. To identify the best performers, EPA typically collects emissions data from a sample of plants representative of the U.S. coal-fired power industry through a process known as an information collection request. Information collection requests are required when an agency collects data from 10 or more nongovernmental parties. According to EPA officials, this data collection process, which requires Office of Management and Budget (OMB) review and approval, typically takes from 8 months to 1 year. EPA’s schedule for issuing a proposed rule and a final rule has not yet been established as the agency is currently in negotiations with litigants about these time frames. In developing the rule, EPA told us it could decide to use data from its 1999 information collection request, data from commercial deployments and DOE tests to augment its 1999 data, or implement a new information collection request for mercury emissions.
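Once emissions data are in hand, the "best performers" step itself is mechanical: rank boilers by emissions, take the lowest-emitting 12 percent, and average their performance. A minimal sketch with hypothetical data (ranking here is by percent reduction, so the best performers are the boilers with the highest reductions):

```python
# Identify the "best performers" -- the 12 percent of boilers with the
# lowest mercury emissions -- and average their reductions.
# The sample data below are hypothetical.
import math

# (boiler id, percent mercury emission reduction)
boilers = [("A", 97), ("B", 95), ("C", 92), ("D", 90),
           ("E", 85), ("F", 80), ("G", 72), ("H", 60),
           ("I", 55), ("J", 40), ("K", 30), ("L", 20),
           ("M", 15), ("N", 10), ("O", 5), ("P", 2)]

n_best = max(1, math.ceil(len(boilers) * 0.12))   # top 12 percent
best = sorted(boilers, key=lambda b: b[1], reverse=True)[:n_best]
avg_best = sum(reduction for _, reduction in best) / n_best
```

EPA then adjusts this average for variability, a step that, as the text notes, has generally produced standards less stringent than the best performers' average.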
On July 2, 2009, EPA published a draft information collection request in the Federal Register, providing a 60-day public comment period on the draft questionnaire to industry prior to submitting this information collection request to OMB for review and approval. Our analysis of EPA’s 1999 data, as well as more current data from deployments and DOE tests, shows that newer data may have several implications for the stringency of the standard. First, the average emission reductions of the best performers, from which the standard is derived, may be higher. Our analysis of EPA’s 1999 data shows an average mercury emission reduction of nearly 91 percent for the best performers. In contrast, using more current commercial deployment and DOE test data, as well as data on co-benefit mercury reductions collected in 1999, the best performers demonstrate an average mercury emission reduction of nearly 96 percent. The 1999 data do not reflect the significant and widespread mercury reductions achieved by sorbent injection systems. Further, EPA’s 2004 proposed MACT standards for mercury were substantially lower than the 1999 average emission reduction of the best performers because of variability in mercury emissions among the top performers, as discussed in more detail below. Second, more current information that reflects mercury control deployments and DOE tests may make the rationale EPA used to create MACT standards for different subcategories less compelling to the agency now. In its 2004 proposed MACT, using 1999 data, EPA proposed separate standards for three subcategories of coal used at power plants, largely because the co-benefit capture of mercury from subbituminous- and lignite-fired boilers was substantially less than from bituminous-fired boilers and resulted in higher average mercury emissions for best performers using these coal types.
Specifically, the 1999 data EPA used for its 2004 MACT proposal showed that best performers achieved average emission reductions of 97 percent for bituminous, 71 percent for subbituminous, and 45 percent for lignite. In contrast, more current data show that using sorbent injection systems with all coal types has achieved at least 90 percent mercury emission reductions in most cases. Finally, using more current emissions data in setting the mercury standard may mean that accounting for variability in emissions will not have as significant an effect as it did in the 2004 proposed MACT—thereby lowering the MACT standard—because the current data already reflect variability. In its 2004 proposed MACT, EPA explained that its 1999 data, obtained from the average of short-term tests (three samples taken over a 1- to 2-day period), did not necessarily reveal the range of emissions that would be found over extended periods of time or under a full range of operating conditions they could reasonably anticipate. EPA thus extrapolated longer-term variability data from the short-term data, and on the basis of these calculations, proposed MACT standards equivalent to a 76 percent reduction in mercury emissions for bituminous coal, a 25 percent reduction for lignite, and a 5 percent reduction for subbituminous coal—20 to 66 percentage points lower than the average of what the best performers achieved for each coal type. However, current data may eliminate the need for such extrapolation. Data from commercial applications of sorbent injection systems, DOE field tests, and co-benefit mercury reductions show that mercury reductions well in excess of 90 percent have been achieved over periods ranging from more than 30 days in field tests to more than a year in commercial applications. Mercury emissions measured over these periods may more accurately reflect the variability in mercury emissions that plants would encounter over the range of operating conditions.
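One way regulators operationalize long-term measurement is a rolling or periodic average of daily emissions. A minimal sketch of a 30-day rolling-average compliance check, using hypothetical daily reduction figures that swing between 88 and 96 percent:

```python
# Rolling-average compliance check. Daily figures are hypothetical and
# deliberately oscillate to mimic day-to-day variability in coal mercury.

def rolling_average(daily_reductions, window=30):
    return [sum(daily_reductions[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(daily_reductions))]

daily = [88 if day % 2 else 96 for day in range(60)]

# Individual days dip below a 90 percent requirement, but every 30-day
# average sits at 92 percent, so the boiler complies.
compliant = all(avg >= 90 for avg in rolling_average(daily))
```

This is the sense in which long-term averaging accommodates natural swings in coal mercury content without relaxing the overall reduction requirement.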
Along these lines, at least 15 states with mercury emission limits require long-term averaging—ranging from 1 month to 1 year—to account for variability. According to the manager of a power plant operating a sorbent injection system, long-term averaging of mercury emissions takes into account the “dramatic swings” in mercury emissions from coal that may occur. He told us that while mercury emissions can vary on a day-to-day basis, this plant has achieved 94 percent mercury reduction, on average, over the last year. Similarly, another manager of a power plant operating a sorbent injection system told us the amount of mercury in the coal they use “varies widely, even from the same mine.” Nonetheless, the plant manager reported that this plant achieves its required 85 percent mercury reduction because the state allows averaging mercury emissions on a monthly basis to take into account the natural variability of mercury in the coal. In 2004, EPA’s proposed mercury MACT included two types of standards to limit mercury emissions: (1) an output-based standard for new coal-fired power plants and (2) a choice between an input- or output-based standard for existing plants. Input-based standards establish emission limits on the basis of pounds of mercury per trillion British thermal units (BTUs) of heat input; output-based standards, on the other hand, establish emission limits on the basis of pounds of mercury per megawatt hour of electricity produced. These standards are referred to as absolute limits. For the purposes of setting a standard, absolute emissions limits can be correlated to percent reductions. For example, EPA’s 2004 proposed standards for bituminous, lignite, and subbituminous coal (2, 9.2, and 5.8 pounds per trillion BTUs, respectively) are equivalent to mercury emissions reductions of 76, 25, and 5 percent, respectively, based on nationwide averages of the mercury content in coal.
During EPA’s 2004 MACT development process, state and local agency stakeholders, as well as environmental stakeholders, generally supported output-based emission limits; industry stakeholders generally supported having a choice between an emission limit and a percent reduction. EPA must now decide in what format it will set its mercury MACT standard(s). Input-based limits can have some advantages for coal-fired power plants. For example, input-based limits can provide more flexibility to older, less efficient plants because they allow boilers to burn as much coal as needed to produce a given amount of electricity, as long as the amount of mercury per trillion BTUs does not exceed the level specified by the standard. However, input-based limits may allow some power plants to emit more mercury per megawatt hour than output-based limits. Under an output-based standard, mercury emissions cannot exceed a specific level per megawatt-hour of electricity produced—efficient boilers, which use less coal, will be able to produce more electricity than inefficient boilers under an output-based standard. Moreover, under an output-based limit, less efficient boilers may have to, for example, increase boiler efficiency or switch to a lower mercury coal. Thus, output-based limits provide a regulatory incentive to enhance both operating efficiency and mercury emission reductions. We found that at least 16 states have established a format for regulating mercury emissions from coal-fired power plants. Eight states allow plants to meet either an emission limit or a percent reduction, three require an emission limit, four require percent reductions, and one state requires plants to achieve whatever mercury emissions reductions—percent reduction or emission limit—are greater. On the basis of our review of these varying regulatory formats, we conclude that to be meaningful, a standard specifying a percent reduction should be correlated to an absolute limit.
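The relationship between the two formats is unit arithmetic: an input-based limit in pounds per trillion BTUs maps to an output-based figure in pounds per megawatt-hour through the boiler's heat rate. In the sketch below, the 10,000 and 12,000 Btu-per-kilowatt-hour heat rates are assumed, roughly typical values used for illustration:

```python
# Convert an input-based mercury limit (lb per trillion Btu of heat
# input) to the emissions it allows per MWh at a given heat rate.

def input_limit_to_lb_per_mwh(limit_lb_per_tbtu, heat_rate_btu_per_kwh):
    btu_per_mwh = heat_rate_btu_per_kwh * 1000   # 1 MWh = 1,000 kWh
    return limit_lb_per_tbtu * btu_per_mwh / 1e12

# EPA's 2004 proposed bituminous limit of 2 lb/trillion Btu, applied to
# an efficient and a less efficient boiler (assumed heat rates):
efficient = input_limit_to_lb_per_mwh(2.0, 10_000)
less_efficient = input_limit_to_lb_per_mwh(2.0, 12_000)
```

Under the same input-based limit, the less efficient boiler may emit 20 percent more mercury per megawatt-hour, which is the incentive logic behind output-based limits rewarding efficiency.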
When used alone, percent reduction standards can limit mercury emissions reductions. For example, in one state, mercury reductions are measured against “historical” coal-mercury content data, rather than current coal-mercury content data. If plants are required to reduce mercury by, for example, 90 percent compared to historical coal data, but coal used in the past had higher levels of mercury than the plants have been using more recently, then actual mercury emission reductions would be less than 90 percent. In addition, percent reduction requirements do not provide an incentive for plants burning high mercury coal to switch coals or pursue more effective mercury control strategies because it is easier to achieve a percent reduction requirement with high mercury coal than with lower mercury coals. Similarly, a combination standard that gives regulated entities the option to choose either a specified emission limit or a percent reduction might limit actual mercury emission reductions. For example, a plant burning coal with a mercury content of 15 pounds per trillion BTUs that may choose between meeting an absolute limit of 0.7 pounds of mercury per trillion BTUs or a 90 percent reduction could achieve the percent reduction while emitting twice the mercury that would be allowed under the specified absolute limit. As discussed above, for the purposes of setting a standard, a required absolute limit, which provides a consistent benchmark for plants to meet, can be correlated to a percent reduction. For example, according to EPA’s Utility Air Toxic MACT working group, a 90 percent mercury reduction based on national averages of mercury in coal equates to an emission limit of approximately 0.7 pounds per trillion BTUs. 
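The compliance-path comparison above can be made concrete using the figures from the text: coal containing 15 pounds of mercury per trillion BTUs, a 90 percent reduction option, and a 0.7 pound-per-trillion-BTU absolute limit.

```python
# Emissions allowed under a percent-reduction path versus an absolute
# limit, for high-mercury coal. Figures come from the example in the text.

inlet_mercury = 15.0        # lb mercury per trillion Btu in the coal
percent_reduction = 0.90
absolute_limit = 0.7        # lb per trillion Btu (approx. equivalent to a
                            # 90% reduction at national average coal)

emitted_via_percent_path = inlet_mercury * (1 - percent_reduction)
ratio = emitted_via_percent_path / absolute_limit
```

The 90 percent path leaves 1.5 pounds per trillion BTUs in the stack gas, more than twice what the absolute limit allows, which is the sense in which a choice-based standard can limit actual reductions.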
For bituminous coal, a 90 percent reduction equates to a limit of 0.8 pounds per trillion BTUs; for subbituminous coal, a 90 percent reduction equates to a limit of 0.6 pounds per trillion BTUs; and for lignite, a 90 percent reduction equates to a limit of 1.2 pounds per trillion BTUs. EPA’s now-vacated Clean Air Mercury Rule required most coal-fired power plants to conduct continuous emissions monitoring for mercury—and a small percentage of plants with low mercury emissions to conduct periodic testing—beginning in 2009. State and federal government and nongovernmental organization stakeholders told us they support reinstating the monitoring requirements of the Clean Air Mercury Rule. In fact, in a June 2, 2008, letter to EPA, the National Association of Clean Air Agencies requested that EPA reinstate the mercury monitoring provisions that were vacated in February 2008 because, among other things, the monitoring requirements are important to state agencies with mercury reduction requirements. This association for state clean air agencies also said the need for federal continuous emissions monitoring requirements is especially important in states that cannot adopt air quality regulations more stringent than those of the federal government. However, EPA officials told us the agency has not determined how to reinstate continuous emissions monitoring requirements for mercury at coal-fired power plants outside of the MACT rulemaking process. As a result, continuous monitoring of mercury emissions from coal-fired power plants may continue to be delayed for years. Under the Clean Air Mercury Rule, the selected monitoring methodology for each power plant was to be approved by EPA through a certification process. For its part, EPA was to develop a continuous emissions monitoring systems (CEMS) certification process and approve protocols for quality control and assurance. However, when the Clean Air Mercury Rule was vacated, EPA put its CEMS certification process on hold. 
Effective emissions monitoring assists facilities and regulators in ensuring compliance with regulations and can also help facilities better understand the efficiency of their processes and operations. Monitoring mercury emissions is more complex than monitoring other pollutants, such as nitrogen oxides and sulfur dioxide, which are measured in parts per million. Mercury, for example, is emitted at lower levels of concentration than other pollutants and is measured in parts per billion—it is like “trying to find a needle in a haystack,” according to one plant engineer. Consequently, mercury CEMS require more time to install and set up than CEMS for other pollutants, and, according to plant engineers using them, they involve a steeper learning curve in getting these relatively complex monitoring systems up and running properly. EPA plans to release interim quality control protocols for mercury CEMS in July 2009. In our work, we found that these systems are installed on 16 boilers at power plants for monitoring operations or for compliance reporting. Our preliminary data show that for regulated coal-fired boilers, plant managers reported that their mercury CEMS were online from 62 percent to 99 percent of the time. When these systems were offline, it was mainly because of failed system integrity checks or routine parts failure. Some plant engineers told us that CEMS are accurate at measuring mercury, but others said that these systems are “several years away” from commercial readiness. However, according to an EPA Clean Air Markets Division official, while some technical monitoring issues remain, mercury CEMS are sufficiently reliable to determine whether plants are complying with their relevant state mercury emissions regulations. Data from commercially deployed sorbent injection systems show that substantial mercury reductions have been achieved at a relatively low cost.
Importantly, these results, along with test results from DOE’s comprehensive research and development program, suggest that substantial mercury emission reductions can likely be achieved at most coal-fired power plants in the United States. Other strategies, including blending coal and using other technologies, exist for the small number of plants with configuration types that were not able to achieve significant mercury emissions reductions with sorbent injection alone. Whether power plants will install sorbent injection systems or pursue multipollutant control strategies will likely be driven by the broader regulatory context in which they operate, such as requirements for sulfur dioxide and nitrogen oxides reductions in addition to mercury, and the associated costs to comply with all pollution reduction requirements. Nonetheless, for many plants, sorbent injection systems appear to be a cost-effective technology for reducing mercury emissions. For other plants, sorbent injection may represent a relatively inexpensive bridging technology—that is, one that is available for immediate use to reduce only mercury emissions but that may be phased out over time with the addition of more costly multipollutant controls. Moreover, some plants emit small amounts of mercury without mercury-specific controls because their existing controls for other air pollutants also effectively reduce mercury emissions. In fact, while many power companies currently subject to mercury regulation have installed sorbent injection systems to achieve required reductions, about one-third of them are relying on existing pollution control devices to meet the requirements.
As EPA proceeds with its rulemaking process to regulate hazardous air pollutants from coal-fired power plants, including mercury, it will likely find that current data on commercially deployed sorbent injection systems and plants that achieve high mercury reductions from their existing pollution control devices justify a more stringent mercury emission standard than was last proposed in 2004. More significant mercury emission reductions are actually being achieved by the current best performers than was the case in 1999 when such information was last collected—and similar results can likely be achieved by most plants across the country at relatively low cost. Mr. Chairman, this concludes my prepared statement. We expect to complete our ongoing work by October 2009. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact me at (202) 512-3841 or [email protected]. Key contributors to this statement were Christine Fishkin (Assistant Director), Nathan Anderson, Mark Braza, Antoinette Capaccio, Nancy Crothers, Philip Farah, Mick Ray, and Katy Trenholme. DOE tests show that some plants may not be able to achieve mercury reductions of 90 percent or more with sorbent injections alone. Specifically, the tests identified three factors that can impact the effectiveness of sorbent injection systems: sulfur trioxide interference, using hot-side precipitators, and using lignite. These factors are discussed below, along with some promising solutions to the challenges they pose. Sulfur trioxide interference. High levels of sulfur trioxide gas may limit mercury emission reductions by preventing some mercury from binding to carbon sorbents. 
Using an alkali injection system in conjunction with sorbent injection can effectively lessen sulfur trioxide interference. Depending on the cause of the sulfur trioxide interference—which can stem from using a flue gas conditioning system, a selective catalytic reduction system, or high sulfur bituminous coal—additional strategies may be available to ensure high mercury reductions: Flue gas conditioning systems, used on 13 percent of boilers nationwide, improve the performance of electrostatic precipitators by injecting a conditioning agent, typically sulfur trioxide, into the flue gas to make the gas more conducive to capture in electrostatic precipitators. Mercury control vendors are working to develop alternative conditioning agents that could be used instead of sulfur trioxide in the conditioning system to improve the performance of electrostatic precipitators without jeopardizing mercury emission reductions using sorbent injection. Selective catalytic reduction systems, a common control device for nitrogen oxides, are used by about 20 percent of boilers nationwide. Although selective catalytic reduction systems often improve mercury capture, in some instances these devices may lead to sulfur trioxide interference when sulfur in the coal is converted to sulfur trioxide gas. Newer selective catalytic reduction systems often have improved catalytic controls, which can minimize the conversion of sulfur to sulfur trioxide gas. High sulfur bituminous coal—defined as having a sulfur content of at least 1.7 percent sulfur by weight—may also lead to sulfur trioxide interference in some cases. As many as 20 percent of boilers nationwide may use high sulfur coal, according to 2005 DOE data; however, the number of coal boilers using high sulfur bituminous coal is likely to decline in the future as more stringent sulfur dioxide regulations take effect. 
Plants can consider using alkali-based sorbents, such as Trona, which adsorb sulfur trioxide gas before it can interfere with the performance of sorbent injection systems. Plants that burn high sulfur coal can also consider blending their fuel to include some portion of low sulfur coal. In addition, according to EPA, power companies are likely to have or to install scrubbers for controlling sulfur dioxide at plants burning high sulfur coal and are more likely to use the scrubbers, rather than sorbent injection systems, to also reduce mercury emissions. Hot-side electrostatic precipitators. Installed on 6 percent of boilers nationwide, these particulate matter control devices operate at very high temperatures, which reduce the incidence of mercury binding to sorbents for collection in particulate matter control devices. However, at least two promising techniques have been identified in tests and commercial deployments at configuration types with hot-side electrostatic precipitators. First, 70 percent mercury emission reductions were achieved with specialized heat-resistant sorbents during DOE testing. Moreover, one of the 25 boilers currently using a sorbent injection system has a hot-side electrostatic precipitator and uses a heat-resistant sorbent. Although plant officials are not currently measuring mercury emissions for this boiler, the plant will soon be required to achieve mercury emission reductions equivalent to 90 percent. Second, in another DOE test, three 90 megawatt boilers—each with a hot-side electrostatic precipitator— achieved more than 90 percent mercury emission reductions by installing a shared fabric filter in addition to a sorbent injection system, a system called TOXECON™.
According to plant officials, these three units currently use this system to comply with a consent decree and achieved 94 percent mercury emission reductions during the third quarter of 2008, the most recent compliance reporting period when the boilers were operating under normal conditions. Lignite. North Dakota and Texas lignite, the fuel source for roughly 3 percent of boilers nationwide, have relatively high levels of elemental mercury—the most difficult form to capture. Overall, tests on boilers using lignite reduced mercury emissions by roughly 80 percent, on average. For example, four long-term DOE tests were conducted at coal units burning North Dakota lignite using chemically-treated sorbents. Mercury emission reductions averaged 75 percent across the tests. The best result was achieved at a 450 megawatt boiler burning North Dakota lignite and having a fabric filter and a dry scrubber—mercury reductions of 92 percent were achieved when chemically-treated sorbents were used. In addition, two long-term tests were conducted at plants burning Texas lignite with a 30 percent blend of subbituminous coal. With coal blending, these boilers achieved average mercury emission reductions of 82 percent. Specifically, one boiler, with an electrostatic precipitator and a wet scrubber, achieved mercury reductions in excess of 90 percent when burning the blended fuel. The second boiler achieved a 74 percent reduction in long-term testing. However, 90 percent was achieved in short-term tests using a higher sorbent injection rate. Although DOE conducted no tests on plants burning purely Texas lignite, one power company is currently conducting sorbent injection tests at a plant burning 100 percent Texas lignite and is achieving promising results. In the most recent round of testing, this boiler achieved mercury removal of 83 percent using untreated carbon and a boiler additive in conjunction with the existing electrostatic precipitator and wet scrubber.
The 491 U.S. coal-fired power plants are the largest unregulated industrial source of mercury emissions nationwide, annually emitting about 48 tons of mercury--a toxic element that poses health threats, including neurological disorders in children. In 2000, the Environmental Protection Agency (EPA) determined that mercury emissions from these sources should be regulated, but the agency has not set a maximum achievable control technology (MACT) standard, as the Clean Air Act requires. Some power plants, however, must reduce mercury emissions to comply with state laws or consent decrees. After managing a long-term mercury control research and development program, the Department of Energy (DOE) reported in 2008 that systems that inject sorbents--powdery substances to which mercury binds--into the exhaust from boilers of coal-fired power plants were ready for commercial deployment. Tests of sorbent injection systems, the most mature mercury control technology, were conducted on a variety of coal types and boiler configurations--that is, on boilers using different air pollution control devices. This testimony provides preliminary data from GAO's ongoing work on (1) reductions achieved by mercury control technologies and the extent of their use at coal-fired power plants, (2) the cost of mercury control technologies in use at these plants, and (3) key issues EPA faces in regulating mercury emissions from power plants. GAO obtained data from power plants operating sorbent injection systems. Commercial deployments and 50 DOE and industry tests of sorbent injection systems have achieved, on average, 90 percent reductions in mercury emissions. These systems are being used on 25 boilers at 14 coal-fired plants, enabling them to meet state or other mercury emission requirements--generally 80 to 90 percent reductions. The effectiveness of sorbent injection is largely affected by coal type and boiler configuration. 
Importantly, the substantial mercury reductions using these systems commercially and in tests were achieved with all three main types of coal and on boiler configurations that exist at nearly three-fourths of U.S. coal-fired power plants. While sorbent injection has been shown to be widely effective, DOE tests suggest that other strategies, such as blending coals or using other technologies, may be needed to achieve substantial reductions at some plants. Finally, sorbent injection has not been tested on a small number of boiler configurations, some of which achieve high mercury removal with other pollution control devices. The cost of the mercury control technologies in use at power plants has varied, depending in large part on decisions regarding compliance with other pollution reduction requirements. The costs of purchasing and installing sorbent injection systems and monitoring equipment have averaged about $3.6 million for the 14 coal-fired boilers operating sorbent systems alone to meet state requirements. This cost is a fraction of the cost of other pollution control devices. When plants also installed a fabric filter device primarily to assist the sorbent injection system in mercury reduction, the average cost of $16 million is still relatively low compared with that of other air pollution control devices. Annual operating costs of sorbent injection systems, which often consist almost entirely of the cost of the sorbent itself, have been, on average, about $640,000. In addition, some plants have incurred other costs, primarily due to lost sales of a coal combustion byproduct--fly ash--that plants have sold for commercial use. The carbon in sorbents can render fly ash unusable for certain purposes. Advances in sorbent technologies that have reduced sorbent costs at some plants offer the potential to preserve the market value of fly ash. EPA's decisions on key regulatory issues will have implications for the effectiveness of its mercury emissions standard. 
For example, the data EPA decides to use will impact (1) the emissions reductions it starts with in developing its regulation, (2) whether it will establish varying standards for the three main coal types, and (3) how the standard will take into account a full range of operating conditions at the plants. These issues can affect the stringency of the MACT standard EPA proposes. Data from EPA's 1999 power plant survey do not reflect commercial deployments or DOE tests of sorbent injection systems and could support a standard well below what has recently been broadly achieved. Moreover, the time frame for proposing the standard may be compressed because of a pending lawsuit. On July 2, 2009, EPA announced that it planned to conduct an information collection request to update existing emission data, among other things, from power plants.
Since the early 1980s, NWS has been modernizing its observing, information processing, and communications systems to improve the accuracy, timeliness, and efficiency of weather forecasts and warnings. The $4.5 billion modernization includes four major systems—AWIPS, the Next Generation Geostationary Operational Environmental Satellites (GOES-Next), the Next Generation Weather Radars (NEXRAD), and the Automated Surface Observing System (ASOS). GOES-Next, NEXRAD, and ASOS form the foundation of NWS’ observing infrastructure, watching the weather from earth and space. The two orbiting GOES-Next satellites each provide as many as 1,200 digital weather images daily to each forecast office, compared to fewer than 100 from their predecessors. The almost 150 NEXRADs are to blanket the continental United States and, using Doppler radar technology, allow forecasters to see inside weather events to detect motion and dynamics that were invisible to pre-NEXRAD radars. ASOS, which consists of nearly 900 separate ground-based sensor sets, is to provide a portfolio of weather readings, such as temperature and visibility, from more locations and with greater frequency than has been done by human observers. AWIPS is to function as the “central nervous system” of a modernized NWS. That is, it is to be the information processing and display system that forecasters use to integrate, analyze, and graphically view the immense number of weather observations and products that form the basis for each day’s weather and river forecasts and warnings. It is also to be the national communications infrastructure for NWS’ many forecasting offices and centers, connecting them not only to each other but also linking them to the many users of forecasts and warnings throughout the nation.
Using AWIPS’ advanced processing, display, and communications capabilities, NWS expects to fully capitalize on its new observing systems for the first time. Without AWIPS, the potential of these observing systems cannot be fully realized because the current Automation of Field Operations and Services (AFOS) computer and communications system and the associated weather office display systems cannot accommodate the mountainous data streams that these observing systems now provide. For example, the current system can only accept two satellite images an hour. With AWIPS, GOES-Next images are to be received in real time, meaning that up to eight images can be received and displayed each hour. According to NWS, AWIPS is to contribute to improvements in the accuracy and timeliness of forecasts and warnings as well as streamlining its operations and downsizing its organization. All told, NWS expects that the combined pieces of the modernization will result in the number of NWS field offices dropping from 254 to 119 and the number of NWS staff falling from 5,100 to 4,678. AWIPS is expected to improve forecaster productivity by allowing forecasters to view disparate data sets in an integrated fashion, perform an assortment of scientific computations on these data sets, and graphically display and interact with these data sets. Currently, such activities are generally performed manually as data from the various observing systems are displayed on multiple screens and assimilated by the forecaster. For example, today, when forecasters want to combine radar and satellite images to view weather pattern movements, the images must be manually overlaid on transparencies. With AWIPS, such integration is expected to occur automatically on the AWIPS workstation with a few simple mouse clicks or keystroke commands. In short, AWIPS is to result in forecasters spending less time physically and mentally manipulating data and more time practicing meteorology.
The National Oceanic and Atmospheric Administration (NOAA), which is NWS’ parent agency, began AWIPS in the mid-1980s to replace AFOS. Since then, NWS has invested considerable time and effort in analyzing and defining AWIPS’ requirements, effectively involving users, as we reported in 1993, through hands-on experience with prototypes. In 1992, NOAA awarded the AWIPS development contract to the Planning Research Corporation (PRC). Because of the contractor’s failure to deliver an acceptable AWIPS design, NWS renegotiated the contract in 1995, basically assuming responsibility for development of all hydrology and meteorology application software and assigning the contractor responsibility for delivering the hardware and systems software and for integrating the entire system. To fulfill its responsibility, NWS established joint NWS and contractor applications software (for example, the software that executes atmospheric and hydrological numerical and statistical models, manipulates satellite and radar graphics, etc.) development teams. NWS’ current plans call for building and integrating AWIPS in seven increments. Thus far, NWS and the contractor have installed a very limited version of AWIPS, the first increment, at three sites to gain some experience in developing, testing, implementing, and operating a limited capability AWIPS. This limited version is also intended to validate selected AWIPS architectural features, such as satellite broadcasts and the central communications and system monitoring “hub.” NWS has begun development of the second increment. NWS estimates that AWIPS will cost $525 million to fully develop and deploy. Deployment of less than the full AWIPS capability to NWS field offices and national centers is now scheduled to begin in 1996. Full AWIPS deployment is scheduled to begin in 1999. AWIPS consists of about 22,000 requirements that have been grouped into about 450 higher-level capabilities. 
These capabilities are described in the AWIPS System/Segment Specification, commonly referred to as the “A-Spec,” which relates about three-fourths of the capabilities to five broad functional areas. The five functional areas are (1) communications, (2) monitoring and control, (3) processing, (4) display and interaction, and (5) data management. The first two functional areas constitute what NWS calls the AWIPS network segment (that is, the national communications infrastructure). The latter three are referred to by NWS as the AWIPS site segment (that is, the functionality applicable to AWIPS sites). Appendix II provides examples of AWIPS capabilities for each functional area. The remaining one-fourth relate to such capabilities as AWIPS’ performance, security, availability, and flexibility. Figure 1 depicts this AWIPS hierarchy. Investments in information technology, like AWIPS, should be justified on the basis of whether or not all planned system capabilities will make a clear difference in advancing mission efficiency and effectiveness (for example, improved forecasts and warnings). NWS has not done this for AWIPS. According to NWS officials, they have not explicitly linked either AWIPS requirements or higher-level capabilities to mission improvements, and they have no plans to do so because they claim that other requirements reviews, analyses, and validation activities already provide implicit justification for all AWIPS’ proposed capabilities. We disagree. We carefully reviewed these other activities and while we found them to be valuable for different reasons, they were neither intended to nor do they demonstrate that AWIPS’ full array of capabilities will improve NWS mission effectiveness. As a result, NWS risks unnecessarily spending money on AWIPS capabilities that do not satisfy any of its mission improvement goals—better forecasts, fewer field offices, and fewer staff. 
Office of Management and Budget (OMB) Circular A-130, Management of Federal Information Resources, requires agencies to create and maintain management and technical frameworks that define linkages between mission needs and information technology capabilities. OMB Circular A-109, “Major System Acquisitions,” expands on A-130, requiring federal agencies to make system design decisions based on a review of proposed system functional and performance capabilities contributions to mission needs and program objectives. In effect, agencies developing computer systems, like AWIPS, are to show that proposed system capabilities will produce some mission effectiveness or efficiency gain, like more reliable and timely forecasts or office and staffing reductions. These requirements are consistent with our recent findings on how leading public and private organizations tie technology investments to measurable mission improvements. We found that successful organizations’ information system investment decisions are tied to explicit and quantifiable mission improvements. By doing so, these organizations know that investing in system requirements will make a difference in mission outcomes. Ensuring that proposed system capabilities are justified before expensive software development begins requires validating (that is, proving) that system requirements are anchored in user needs, which in turn are grounded in positive mission impacts. To do less increases the chances of spending money on capabilities that, even though desired by users, will not advance the organization’s effectiveness or efficiency. Accordingly, software development guidance advocates assuring traceability from derived system requirements, designs, and implementations to both original user needs and mission needs. Approaches to validating proposed capabilities include performance modeling and prototyping. 
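The traceability the guidance calls for can be pictured as a simple chain check: each system capability should trace to a user need, and each user need to a measurable mission improvement. The sketch below (in Python, with entirely hypothetical capability and need names, not items from the AWIPS "A-Spec") flags capabilities that trace to no stated mission goal:

```python
# Minimal sketch of requirements-to-mission traceability. All entries
# are hypothetical illustrations, not actual AWIPS requirements.

capability_to_need = {
    "display-radar-overlay": "view integrated radar/satellite imagery",
    "archive-raw-telemetry": "retain raw sensor feeds",
}
need_to_mission_goal = {
    "view integrated radar/satellite imagery": "more timely warnings",
    # "retain raw sensor feeds" traces to no stated mission goal
}

def unjustified(capabilities, needs):
    """Return capabilities whose user need maps to no mission goal."""
    return [cap for cap, need in capabilities.items()
            if needs.get(need) is None]

flagged = unjustified(capability_to_need, need_to_mission_goal)
print(flagged)  # ['archive-raw-telemetry']
```

In this sketch, "archive-raw-telemetry" is the kind of capability the report argues should not be built until it is validated against a mission improvement goal.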
According to NWS, planned AWIPS capabilities are necessary and will contribute to NWS’ goals of weather forecast and warning improvements, field office consolidation, and staff reductions. However, NWS officials were unable to produce any analysis or associated documentation to validate this claim. Instead, they presented the results of past AWIPS requirements analysis and definition activities and discussed ongoing requirements validation activities that, while useful in their own right, do not justify AWIPS capabilities on the basis of mission improvements. Each of these requirements review activities is discussed below. The first AWIPS requirements review began in 1984, when NWS initiated the Denver AWIPS Risk Reduction and Requirements Evaluation (DARE) program, an extensive AWIPS prototyping effort to analyze and refine meteorology requirements. Later, NWS augmented the DARE prototype; together, these prototyping efforts addressed roughly half of AWIPS’ 22,000 requirements. The half that were addressed equates to over three-fourths of AWIPS’ total lines of code. Moreover, even those AWIPS capabilities that were part of the prototypes, with one limited exception, were not explicitly linked to measurable improvements in NWS’ mission effectiveness during these prototyping activities. The one exception, discussed below, is an NWS analysis linking a proposed AWIPS capability to forecaster productivity. DARE activities occurred at the Norman, Oklahoma, weather office to emulate the operations of a future, modernized weather office. According to NWS, the emulation examined such things as the AWIPS user interface for displaying NEXRAD products, AWIPS’ integration of national weather products (e.g., satellite imagery) with local data (e.g., NEXRAD products), and future weather office staffing levels. However, we reviewed the results of these emulation activities and found no evidence validating AWIPS-specific capabilities on the basis of stated NWS mission goals of better forecasts, fewer weather offices, or less staff.
For example, one report concluded that the AWIPS prototype provided capabilities for viewing NEXRAD data that forecasters found “useful,” but does not show the mission outcome of having the capabilities. Another report that was AWIPS-specific concluded that a proposed AWIPS capability known as Interactive Computer Worded Forecast (ICWF), which is later to be replaced by the AWIPS Forecast Preparation System (AFPS), actually decreased rather than increased forecaster productivity and should not be deployed in its current form. In 1994, in response to the contractor’s earlier mentioned failure to produce an AWIPS design, NWS undertook what it refers to as a “functional decomposition” of the 22,000 AWIPS requirements. In effect, NWS placed these requirements into about 450 capability categories. These categories are the foundation of the AWIPS “A-Spec,” which, as mentioned earlier, is the high-level system specification that further combines most of the 450 capabilities into five broad functional areas. (See figure 1.) Clearly, the development of the “A-Spec” was a valuable undertaking in that it translated the 22,000 AWIPS requirements into a smaller, simpler, more understandable set of high-level functions. However, this translation did not, nor was it intended to, link requirements or capabilities to mission improvements. Beginning in 1994 and continuing through today, NWS also has been reviewing all AWIPS requirements to identify any that are, in its words, “archaic,” meaning that they are technologically obsolete, duplicative of other AWIPS requirements, or in need of modification to comply with AWIPS’ system design. Thus far, NWS has reviewed about 900 of the 1,000 requirements it dubbed as potentially extraneous and chose to eliminate about 600. The remaining 100 still need to be evaluated. Again, this requirements “scrub” has been and continues to be worthwhile. 
However, NWS’ above-cited criteria do not address whether the proposed requirement or capability will produce a measurable mission improvement. At the same time NWS is completing its review of the aforementioned “archaic” requirements, the joint NWS/contractor teams established to develop AWIPS’ application software are examining requirements one last time before building each software module. According to the AWIPS Software Development Plan, the requirements for the AWIPS scientific applications were in many instances written several years ago and may include some that are now obsolete. However, the NWS official responsible for overseeing the joint development teams stated that the teams’ process for reviewing the requirements does not attempt to validate requirements back to mission improvements. Also, the official stated that the focus of the reviews is on reaching a common understanding on how best to proceed in developing the software module. NWS clearly needs a new, modern system to support its current operations and allow it to take advantage of the vast data streams now available through its new observing systems. However, whether all of the 450 capabilities it plans for AWIPS are necessary to accomplish this is unknown because the process it has followed in developing AWIPS, while providing for traceability between proposed system capabilities and user-expressed needs, does not include validating that these capabilities explicitly and measurably advance NWS’ mission efficiency and effectiveness, which NWS has defined in terms of improved forecasts, fewer field offices, and reduced staffing levels. Validating system capabilities to mission outcomes is vital because it confirms prior to costly software development that proposed system capabilities are grounded in and will contribute to NWS’ mission goals.
Unless NWS expands ongoing AWIPS requirements review activities to include demonstrating, either quantitatively or qualitatively, that proposed capabilities measurably advance NWS’ mission abilities, it risks spending money to develop capabilities that are not justified. We recommend that the Secretary of Commerce direct the NOAA Assistant Administrator for Weather Services to (1) expand ongoing AWIPS requirements review activities to include validation that proposed capabilities are justified on the basis of mission impact and (2) not implement any of those capabilities that are not validated. At a minimum, such validation should include analyses of data and factual accounts from past and ongoing AWIPS prototype experiences that link those proposed capabilities to stated mission improvement goals. On January 24, 1996, we discussed a draft of this report with NOAA and NWS officials, including the NOAA Associate Administrator for Weather Services, the Deputy Assistant Administrator for Operations, and the Deputy Assistant Administrator for Modernization. In general, these officials did not agree with the report’s conclusions and recommendations, reiterating the NWS positions that were in our draft report. In particular, they stated that extensive AWIPS requirements validation activities have occurred and are ongoing. They also stated that only AWIPS capabilities that are essential to NWS’ mission are being pursued, and that their inability to prove that mission-based requirements validation activities were performed is not sufficient to conclude that AWIPS is not needed. They promised to provide additional documentation to show that AWIPS proposed capabilities are grounded in mission impacts. We agree that extensive validation activities have occurred and are still ongoing, and we give NWS credit for these activities in the report. 
Unfortunately, NWS’ validation activities have only dealt with a part of the validation equation and have not validated AWIPS capabilities to its stated mission outcomes of better forecasts, fewer field offices, and fewer staff. This is completely at odds with our recent findings on how leading public and private sector organizations base successful technology investments on whether they produce meaningful improvements in the cost, quality, and timeliness of product and service delivery. Further, we neither state nor imply that AWIPS is not needed. Rather, we are saying that NWS is spending hundreds of millions of dollars without knowing whether all AWIPS capabilities will contribute to its stated reasons for investing in the system (improving forecasts and reducing field offices and staffing levels). Restated, while we do not question the need to replace AFOS, we do question whether AWIPS, with all the capabilities that NWS currently envisions it providing, should be that replacement. Unless NWS addresses this question, it risks spending money for capabilities that do not advance its mission performance. Fortunately, NWS has the opportunity to perform this validation activity as part of already ongoing and planned requirements reviews. We strongly encourage NWS to take advantage of this opportunity. We reviewed the additional documentation that NWS officials provided and have included it in the report as further evidence of NWS’ thorough validation of AWIPS capabilities to user needs. However, this documentation does not show that AWIPS capabilities are anchored in mission improvements. We have incorporated other comments made by the officials in the report where appropriate. We are sending copies of this report to the Secretary of Commerce, the Director of the Office of Management and Budget, and interested congressional committees. Copies will also be made available to others upon request. 
Please call me at (202) 512-6240 if you or your staff have any questions concerning this report. Other major contributors are listed in appendix III. The objective of our review was to determine whether NWS’ process for developing AWIPS has demonstrated that all proposed system capabilities will contribute to promised modernization outcomes—improved forecasts, fewer weather offices, or reduced staffing levels. To determine this, we interviewed program officials and reviewed system development documentation to document past and ongoing steps to validate AWIPS requirements. In particular, we reviewed analyses of AWIPS’ prototyping efforts, memoranda from the 1992 rebaselining of the AWIPS requirements, the System/Segment Specification for the National Weather Service Advanced Weather Interactive Processing System, and the Requirements Traceability Document for the AWIPS Hydrometeorological Computer Software Configuration Item. We also sought program officials’ explanations of how AWIPS requirements are tied to and will result in improved forecasts, weather office closings, and staff reductions. Concerning ongoing AWIPS requirements reviews, we interviewed NWS staff currently reviewing AWIPS requirements to determine the purpose of the reviews and the criteria being used to assess the requirements. In addition, we interviewed the NWS official overseeing the NWS/contractor teams developing the AWIPS software modules to learn what validation steps and criteria the teams are employing. We provided a draft of this report to the Department of Commerce for comment. On January 24, 1996, we obtained oral comments from NOAA and NWS officials. These comments have been incorporated in the report as appropriate. We performed our work at the AWIPS program office, and at NOAA and NWS headquarters offices in Silver Spring, Maryland, from August 1995 through January 1996 in accordance with generally accepted government auditing standards.
This appendix provides examples of AWIPS capabilities for each of the five functional areas.
— Acquire data automatically when permitted by the observation systems.
— Acquire data through external interfaces at the National Meteorological Center segment, including polar orbiter data, lightning data, and NEXRAD base and derived products.
— Acquire data from network segment external interfaces, including formatted GOES-Next products, NEXRAD summary and winds data, surface and upper air observations, and data from GOES-Next data collection platforms.
— Distribute data among AWIPS sites.
— Request data (excluding NEXRAD products and satellite imagery) from another AWIPS site.
— Specify the distribution control parameters, including data destinations, data to be distributed, data prioritization, and destinations required to acknowledge receipt of a high-priority product.
— Disseminate data when requested by an external user.
— Disseminate hazardous weather products designated by the user to external users automatically.
— Provide NWS with dedicated, electronic access to the network control facility.
— Monitor the integrity and timeliness of data and products that are acquired or disseminated over a central interface.
— Notify users when degradations and malfunctions of site equipment and communications interfaces are determined.
— Provide an orderly shutdown upon detection of a system failure.
— Control the display of information on workstations at other AWIPS sites.
— Provide the capability for users to remotely install software at another site.
— Provide an interactive, graphical method to allow the user to define two unique alert areas for monitoring NEXRAD data.
— Spatially transform a point and grid of points from one map projection, coordinate system, and grid definition to another by interpolation.
— Execute one-dimensional numerical cloud models for forecasting cloud top heights, vertical velocity, and hail sizes.
— Execute a numerical model to forecast icing potential for aircraft.
— Execute a simplified dam break channel flow model.
— Compute tide and water level heights and departures.
— Compute extraterrestrial radiation parameters.
— Generate combined reflectivity/velocity products from NEXRAD data.
— Produce three-dimensional image perspective displays.
— Perform image inversions on a pixel-by-pixel basis.
— Perform image-sharpening and edge-enhancement on images.
— Display the hydrometeorological field on cross-section and time-section plots using contours, plotted values, and wind symbols.
— Produce graphical pilot weather briefing displays, including the depiction of the current and forecast conditions along the flight route plotted on a cross-section context background.
— Simultaneously display up to at least eight data windows on each workstation monitor.
— Toggle between the components of a combined image.
— Zoom in and out on displayed image and graphics products with zoom ratios up to 8:1.
— Generate color slides, prints, and transparencies of displayed data.
— Step frame-by-frame through an animation loop.
— Edit elements of a displayed graphic and its attributes on a workstation image/graphics monitor.
— Retrieve data stored locally.
— Specify retrieval criteria, such as all temperatures above a certain threshold value.
— Store and retrieve hydrometeorological (for example, satellite data, observational data), cartographic (for example, geopolitical boundaries, topography), site management (for example, region information, maintenance activities), and event data (for example, systems errors, performance parameters).
— Archive data at the network segment and all site segments.
Rona B. Stillman, Chief Scientist for Computers and Communications
Randolph C. Hite, Assistant Director
Keith A. Rhodes, Technical Assistant Director
David A. Powner, Evaluator-in-Charge
Robert C. Reining, Information Systems Analyst
Colleen M. Phillips, Information Systems Analyst
The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015. Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the National Weather Service's (NWS) Advanced Weather Interactive Processing System (AWIPS), focusing on whether the proposed AWIPS capabilities will improve weather forecasts and reduce staffing levels and the number of weather offices. GAO found that NWS: (1) needs to replace the outdated systems that its field offices use and has developed 22,000 AWIPS requirements to support 450 capabilities to achieve its mission of improving forecasts and reducing staffing levels and the number of weather offices; (2) may be spending money unnecessarily on capabilities that do not contribute to its mission, since it has not justified whether proposed improvements to AWIPS are necessary to advance its mission efficiently and effectively; and (3) is developing a process to ensure that AWIPS requirements are not duplicative or obsolete, but the process does not trace all planned capabilities to stated mission improvement goals.
You are an expert at summarizing long articles. Proceed to summarize the following text: When we refer to consumer advocacy groups, we are referring to groups that advocate on behalf of consumers and patients. its safety and effectiveness. Class I includes devices with the lowest risk (e.g., tongue depressors, reading glasses, forceps), while class III includes devices with the highest risk (e.g., breast implants, coronary stents). Almost all class I devices and some class II devices (e.g., mercury thermometers, certain adjustable hospital beds) are exempt from premarket notification requirements. Most class III device types are required to obtain FDA approval through the PMA process, the most stringent of FDA’s medical device review processes. The remaining device types are required to obtain FDA clearance or approval through either the 510(k) or PMA processes. If eligible, a 510(k) is filed when a manufacturer seeks a determination that a new device is substantially equivalent to a legally marketed device known as a predicate device. In order to be deemed substantially equivalent (i.e., cleared by FDA for marketing), a new device must have the same technological characteristics and intended use as the predicate device, or have the same intended use and different technological characteristics but still be demonstrated to be as safe and effective as the predicate device without raising new questions of safety and effectiveness. Most device submissions filed each year are 510(k)s. For example, of the more than 13,600 device submissions received by FDA in FYs 2008 through 2010, 88 percent were 510(k)s. The medical device performance goals were phased in during the period covered by MDUFMA (the FYs 2003 through 2007 cohorts) and were updated for MDUFA. 
Under MDUFA, FDA’s goal is to complete the review process for 90 percent of the 510(k)s in a cohort within 90 days of submission (known as the Tier 1 goal) and to complete the review process for 98 percent of the cohort within 150 days (the Tier 2 goal). (See table 1 for the 510(k) performance goals for the FYs 2003 through 2011 cohorts.) FDA may take any of the following actions on a 510(k) after completing its review: issue an order declaring the device substantially equivalent; issue an order declaring the device not substantially equivalent; or advise the submitter that the 510(k) is not required (i.e., the product is not regulated as a device or the device is exempt from premarket notification requirements). Each of these actions ends the review process for a submission. A sponsor’s withdrawal of a submission also ends the review process. Alternatively, FDA may “stop the clock” on a 510(k) review by sending a letter asking the sponsor to submit additional information (known as an AI letter). This completes a review cycle but does not end the review process. The clock will resume (and a new review cycle will begin) when FDA receives a response from the sponsor. As a result, FDA may meet its 510(k) performance goals even if the time to final decision (FDA review time plus time spent waiting for the sponsor to respond to FDA’s requests for additional information) is longer than the time frame allotted for the performance goal. For example, a sponsor might have submitted a 510(k) on March 1, 2009, to start the review process. If FDA sent an AI letter on April 1, 2009 (after 31 days on the clock), the sponsor provided a response on June 1, 2009 (after an additional 61 days off the clock), and FDA issued a final decision on June 11, 2009 (10 more days on the clock), then the FDA review time counted toward the MDUFA performance goals would be 41 days (FDA’s on-the-clock time).
FDA would have met both the Tier 1 (90-day) and Tier 2 (150-day) time frames for that device even though the total number of calendar days (on- and off-the-clock) from beginning the review to a final decision was 102 days. (See table 2 for a comparison of FDA review time and time to final decision.) FDA tracks the time to final decision and reports on it in the agency’s annual reports to Congress on the medical device user fee program. A PMA is filed when a device is not substantially equivalent to a predicate device or has been classified as a class III PMA device (when the risks associated with the device are considerable). The PMA review process is the most stringent type of medical device review process required by FDA, and user fees are much higher for PMAs than for 510(k)s. PMAs are designated as either original or expedited. FDA considers a device eligible for expedited review if it is intended to (a) treat or diagnose a life-threatening or irreversibly debilitating disease or condition and (b) address an unmet medical need. FDA assesses all medical device submissions to determine which are appropriate for expedited review, regardless of whether a company has identified its device as a potential candidate for this program. To meet the MDUFA goals, FDA must complete its review of 60 percent of the original PMAs in a cohort within 180 days of submission (Tier 1) and 90 percent within 295 days (Tier 2). For expedited PMAs, 50 percent of a cohort must be completed within 180 days (Tier 1) and 90 percent within 280 days (Tier 2). (See table 3 for the PMA performance goals for the FYs 2003 through 2011 cohorts.) The various actions FDA may take during its review of a PMA are the following: major deficiency letter; approvable letter; not approvable letter; approval order; and denial order. The major deficiency letter is the only one of these actions that does not end the review process for purposes of determining whether FDA met the MDUFA performance goal time frame for a given submission.
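The on-clock accounting illustrated by the worked example above can be sketched in a few lines of code. This is a minimal illustration of the bookkeeping, not FDA's actual tracking system; the event labels and function name are invented for the sketch. A clock-stopping letter (an AI letter for a 510(k), a major deficiency letter for a PMA) pauses the count, and the sponsor's response resumes it.

```python
from datetime import date

def review_times(events):
    """Compute (on_clock_days, calendar_days) from an ordered event list.
    Each event is (date, kind), where kind is 'submit', 'stop' (FDA sends
    an AI or major deficiency letter), 'resume' (sponsor responds), or
    'decision' (FDA issues a final decision)."""
    on_clock = 0
    clock_start = submitted = decided = None
    for day, kind in events:
        if kind == 'submit':
            submitted = clock_start = day
        elif kind == 'stop':
            on_clock += (day - clock_start).days  # pause the review clock
            clock_start = None
        elif kind == 'resume':
            clock_start = day                     # a new review cycle begins
        elif kind == 'decision':
            on_clock += (day - clock_start).days
            decided = day
    return on_clock, (decided - submitted).days

# The worked example: submitted March 1, 2009; AI letter April 1;
# sponsor response June 1; final decision June 11.
timeline = [
    (date(2009, 3, 1), 'submit'),
    (date(2009, 4, 1), 'stop'),
    (date(2009, 6, 1), 'resume'),
    (date(2009, 6, 11), 'decision'),
]
print(review_times(timeline))  # prints "(41, 102)"
```

Running it on the example reproduces 41 on-clock days against a 102-day calendar span, which is how a submission can satisfy both the 90-day and 150-day goals even when the total elapsed time exceeds them.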
As with the AI letter in a 510(k) review, FDA can stop the clock during the PMA review process by sending a major deficiency letter (ending a review cycle) and resume it later upon receiving a response from the manufacturer. In contrast, taking one of the other four actions permanently stops the clock, meaning any further review that occurs is excluded from the calculation of FDA review time. In addition, the approval order and denial order are also considered final decisions and end FDA’s review of a PMA completely. A sponsor’s withdrawal of a submission also ends the review process. FDA’s review of medical device submissions has been discussed in recent congressional hearings, meetings between FDA and stakeholders about the medical device user fee program reauthorization, and published reports. In addition, in August 2010, FDA released reports which described the results of two internal assessments conducted by FDA of its medical device review programs. In January 2011, FDA released a plan of action that included 25 steps FDA intends to take to address the issues identified in these assessments. For FYs 2003 through 2010, FDA met all Tier 1 and Tier 2 performance goals for 510(k)s. In addition, FDA review time for 510(k)s decreased slightly during this period, but time to final decision increased substantially. The average number of review cycles and FDA’s requests for additional information for 510(k) submissions also increased during this period. FDA met all Tier 1 performance goals for the completed 510(k) cohorts that had Tier 1 goals in place. The percentage of 510(k)s reviewed within 90 days (the current Tier 1 goal time frame) exceeded 90 percent for the FYs 2005 through 2010 cohorts (see fig. 1.) Although the 510(k) cohort for FY 2011 was still incomplete at the time we received FDA’s data, FDA was exceeding the Tier 1 goal for those submissions on which it had taken action. 
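The tiered goal test applied throughout this section reduces to a share-within-deadline check on a cohort's on-clock review times. The sketch below is illustrative only; the cohort data are invented, and the thresholds shown are the current 510(k) goals (90 percent within 90 days for Tier 1, 98 percent within 150 days for Tier 2).

```python
def meets_goal(review_days, limit_days, required_share):
    """True if at least `required_share` of the cohort's FDA review
    times (on-the-clock days) fall within `limit_days`."""
    within = sum(1 for d in review_days if d <= limit_days)
    return within / len(review_days) >= required_share

# Hypothetical cohort of ten 510(k) on-the-clock review times, in days.
cohort = [45, 60, 72, 85, 88, 89, 90, 95, 120, 149]

print(meets_goal(cohort, 90, 0.90))   # Tier 1: prints "False" (7 of 10)
print(meets_goal(cohort, 150, 0.98))  # Tier 2: prints "True" (10 of 10)
```

Because the denominator is the whole cohort, FDA's reported performance for a fiscal year can shift as late submissions in that cohort are completed, which is why the report treats still-open cohorts separately.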
FDA’s performance varied for 510(k) cohorts prior to the years that the Tier 1 goals were in place but was always below the current 90 percent goal. FDA met the Tier 2 goals for all three of the completed cohorts that had Tier 2 goals in place. Specifically, FDA met the goal of reviewing 98 percent of submissions within 150 days for the FYs 2008, 2009, and 2010 cohorts (see fig. 2.) Additionally, although the 510(k) cohort for FY 2011 was still incomplete at the time we received FDA’s data, FDA was exceeding the Tier 2 goal for those submissions on which it had taken action. FDA’s performance for 510(k) cohorts prior to the years that the Tier 2 goals were in place was generally below the current 98 percent goal. While the average FDA review time for 510(k) submissions decreased slightly from the FY 2003 cohort to the FY 2010 cohort, the time to final decision increased substantially. Specifically, the average number of days FDA spent on the clock reviewing a 510(k) varied somewhat but overall showed a small decrease from 75 days for the FY 2003 cohort to 71 days for the FY 2010 cohort (see fig. 3). However, when we added off-the- clock time (where FDA waited for the sponsor to provide additional information) to FDA’s on-the-clock review time, the resulting time to final decision decreased slightly from the FY 2003 cohort to the FY 2005 cohort before increasing 61 percent—from 100 days to 161 days—from the FY 2005 cohort through the FY 2010 cohort. FDA officials told us that the only alternative to requesting additional information is for FDA to reject the submission. The officials stated that as a result of affording sponsors this opportunity to respond, the time to final decision is longer but the application has the opportunity to be approved. 
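The percentage changes cited above are simple relative changes computed from the cohort averages. A quick check using the figures just given (the helper function is ours, for illustration, not part of FDA's reporting):

```python
def pct_change(old, new):
    """Relative change from old to new, in percent."""
    return (new - old) / old * 100

# Time to final decision: 100 days (FY 2005 cohort) to 161 days (FY 2010).
print(round(pct_change(100, 161)))  # prints "61"

# FDA on-the-clock review time: 75 days (FY 2003) to 71 days (FY 2010).
print(round(pct_change(75, 71)))    # prints "-5"
```

The roughly 5 percent decline in on-clock review time alongside the 61 percent rise in time to final decision underscores that off-the-clock waiting for sponsor responses, not FDA's on-clock review, drives the divergence.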
Additionally, although the 510(k) cohort for FY 2011 was still incomplete at the time we received FDA’s data, the average FDA review time and time to final decision were lower in FY 2011 for those submissions on which it had taken action. The average number of review cycles per 510(k) increased substantially (39 percent) from FYs 2003 through 2010, rising from 1.47 cycles for the FY 2003 cohort to 2.04 cycles for the FY 2010 cohort (see fig. 4). In addition, the percentage of 510(k)s receiving a first-cycle decision of substantially equivalent (i.e., cleared by FDA for marketing) decreased from 54 percent for the FY 2003 cohort to 20 percent for the FY 2010 cohort, while the percentage receiving first-cycle AI requests exhibited a corresponding increase. (See fig. 5.) The average number of 510(k) submissions per year remained generally steady during this period. Although the 510(k) cohort for FY 2011 was still incomplete at the time we received FDA’s data, of the first-cycle reviews that had been completed, the percentage of submissions receiving a first-cycle decision of substantially equivalent was slightly higher than for the FY 2010 cohort (21.2 percent in FY 2011 compared with 20.0 percent in FY 2010). In addition, the percentage receiving a first-cycle AI request was lower (68.2 percent for FY 2011 compared with 77.0 percent for FY 2010). The percentage of 510(k)s that received a final decision of substantially equivalent also decreased in recent years—from a high of 87.9 percent for the FY 2005 cohort down to 75.1 percent for the FY 2010 cohort. The percentage of 510(k)s receiving a final decision of not substantially equivalent increased for each cohort from FYs 2003 through 2010, rising from just over 2.9 percent to 6.4 percent. (See fig. 6.) For FYs 2003 through 2010, FDA met most of the goals for original PMAs but fell short on most of the goals for expedited PMAs.
In addition, FDA review time and time to final decision for both types of PMAs generally increased during this period. Finally, the average number of review cycles increased for certain PMAs while the percentage of PMAs approved after one review cycle generally decreased. Since FY 2003, FDA met the original PMA performance goals for four of the seven completed cohorts that had goals in place, but met the goals for only two of the seven expedited PMA cohorts with goals. Specifically, FDA met its Tier 1 performance goals for original PMAs for all three of the completed original PMA cohorts that had such goals in place, with the percentage increasing from 56.8 percent of the FY 2007 cohort to 80.0 percent of the FY 2009 cohort completed on time. (See fig. 7.) While the FY 2010 and 2011 cohorts are still incomplete, FDA is exceeding the goals for those submissions on which it has taken action. FDA’s performance had declined steadily in the years immediately before implementation of these goals—from 67.1 percent of the FY 2000 cohort to 34.5 percent of the FY 2006 cohort completed within 180 days. FDA’s performance in meeting the Tier 2 performance goals for original PMAs fell short of the goal for three of the four completed cohorts during the years that these goals were in place. FDA met the MDUFMA Tier 2 performance goal (320 days) for the FY 2006 original PMA cohort but not for the FY 2007 cohort, and did not meet the MDUFA Tier 2 performance goal (295 days) for either of the completed cohorts (FYs 2008 and 2009) to which the goal applied (see fig. 8). While the FYs 2010 and 2011 original PMA cohorts are still incomplete, FDA is exceeding the MDUFA Tier 2 goals for those submissions on which it has taken action. FDA’s performance varied for original PMA cohorts prior to the years that the Tier 2 goals were in place but was always below the current goal to have 90 percent reviewed within 295 days.
For expedited PMAs, FDA met the Tier 1 and Tier 2 performance goals for only two of the seven completed cohorts for which the goals were in effect. FDA met the Tier 1 (180-day) goal for only one of the two completed cohorts during the years the goal has been in place, meeting the goal for the FY 2009 cohort but missing it for the FY 2008 cohort (see fig. 9). FDA’s performance varied for cohorts prior to the years that the Tier 1 expedited PMA goals were in place but was below the current goal of 50 percent in all but 1 year. FDA’s performance in meeting the Tier 2 performance goals for expedited PMAs fell short of the goal for four of the five completed cohorts during the years that these goals were in place. FDA met the MDUFMA Tier 2 performance goal (300 days) for the FY 2005 cohort but not for the FY 2006 or 2007 cohorts, and did not meet the MDUFA Tier 2 performance goal (280 days) for either of the completed cohorts (FY 2008 and 2009) to which the goal applied (see fig. 10). FDA’s performance varied for expedited PMA cohorts prior to the years that the Tier 2 goals were in place but always fell below the current goal to have 90 percent reviewed within 280 days. FDA review time for both original and expedited PMAs was highly variable but generally increased across our analysis period, while time to final decision also increased for original PMAs. Specifically, average FDA review time for original PMAs increased from 211 days in the FY 2003 cohort (the first year that user fees were in effect) to 264 days in the FY 2008 cohort, then fell in the FY 2009 cohort to 217 days (see fig. 11). When we added off-the-clock time (during which FDA waited for the sponsor to provide additional information or correct deficiencies in the submission), average time to final decision for the FYs 2003 through 2008 cohorts fluctuated from year to year but trended upward from 462 days for the FY 2003 cohort to 627 days for the FY 2008 cohort. 
The results for expedited PMAs fluctuated even more dramatically than for original PMAs—likely due to the small number of submissions (about 7 per year on average). Average FDA review time for expedited PMAs generally increased over the period that user fees have been in effect, from 241 days for the FY 2003 cohort to 356 days for the FY 2008 cohort, then fell to 245 days for the FY 2009 cohort (see fig. 12). The average time to final decision for expedited PMAs was highly variable but overall declined somewhat during this period, from 704 days for the FY 2003 cohort to 545 days for the FY 2009 cohort. The average number of review cycles per original PMA increased 27.5 percent from 1.82 in the FY 2003 cohort (the first year that user fees were in effect) to 2.32 cycles in the FY 2008 cohort. For expedited PMAs, the average number of review cycles per submission was fairly steady at approximately 2.5 cycles until the FY 2004 cohort, then peaked at 4.0 in the FY 2006 cohort before decreasing back to 2.5 cycles in the FY 2009 cohort. We found nearly identical trends when we examined the subsets of original and expedited PMAs that received a final FDA decision of approval. In addition, the percentage of original PMAs receiving a decision of approval at the end of the first review cycle fluctuated from FYs 2003 through 2009 but generally decreased—from 16 percent in the FY 2003 cohort to 9.8 percent in the FY 2009 cohort. Similarly, the percentage receiving a first-cycle approvable decision decreased from 12 percent in the FY 2003 cohort to 2.4 percent in the FY 2009 cohort. The percentage of expedited PMAs receiving first-cycle approval fluctuated from year to year, from 0 percent in 5 of the years we examined to a maximum of 25 percent in FY 2008. The percentage of original PMAs that ultimately received approval from FDA fluctuated from year to year but exhibited an overall decrease for the completed cohorts from FYs 2003 through 2008. 
Specifically, 74.0 percent of original PMAs in the FY 2003 cohort were ultimately approved, compared to 68.8 percent of the FY 2008 cohort. The percentage of expedited PMAs that were ultimately approved varied significantly from FYs 2003 through 2009, from a low of 0 percent in the FY 2007 cohort to a high of 100 percent in the FY 2006 cohort. The industry groups and consumer advocacy groups we interviewed noted a number of issues related to FDA’s review of medical device submissions. The most commonly mentioned issue raised by industry and consumer advocacy stakeholder groups was insufficient communication between FDA and stakeholders throughout the review process. Industry stakeholders also noted a lack of predictability and consistency in reviews and an increase in time to final decision. Consumer advocacy group stakeholders noted issues related to inadequate assurance of the safety and effectiveness of approved or cleared devices. FDA is taking steps that may address many of these issues. Most of the three industry and four consumer advocacy group stakeholders that we interviewed told us that there is insufficient communication between FDA and stakeholders throughout the review process. For example, four stakeholders noted that FDA does not clearly communicate to stakeholders the regulatory standards that it uses to evaluate submissions. In particular, industry stakeholders noted problems with the regulatory guidance documents issued by FDA. These stakeholders noted that these guidance documents are often unclear, out of date, and not comprehensive. Stakeholders also noted that after sponsors submit their applications to FDA, insufficient communication from FDA prevents sponsors from learning about deficiencies in their submissions early in FDA’s review. 
According to one of these stakeholders, if FDA communicated these deficiencies earlier in the process, sponsors would be able to correct them and would be less likely to receive a request for additional information. Two consumer advocacy group stakeholders also noted that FDA does not sufficiently seek patient input during reviews. One stakeholder noted that it is important for FDA to incorporate patient perspectives into its reviews of medical devices because patients might weigh the benefits and risks of a certain device differently than FDA reviewers. FDA has taken or plans to take several steps that may address issues with the frequency and quality of its communications with stakeholders, including issuing new guidance documents, improving the guidance development process, and enhancing interactions between FDA and stakeholders during reviews. For example, in December 2011, FDA released draft guidance about the regulatory framework, policies, and practices underlying FDA’s 510(k) review in order to enhance the transparency of this program. In addition, FDA implemented a tracking system and released a standard operating procedure (SOP) for developing guidance documents for medical device reviews to provide greater clarity, predictability, and efficiency in this process. FDA also created a new staff position to oversee the guidance development process. Additionally, according to an overview of recent FDA actions to improve its device review programs, FDA is currently enhancing its interactive review process for medical device reviews by establishing performance goals for early and substantive interactions between FDA and sponsors during reviews. This overview also notes that FDA is currently working with a coalition of patient advocacy groups on establishing mechanisms for obtaining reliable information on patient perspectives during medical device reviews. 
The three industry stakeholders that we interviewed also told us that there is a lack of predictability and consistency in FDA’s reviews of device submissions. For example, two stakeholders noted that review criteria sometimes change after a sponsor submits an application. In particular, one of these stakeholders noted that criteria sometimes change when the FDA reviewer assigned to the submission changes during the review. Additionally, stakeholders noted that there is sometimes inconsistent application of criteria across review divisions or across individual reviewers. Stakeholders noted that enhanced training for reviewers and enhanced supervisory oversight could help resolve inconsistencies in reviews and increase predictability for sponsors. In the two internal assessments of its device review programs that FDA released in August 2010, the agency found that insufficient predictability in its review programs was a significant problem. FDA has taken steps that may address issues with the predictability and consistency of its reviews of device submissions, including issuing new SOPs for reviews and enhancing training for FDA staff. For example, in June 2011, FDA issued an SOP to standardize the practice of quickly issuing written notices to sponsors to inform them about changes in FDA’s regulatory expectations for medical device submissions. FDA also recently developed an SOP to assure greater consistency in the review of device submissions when review staff change during the review. Additionally, in April 2010, FDA began a reviewer certification program for new FDA reviewers designed to improve the consistency of reviews. According to the overview of recent FDA actions to improve its device review programs, FDA also plans to implement an experiential learning program for new reviewers to give them a better understanding of how medical devices are designed, manufactured, and used.
The three industry stakeholders we interviewed told us that the time to final decision for device submissions has increased in recent years. This is consistent with our analysis, which showed that the average time to final decision has increased for completed 510(k) and original PMA cohorts since FY 2003. Additionally, stakeholders noted that FDA has increased the number of requests for additional information, which our analysis also shows. Stakeholders told us they believe the additional information being requested is not always critical for the review of the submission. Additional information requests increase the time to final decision but not necessarily the FDA review time because FDA stops the review clock when it requests additional information from sponsors. Two of the stakeholders stated that reviewers may be requesting additional information more often due to a culture of increased risk aversion at FDA or because they want to stop the review clock in order to meet performance goals. According to FDA, the most significant contributor to the increased number of requests for additional information—and therefore increased time to final decision—is the poor quality of submissions received from sponsors. In July 2011, FDA released an analysis it conducted of review times under the 510(k) program. According to FDA, in over 80 percent of the reviews studied for this analysis, reviewers asked for additional information from sponsors due to problems with the quality of the submission. FDA officials told us that sending a request for additional information is often the only option for reviewers besides issuing a negative decision to the sponsor. FDA’s analysis also found that 8 percent of its requests for additional information during the first review cycle were inappropriate.
Requests for additional information were deemed inappropriate if FDA requested additional information or data for a 510(k) that (1) were not justified, (2) were not permissible as a matter of federal law or FDA policy, or (3) were unnecessary to make a substantial equivalence determination. FDA has taken steps that may address issues with the number of inappropriate requests for additional information. For example, the overview of recent FDA actions indicates the agency is developing an SOP for requests for additional information that clarifies when these requests can be made for 510(k)s, the types of requests that can be made, and the management level at which the decision must be made. Three of the four consumer advocacy group stakeholders with whom we spoke stated that FDA is not adequately ensuring the safety and effectiveness of the devices it approves or clears for marketing. One of these stakeholders told us that FDA prioritizes review speed over safety and effectiveness. Two stakeholders also noted that the standards FDA uses to approve or clear devices are lower than the standards that FDA uses to approve drugs, particularly for the 510(k) program. Two stakeholders also expressed concern that devices reviewed under the 510(k) program are not always sufficiently similar to their predicates and that devices whose predicates are recalled due to safety concerns do not have to be reassessed to ensure that they are safe. Finally, three stakeholders told us that FDA does not gather enough data on long-term device safety and effectiveness through methods such as postmarket analysis and device tracking. These issues are similar to those raised elsewhere, such as a public meeting to discuss the reauthorization of the medical device user fee program, a congressional hearing, and an Institute of Medicine (IOM) report. 
For example, during a September 14, 2010, public meeting to discuss the reauthorization, consumer advocacy groups—including two of those we interviewed for our report—urged the inclusion of safety and effectiveness improvements in the reauthorization, including raising premarket review standards for devices and increasing postmarket surveillance. Additionally, during an April 13, 2011, congressional hearing, another consumer advocacy group expressed concerns about FDA’s 510(k) review process and recalls of high-risk devices that were cleared through this process. Finally, in July 2011, IOM released a report summarizing the results of an independent evaluation of the 510(k) program. FDA had requested that IOM conduct this evaluation to determine whether the 510(k) program optimally protects patients and promotes innovation. IOM concluded that clearance of a 510(k) based on substantial equivalence to a predicate device is not a determination that the cleared device is safe or effective. FDA has taken or plans to take steps that may address issues with the safety and effectiveness of approved and cleared devices, including evaluating the 510(k) program and developing new data systems. For example, FDA analyzed the safety of 510(k) devices cleared on the basis of multiple predicates by investigating an apparent association between these devices and increased reports of adverse events. FDA concluded that no clear relationship exists. FDA also conducted a public meeting to discuss the recommendations proposed in the IOM report in September 2011. FDA is also developing a device identification system that will allow FDA to better track devices that are distributed to patients, as well as an electronic reporting system that will assist with tracking and analyzing adverse events in marketed devices. 
While FDA has met most of the goals for the time frames within which the agency was to review and take action on 510(k) and PMA device submissions, the time that elapses before a final decision has been increasing. This is particularly true for 510(k) submissions, which comprise the bulk of FDA device reviews. Stakeholders we spoke with point to a number of issues that the agency could consider in addressing the cause of these time increases. FDA tracks and reports the time to final decision in its annual reports to Congress on the medical device user fee program, and its own reports reveal the same pattern we found. In its July 2011 analysis of 510(k) submissions, FDA concluded that reviewers asked for additional information from sponsors—thus stopping the clock on FDA’s review time while the total time to reach a final decision continued to elapse—mainly due to problems with the quality of the submission. FDA is taking steps that may address the increasing time to final decision. It is important for the agency to monitor the impact of those steps in ensuring that safe and effective medical devices are reaching the market in a timely manner. HHS reviewed a draft of this report and provided written comments, which are reprinted in appendix III. HHS generally agreed with our findings and noted that FDA has identified some of the same performance trends in its annual reports to Congress. HHS noted that because the total time to final decision includes the time industry incurs in responding to FDA’s concerns, FDA and industry bear shared responsibility for the increase in this time and will need to work together to achieve improvement. HHS also noted that in January 2011, FDA announced 25 specific actions that the agency would take to improve the predictability, consistency, and transparency of its premarket medical device review programs. 
Since then, HHS stated, FDA has taken or is taking actions designed to create a culture change toward greater transparency, interaction, collaboration, and the appropriate balancing of benefits and risk; ensure predictable and consistent recommendations, decision making, and application of the least burdensome principle; and implement efficient processes and use of resources. HHS also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Commissioner of FDA, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Cycles that were currently in progress at the time we received FDA’s data were included in this analysis. The average number of review cycles for the FY 2011 cohort may increase as those reviews are completed but will not decrease.

We treated PMA submissions as meeting the time frame for a given performance goal if they were reviewed within the goal time plus any extension to the goal time that may have been made. The only reason the goal time can be extended is if the sponsor submits a major amendment to the submission on its own initiative (i.e., unsolicited by FDA). The FYs 2010 and 2011 original PMA cohorts were considered still incomplete.
Specifically, for 18.5 percent of the FY 2010 original PMA cohort and 48.8 percent of the FY 2011 cohort, FDA had not yet made a decision that would permanently stop the review clock for purposes of determining whether FDA met its performance goals (i.e., an approval, approvable, not approvable, withdrawal, or denial) at the time we received FDA’s data; this includes reviews by CBER through September 30, 2011, and reviews by CDRH through December 1, 2011. As a result, it was too soon to tell what the final results for these cohorts would be. It is possible that some of the reviews taking the most time were among those not completed when we received FDA’s data. The percentage of original PMAs reviewed within 180 days for the FY 2010 and FY 2011 cohorts may increase or decrease as those reviews are completed; the number reviewed within 180 days and the number and percentage reviewed within 320 days and within 295 days may decrease as those reviews are completed. Only original PMAs that had received a decision permanently stopping the review clock were used to determine the number and percentage of original PMAs reviewed within 180 days, within 320 days, and within 295 days. Cycles that were currently in progress at the time we received FDA’s data were included in this analysis. The average number of review cycles for the incomplete cohorts may increase as those reviews are completed but will not decrease. This analysis includes only those original PMAs for which FDA or the sponsor had made a final decision; this includes reviews by CBER through September 30, 2011, and reviews by CDRH through December 1, 2011. For this analysis, the FYs 2009 through 2011 original PMA cohorts were considered still incomplete. Specifically, 22 percent of the FY 2009 original PMA cohort, 46.3 percent of the FY 2010 cohort, and 65.1 percent of the FY 2011 cohort had not yet received a final decision. As a result, it was too soon to tell what the final results for these cohorts would be. 
It is possible that some of the reviews taking the most time were among those not completed when we received FDA’s data. The percentages of final decisions that were approval, denial, or withdrawal and the average time to final decision for original PMAs not meeting the 295-day time frame for the FYs 2009 through 2011 cohorts may increase or decrease as those reviews are completed. The average number of review cycles for the FYs 2009 through 2011 cohorts may increase as those reviews are completed but will not decrease. For the FYs 2010 through 2011 cohorts, there were no original PMAs that had received a final decision that did not meet the 295-day time frame.

We treated PMA submissions as meeting the time frame for a given performance goal if they were reviewed within the goal time plus any extension to the goal time that may have been made. The only reason the goal time can be extended is if the sponsor submits a major amendment to the submission on its own initiative (i.e., unsolicited by FDA). The FYs 2010 and 2011 expedited PMA cohorts were considered still incomplete. Specifically, 33 percent of the FY 2010 expedited PMA cohort and 71.4 percent of the FY 2011 cohort had not yet received a final decision; this includes reviews by CBER through September 30, 2011, and reviews by CDRH through December 1, 2011. Additionally, for 16.7 percent of the FY 2010 expedited PMA cohort and 71.4 percent of the FY 2011 cohort, FDA had not yet made a decision that would permanently stop the review clock for purposes of determining whether FDA met its performance goals (i.e., an approval, approvable, not approvable, withdrawal, or denial) at the time we received FDA’s data. As a result, it was too soon to tell what the final results for these cohorts would be. It is possible that some of the reviews taking the most time were among those not completed when we received FDA’s data.
The percentage of expedited PMAs reviewed within 180 days for the FY 2010 and FY 2011 cohorts may increase or decrease as those reviews are completed; the number reviewed within 180 days and the number and percentage reviewed within 300 days and within 280 days may decrease as those reviews are completed. The percentages of final decisions that were approval, denial, or withdrawal and the average time to final decision for the FYs 2010 through 2011 cohorts may increase or decrease as those reviews are completed. The average number of review cycles for the FYs 2010 through 2011 cohorts may increase as those reviews are completed but will not decrease. Fiscal years for which there was no corresponding expedited PMA performance goal are denoted with a dash (—). “n/a” denotes not applicable. In these years, there was no corresponding expedited PMA performance goal and therefore no determination of whether the goal was met. For the FYs 2010 through 2011 cohorts, there were no expedited PMAs that had received a final decision that did not meet the 280-day time frame. 
FDA centers and offices:

Center for Devices and Radiological Health (CDRH)
    Office of Management Operations (OSM/OMO)
    Office of Information Technology (OIT)
    Office of Science and Engineering Laboratories (OST/OSEL)
    Office of Communication Education and Radiation Programs (OHIP/OCER)
    Office of Surveillance and Biometrics (OSB)
    Office of In Vitro Diagnostics (OIVD)
    Committee Conference Management (CCM)
Center for Biologics Evaluation and Research (CBER)
    Center Director’s Office, Office of Management (OM), Office of Information Management (OIM), and Office of Communication, Outreach, and Development (OCOD)
    Office of Cellular, Tissue & Gene Therapies
    Office of Vaccines Research & Review
    Office of Therapeutics Research & Review
    Office of Biostatistics & Epidemiology
    Office of Compliance & Biologics Quality
Office of Regulatory Affairs (ORA)
Office of the Commissioner (OC)
Shared Service (SS)

OCD includes Medical Device Fellowship Program employees even though the Fellows were assigned to work throughout CDRH. OIT was included in the OMO FTE total prior to FY 2008. OIVD did not exist prior to FY 2004. Also, the Radiology Devices Branch was moved from ODE to OIVD between FY 2009 and FY 2010. CCM was included in the OMO FTE total prior to FY 2008. Shared Service FTE were not separated from the center FTE until FY 2004.

In addition to the contact named above, Robert Copeland, Assistant Director; Carolyn Fitzgerald; Cathleen Hamann; Karen Howard; Hannah Marston Minter; Lisa Motley; Aubrey Naffis; Michael Rose; and Rachel Schulman made key contributions to this report.
The Food and Drug Administration (FDA) within the Department of Health and Human Services (HHS) is responsible for overseeing the safety and effectiveness of medical devices sold in the United States. New devices are generally subject to FDA review via the 510(k) process, which determines if a device is substantially equivalent to another legally marketed device, or the more stringent premarket approval (PMA) process, which requires evidence providing reasonable assurance that the device is safe and effective. The Medical Device User Fee and Modernization Act of 2002 (MDUFMA) authorized FDA to collect user fees from the medical device industry to support the process of reviewing device submissions. FDA also committed to performance goals that include time frames within which FDA is to take action on a proportion of medical device submissions. MDUFMA was reauthorized in 2007. Questions have been raised as to whether FDA is sufficiently meeting the performance goals and whether devices are reaching the market in a timely manner. In preparation for reauthorization, GAO was asked to (1) examine trends in FDA’s 510(k) review performance from fiscal years (FY) 2003-2010, (2) examine trends in FDA’s PMA review performance from FYs 2003-2010, and (3) describe stakeholder issues with FDA’s review processes and steps FDA is taking that may address these issues. To do this work, GAO examined FDA medical device review data, reviewed FDA user fee data, interviewed FDA staff regarding the medical device review process and FDA data, and interviewed three industry groups and four consumer advocacy groups. Even though FDA met all medical device performance goals for 510(k)s, the elapsed time from submission to final decision has increased substantially in recent years. This time to final decision includes the days FDA spends reviewing a submission as well as the days FDA spends waiting for a device sponsor to submit additional information in response to a request by the agency. 
FDA review time excludes this waiting time, and FDA review time alone is used to determine whether the agency met its performance goals. Each fiscal year since FY 2005 (the first year that 510(k) performance goals were in place), FDA has reviewed over 90 percent of 510(k) submissions within 90 days, thus meeting the first of two 510(k) performance goals. FDA also met the second goal for all 3 fiscal years it was in place by reviewing at least 98 percent of 510(k) submissions within 150 days. Although FDA has not yet completed reviewing all of the FY 2011 submissions, the agency was exceeding both of these performance goals for those submissions on which it had taken action. Although FDA review time decreased slightly from FY 2003 through FY 2010, the time that elapsed before FDA’s final decision increased substantially. Specifically, from FY 2005 through FY 2010, the average time to final decision for 510(k)s increased 61 percent, from 100 days to 161 days. FDA was inconsistent in meeting performance goals for PMA submissions. FDA designates PMAs as either original or expedited; those that FDA considers eligible for expedited review are devices intended to (a) treat or diagnose life-threatening or irreversibly debilitating conditions and (b) address an unmet medical need. While FDA met the performance goals for original PMA submissions for 4 out of 7 years the goals were in place, it met those goals for expedited PMA submissions only twice out of 7 years. FDA review time and time to final decision for both types of PMAs were highly variable but generally increased in recent years. For example, the average time to final decision for original PMAs increased from 462 days for FY 2003 to 627 days for FY 2008 (the most recent year for which complete data are available). The three industry groups and four consumer advocacy groups GAO interviewed noted a number of issues related to FDA’s review of medical device submissions. 
The four issues most commonly raised by stakeholders included (1) insufficient communication between FDA and stakeholders throughout the review process, (2) a lack of predictability and consistency in reviews, (3) an increase in time to final decision, and (4) inadequate assurance of the safety and effectiveness of approved or cleared devices. FDA is taking steps—including issuing new guidance documents, enhancing reviewer training, and developing an electronic system for reporting adverse events—that may address many of these issues. It is important for the agency to monitor the impact of those steps in ensuring that safe and effective medical devices are reaching the market in a timely manner. In commenting on a draft of this report, HHS generally agreed with GAO’s findings and noted that FDA has identified some of the same performance trends in its annual reports to Congress. HHS also called attention to the activities FDA has undertaken to improve the medical device review process.
For the past several decades, computer systems have typically used two digits to represent the year, such as “98” for 1998, in order to conserve electronic data storage and reduce operating costs. In this format, however, 2000 is indistinguishable from 1900 because both are represented as “00.” As a result, if not modified, systems or applications that use dates or perform date- or time-sensitive calculations may generate incorrect results beyond 1999. SSA has been anticipating the change of century since 1989, initiating an early response to the potential crisis. It made significant early progress in assessing and renovating mission-critical mainframe systems—those necessary to prevent the disruption of benefits—and has been a leader among federal agencies. Yet as our report of last October indicated, three key risks remained, mainly stemming from the large degree to which SSA interfaces with other entities in the sharing of information. One major risk concerned Year 2000 compliance of the 54 state Disability Determination Services (DDS) that provide vital support to the agency in administering SSA’s disability programs. The second major risk concerned data exchanges, ensuring that information obtained from outside sources—such as other federal agencies, state agencies, and private businesses—was not “corrupted” by data being passed from systems that were not Year 2000 compliant. SSA exchanges data with thousands of such sources. Third, such risks were compounded by the lack of contingency plans to ensure business continuity in the event of systems failure. Our report made several specific recommendations to mitigate these risks.
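The two-digit date ambiguity described above can be illustrated with a short sketch. This is a generic example of the problem and of the common "windowing" remediation technique, not SSA's actual code; the function name and the pivot value of 50 are assumptions chosen for illustration.

```python
def interpret_two_digit_year(yy: int, pivot: int = 50) -> int:
    """Windowing fix: two-digit years below the pivot are read as
    20xx, the rest as 19xx. The pivot of 50 is an illustrative
    assumption; real remediation projects chose pivots to fit
    the date ranges in their own data."""
    return 2000 + yy if yy < pivot else 1900 + yy

# The underlying defect: a stored "00" carries no century, so a
# naive elapsed-time calculation against a 1999 record goes wrong.
record_year = 0          # stored as "00", actually the year 2000
reference_year = 99      # stored as "99", the year 1999
naive_elapsed = record_year - reference_year   # yields -99, not +1

# With windowing, the same two-digit values resolve correctly.
elapsed = interpret_two_digit_year(record_year) - interpret_two_digit_year(reference_year)
print(naive_elapsed, elapsed)   # -99 versus 1
```

Windowing was only one of several remediation strategies; fully expanding stored dates to four digits, as SSA did for its mission-critical files, removes the ambiguity permanently at the cost of converting the data.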
These included (1) expeditious completion of the assessment of mission-critical systems at state DDS offices and the use of those results to establish specific plans of action, (2) stronger oversight by SSA of DDS Year 2000 activities, (3) discussion of the status of DDS Year 2000 activities in SSA’s quarterly reports to the Office of Management and Budget (OMB), (4) expeditious completion of SSA’s Year 2000 compliance coordination with all data exchange partners, and (5) development of specific contingency plans that articulate clear strategies for ensuring the continuity of core business functions. SSA agreed with all of our recommendations, and actions to complete them are underway. We understand that the states are in various stages of addressing the Year 2000 problem, but note that SSA has begun to monitor these activities; among other things, it is requiring biweekly status reports from the DDSs. Further, as of this week, the agency planned to have a contingency plan available at the end of the month. The resources that SSA plans to invest in acquiring IWS/LAN are enormous: Over 7 years the agency plans to spend about $1 billion during phase I to replace its present computer terminals with “intelligent” workstations and local area networks. As of March 1, SSA had completed installation of about 30,000 IWSs and 800 LANs, generally meeting or exceeding its phase I schedule. The basic intelligent workstation that SSA is procuring includes a (1) 15-inch color display monitor, (2) 100-megahertz Pentium workstation with 32 megabytes (MB) of random access memory, (3) 1.2-gigabyte hard (fixed) disk drive, and (4) 16-bit network card with adaptation cable. Preliminary testing has indicated that the IWS/LAN workstation random access memory will need to be upgraded from 32 MB to at least 64 MB. Last year SSA’s contractor, Unisys Corporation, submitted a proposal to upgrade to a processing speed higher than 100 megahertz at additional cost. 
Unisys noted that it was having difficulty in obtaining 100-megahertz workstations. Although personal computers available in today’s market are about three times this speed, SSA stated that the 100-megahertz processing speed does meet its current needs. The agency is, however, continuing to discuss this issue with Unisys. As the expected time period for implementation of IWS/LAN will span the change of century, it is obviously important that all components be Year 2000 compliant. SSA’s contract with Unisys does not, however, contain such a requirement. Moreover, SSA has acknowledged, and we have validated, that some of the earlier workstations that it acquired are not Year 2000 compliant. However, SSA maintains—and we have confirmed—that the operating system it has selected for IWS/LAN, Windows NT, corrects the particular Year 2000-related problem. SSA has also said that it is now testing all new hardware and software, including equipment substitutions proposed by Unisys, to ensure Year 2000 compliance before site installation. Phase II is intended to build upon acquisition of the initial IWS/LAN infrastructure, adding new hardware and software—such as database engines, scanners, and bar code readers—to support future process redesign initiatives. Contract award for phase II is planned for fiscal year 1999, with site installations between fiscal years 1999 and 2001. We have not identified any significant problems in SSA’s installation of IWS/LAN equipment at its field offices to date, and the agency has taken steps to minimize adverse impact on service to the public while installation takes place. Some state DDSs, however, have recently raised concerns about lack of control over their networks and inadequate response time on IWS/LAN service calls, resulting in some disruption to their operations. SSA currently maintains central control. 
Under this arrangement, problems with local equipment must be handled by SSA’s contractor, even though many DDSs feel they have sufficient technical staff to do the job. Because of this issue, states have said that they want SSA to pilot test IWS/LAN in one or more DDS offices to evaluate options that would allow states more flexibility in managing their networks. Florida, in fact, refused to accept more IWS/LAN terminals until this issue is resolved. SSA is now working with the DDSs to identify alternatives for providing the states with some degree of management control. Turning to managing the acquisition of information technology resources as an investment, SSA has—consistent with the Clinger-Cohen Act of 1996 and OMB guidance—followed several essential practices with IWS/LAN. This includes assessing costs, benefits, and risks, along with monitoring progress against competing priorities, projected costs, schedules, and resource availability. What SSA has not established, however, are critical practices for measuring IWS/LAN’s contribution toward improving mission performance. While it does have baseline data and measures that could be used to assess the project’s impact on performance, it lacks specific target goals and a process by which overall IWS/LAN impact on program performance can be gauged. Further, while OMB guidelines call for post-implementation evaluations to be completed, SSA does not plan to do this. In a September 1994 report, we noted that SSA had initiated action to identify cost and performance goals for IWS/LAN. SSA identified six categories of performance measures that could be used to track the impact of IWS/LAN technology on service delivery goals, and had planned to establish target productivity gains for each measure upon award of the IWS/LAN contract. At the conclusion of our review, however, SSA had not established targeted goals or a process for using performance measures to assess IWS/LAN’s impact on agency productivity improvements. 
According to officials, the agency has no plans to use these measures in this way because it believes the results of earlier pilots sufficiently demonstrated that savings will be achieved with each IWS/LAN installation, and because the measures had been developed in response to a General Services Administration (GSA) procurement requirement. Since GSA no longer performs this role, SSA sees these actions as no longer necessary. Yet without specific goals, processes, and performance measurements, it will be difficult to assess whether IWS/LAN improves service to the public. Further, the Clinger-Cohen Act requires agencies to develop performance measures to assess how well information technology supports their programs. Knowing how well such technology improvements are actually working will be critical, given the expected jump in SSA’s workload into the next century. The number of disability beneficiaries alone is expected to increase substantially between calendar years 1997 and 2005—from an estimated 6.2 million to over 9.6 million. Concurrent with phase I installation is development of the first major programmatic software application—the Reengineered Disability System (RDS)—to be installed on the IWS/LAN infrastructure. It is intended to support SSA disability claims processing under a new client/server environment. Pilot testing of RDS software to evaluate actual costs and benefits of the system and identify IWS/LAN phase II equipment needs began last August. However, performance and technical problems encountered during the RDS pilot have resulted in a planned 9-month delay—to July 1998—in implementing the pilot system in the first state, Virginia. This will likely cause corresponding delays in SSA’s schedule for acquiring and implementing IWS/LAN phase II equipment, and further delays in national implementation of RDS. 
How software is developed is another critical consideration; whether the modernized processes will function as intended and achieve the desired gains in productivity will depend in large measure on the quality of the software. Yet software development is widely seen as one of the riskiest areas of systems development. SSA has recognized weaknesses in its own capability to develop software, and is improving its processes and methods. This comes at a critical time, since the agency is beginning development of its new generation of software to operate on the IWS/LAN to support the redesigned work processes of a client/server environment. Significant actions that SSA has initiated include (1) launching a formal software process improvement program, (2) acquiring assistance from a nationally recognized research and development center in assessing its strengths and weaknesses and in assisting with improvement, and (3) establishing management groups to oversee software process improvement activities. Key elements of the software improvement program, however, are still lacking—elements without which progress and success cannot be measured. These are: specific, quantifiable goals, and baseline data to use in assessing whether those goals have been attained. Until such features are available, SSA will lack assurance that its improvement efforts will result in the consistent and cost-effective production of high-quality software. Our report recommends that as part of its recently initiated pilot projects, SSA develop and implement plans that articulate a strategy and time frames for developing baseline data, identifying specific goals, and monitoring progress toward achieving those goals. We are encouraged by SSA’s response, which included agreement and a description of steps it had begun to carry out these recommendations. For over 10 years, SSA has been providing, on request, a Personal Earnings and Benefit Estimate Statement (PEBES). 
The statement includes a yearly record of earnings, estimates of Social Security taxes paid, and various benefits estimates. Beginning in fiscal year 1995, such statements were sent annually to all eligible U.S. workers aged 60 and over; beginning October 1, 1999, the statements are to be sent to all eligible workers 25 and over—an estimated 123 million people. The public has generally found these to be useful in financial planning. In an effort to provide “world-class service” and be as responsive as possible to the public, SSA in March 1997 initiated on-line dissemination of PEBES to individuals via the Internet. The agency felt that using the Internet in this way would ensure that client data would be safeguarded and confidentiality preserved. Within a month, however, press reports of privacy concerns circulated, sparking widespread fear that the privacy of this information could not be guaranteed. SSA plans many initiatives using the Internet to provide electronic service delivery to its clients. As such, our testimony of last May before the Subcommittee on Social Security focused on Internet information security in general, describing its risks and approaches to making it more secure. The relative insecurity of the Internet makes its use as a vehicle for transmitting sensitive information—such as Social Security information—a decision requiring careful consideration. It is a question of balancing greater convenience against increased risk—not only that information would be divulged to those who should not have access to it, but also that the database itself could be compromised. For most organizations, a prudent approach to information security is three-pronged, including the ability to protect against security breaches at an appropriate level, detect successful breaches, and react quickly in order to track and prosecute offenders. 
The Internet security issue remains a daunting one, and SSA—like other federal agencies—will have to rely on commercial solutions and expert opinion; this is, however, an area in which there is no clear consensus. Shortly before our May testimony, the Acting Commissioner suspended on-line PEBES availability, promising a reexamination of the service that would include public forums around the country. After analyzing the results of those forums, the Acting Commissioner announced last September that a modified version of the on-line PEBES system would be available by the end of 1997. The new Commissioner, however, has placed implementation of the new system on hold. SSA has hired a private contractor to assess the risk of the modified system; we see this as an important, welcome step in determining the vulnerabilities involved in the use of the Internet. In summary, it is clear that SSA has made progress in dealing with its information technology challenges; it is equally clear, however, that such challenges will continue to face the agency, especially as it transitions to a new processing environment while concurrently dealing with the coming change of century. As a prime face of the government to virtually every American citizen, the stakes in how well the agency meets these continuing challenges are high. This concludes my statement. I would be happy to respond to any questions that you or other members of the Subcommittees may have at this time.
Pursuant to a congressional request, GAO discussed the information technology challenges facing the Social Security Administration and its recently appointed commissioner. GAO noted that: (1) SSA made significant early progress in assessing and renovating mission-critical mainframe systems--those necessary to prevent the disruption of benefits--and has been a leader among federal agencies; (2) yet as GAO's report of last October indicated, three key risks remained, mainly stemming from the large degree to which SSA interfaces with other entities in the sharing of information; (3) one major risk concerned year 2000 compliance of the 54 state Disability Determination Services (DDS) that provide vital support to the agency in administering SSA's disability programs; (4) the second major risk concerned data exchanges, ensuring that information obtained from outside sources--such as other federal agencies, state agencies, and private businesses--was not corrupted by data being passed from systems that were not year 2000 compliant; (5) SSA exchanges data with thousands of such sources; (6) third, such risks were compounded by the lack of contingency plans to ensure business continuity in the event of systems failure; (7) the resources that SSA plans to invest in acquiring Intelligent Workstation/Local Area Network (IWS/LAN) are enormous; (8) over 7 years the agency plans to spend about $1 billion during phase I to replace its present computer terminals with intelligent workstations and local area networks; (9) as of March 1, SSA had completed installation of about 30,000 IWSs and 800 LANs, generally meeting or exceeding its phase I schedule; (10) GAO has not identified any significant problems in SSA's installation of IWS/LAN equipment at its field offices to date, and the agency has taken steps to minimize adverse impact on service to the public while installation takes place; (11) at the conclusion of GAO's review, however, SSA had not established targeted goals or a 
process for using performance measures to assess IWS/LAN's impact on agency productivity improvements; (12) SSA has recognized weaknesses in its own capability to develop software, and is improving its processes and methods; and (13) SSA plans many initiatives using the Internet to provide electronic service delivery to its clients.
The CDFI Fund provides certification to CDFIs that meet the six statutory and regulatory criteria of the Fund. Certification is conferred on CDFIs that have the primary mission of providing capital and development services to economically distressed communities generally underserved by conventional financial institutions. CDFIs provide products and services (such as mortgage financing for low-income and first-time homebuyers and financing for not-for-profit affordable housing developers) that otherwise may not be accessible in these communities. CDFIs can be for-profit or nonprofit institutions and can be funded by private and public sources. Depository CDFIs such as community development banks and credit unions obtain capital from customers and nonmember depositors. Depository and nondepository CDFIs may obtain funding from conventional financial institutions, such as banks, in the form of loans. In addition, both types of CDFIs may receive funding from corporations, individuals, religious institutions, and private foundations. Finally, CDFIs may apply for federal grants and participate in federal loan programs. For example, Treasury’s CDFI Fund makes grants, equity investments, loans, and deposits to help CDFIs serve low-income people and communities. Other federal funding sources include loan programs administered by the Department of Agriculture and the Small Business Administration. As of December 31, 2014, there were a total of 933 certified CDFIs (411 depository and 522 nondepository). The 12 FHLBanks are regionally based cooperative institutions owned by member financial institutions (see fig. 1).
To become a member of a regional FHLBank, a financial institution (such as a nondepository CDFI) must meet certain eligibility requirements and purchase capital stock; thereafter, it must maintain an investment in the capital stock of the FHLBank sufficient to satisfy the minimum investment required for that institution in accordance with the FHLBank’s capital plan. On February 27, 2015, the FHLBank of Des Moines and the FHLBank of Seattle announced that the members of both FHLBanks had ratified an agreement approved by their boards of directors in September 2014 to merge. The FHLBanks anticipate that the merger will be effective by the middle of 2015. Each FHLBank accepts collateral for advances consistent with the risk-management policies of the FHLBank and applies haircuts based on factors such as risks associated with the member’s creditworthiness, the type of collateral being pledged, and illiquidity of the collateral. (Single-family mortgage loans, one type of eligible collateral, are loans for 1–4 unit properties.) The differences among nondepository CDFIs and other FHLBank members range from the degree to which they focus on community development to differences in size and supervision. Two member types—nondepository and depository CDFIs—share a primary community development focus. As noted previously, both types of CDFIs must have a primary mission of promoting community development to be certified by the CDFI Fund. CDFIs serve as intermediary financial institutions that promote economic growth and stability in low- and moderate-income communities. Frequently, CDFIs serve communities that are underserved by conventional financial institutions and may offer products and services that generally are not available from conventional financial institutions.
Such products and services include mortgage financing for low-income and first-time homebuyers; homeowner or homebuyer counseling; financing for not-for-profit affordable housing developers; flexible underwriting and risk capital for needed community facilities; financial literacy training; technical assistance; and commercial loans and investments to assist start-up businesses in low-income areas. Although other FHLBank members may provide similar services to similar populations, community development may not be their primary mission. Nondepository CDFIs are smaller in asset size than most depository institution and insurance company FHLBank members. As of December 31, 2014, active members of the FHLBank System had approximately $20 trillion in assets. As shown in table 1, as of the same date, median assets for nondepository CDFI members (approximately $43 million) were lower than median assets for both depository members (approximately $207 million) and insurance company members (approximately $975 million). The largest nondepository CDFI had about $708 million in assets, while the largest insurance company member had assets of about $393 billion and the largest depository member had assets of about $2 trillion. In addition, the 30 nondepository CDFI members altogether accounted for about 0.01 percent of the total assets of all active FHLBank members, whereas depository and insurance company members held about 77 percent and about 23 percent of FHLBank members’ assets, respectively. Finally, unlike other FHLBank members, nondepository CDFIs are not supervised by a prudential federal or state regulator.
Depository FHLBank members are regulated and supervised by federal and state agencies that have responsibility for helping ensure the safety and soundness of the financial institutions they oversee, promoting stability in the financial markets, and enforcing compliance with applicable consumer protection laws. To achieve these goals, regulators establish capital requirements for banks and conduct on-site examinations and off-site monitoring that assesses their financial condition, including assessing their compliance with applicable laws, regulations, and agency guidance. The insured depository institutions also must submit to their regulators quarterly financial information commonly known as Call Reports that follow generally accepted accounting principles (GAAP). Insurance companies are regulated primarily by state insurance commissioners and are subject to examination. While the CDFI Fund’s review standards are not equivalent to the examination standards applicable to regulated depository institutions, the Fund requires a nondepository CDFI to submit its most recent year-to-date financial statements prepared in conformity with GAAP for certification and funding eligibility. The CDFI Fund also requires nonprofit and for-profit nondepository CDFIs receiving awards to annually submit financial statements—including information on financial position, operations, activities, and cash flows—that have been audited by an independent certified public accountant. However, only a subset of CDFIs receives CDFI Fund awards and is subject to such reporting. In addition to financial statements of individual nondepository CDFIs, other sources can provide information on the financial performance of nondepository CDFIs overall or individually. For example, the CDFI Fund reports on its analysis of financial data from nondepository CDFIs.
The CDFI Snapshot Analysis for fiscal year 2012 (the most recent available at the time of our review) notes that community development loan funds, one type of nondepository CDFI, had rates of loan loss (loans that may prove uncollectible) of 1 percent, which compared favorably with depository CDFIs and mainstream financial institutions. A national network of CDFIs reported that its members’ annual net charge-off rate (debts an entity is unlikely to collect) was the same as for all FDIC-insured institutions in fiscal year 2012. It also noted that its members had provided more than $33 billion in cumulative financing for community development activities from their inception through the end of fiscal year 2012. This financing, the network reported, helped to create or maintain nearly 600,000 jobs, support the development or rehabilitation of more than 960,000 housing units, and start or expand nearly 94,000 businesses and microenterprises. And, for a fee, a community development loan fund can be assessed by an independent third party and receive a financial strength and performance rating; the third party rates a CDFI using a methodology similar to that used by banking regulators. In the case of financial failure, nondepository CDFIs and depository members also undergo different processes for liquidating assets to repay the FHLBanks for any advances. Depository members, including depository CDFIs, are insured by FDIC or NCUA, which means that FDIC or NCUA would serve as the receiver in the event of failure. In a typical bank or thrift failure, FDIC, acting as receiver, is responsible for outstanding advances of the failed institution.
FDIC will facilitate a purchase and assumption transaction with another financial institution or sell the failed institution’s assets, including collateral that had been pledged to secure the advances, to mitigate losses to FDIC’s Deposit Insurance Fund. Because nondepository CDFIs are not federally or state insured, according to FHFA, the FHLBanks likely would go through the federal bankruptcy process to settle claims should a nondepository CDFI with FHLBank advances fail. Collateral requirements (which must be met to obtain advances) rather than the membership requirements themselves can discourage nondepository CDFIs from seeking FHLBank membership. Because regulations allow the FHLBanks to set their own thresholds for meeting some membership requirements, the requirements varied. The rates of nondepository CDFI membership also varied by FHLBank and were low. The FHLBanks generally impose collateral requirements on nondepository CDFIs that are comparable to those imposed on depository members categorized as higher risk and in some cases, comparable to those imposed on insurance companies. Officials from the nondepository CDFIs we interviewed generally cited steep haircuts (discounts) and the availability of eligible collateral as the primary challenges to obtaining advances; in addition, some viewed the requirements as a disincentive to seeking membership (because advances are a primary benefit of membership). While nondepository CDFIs must meet seven standards for FHLBank membership, the thresholds the FHLBanks set for meeting certain of the requirements varied. The Federal Home Loan Bank Act and FHFA’s regulations establish the membership requirements for nondepository CDFIs. Nondepository CDFIs must be duly organized under tribal law, or the laws of any state or the United States; be certified by the CDFI Fund; and make long-term home mortgage loans, which are defined by statute to include loans secured by first liens on residential real property.
Under FHFA regulations, institutions satisfy this requirement if they originate or purchase long-term first mortgage loans on single-family or multifamily residential property, or certain farm or business property that also includes a residence, or purchase mortgage pass-through securities representing an undivided ownership in such loans. By regulation, FHFA has defined “long-term” loans to include those with an original term to maturity of 5 years or more. Nondepository CDFIs also must be in a financial condition that would allow advances to be safely made to them. FHFA developed four financial condition standards for the FHLBanks to use in their assessments—a net asset ratio of at least 20 percent; positive average net income over the preceding 3 years; a ratio of loan loss reserves to loans and leases 90 days or more delinquent of at least 30 percent; and an operating liquidity ratio of at least 1.0 for the 4 most recent quarters, and for 1 or both of the 2 preceding years. If the nondepository CDFI met the standards, it would be presumed to be financially sound, and satisfy the requirement. If the CDFI did not meet one or more standards, the CDFI may offer a rebuttal and the FHLBank would perform a separate analysis to determine if the CDFI was financially sound. In addition, applicants must have management whose character is consistent with sound and economical home financing. Under FHFA’s regulations, an applicant meets this requirement if it certifies to the FHLBank that neither the CDFI nor its senior officials have been the subject of any criminal, civil, or administrative proceedings reflecting upon creditworthiness, business judgment, or moral turpitude in the past 3 years and that there are no known potential criminal, civil, or administrative monetary liabilities, lawsuits, or unsatisfied judgments arising within the past 3 years that are significant to the applicant’s operations. Applicants also must have a home financing policy that is consistent with sound and economical home financing.
Under FHFA regulations, applicants meet this requirement if they provide a written justification, acceptable to the FHLBank, explaining how and why their home financing policy is consistent with the FHLBank System’s housing finance mission. Finally, nondepository CDFIs must have mortgage-related assets that reflect a commitment to housing finance; they are not required to meet the statutory requirement that applies to certain insured depository institutions to hold at least 10 percent of their assets in residential mortgage loans to be eligible for FHLBank membership. In addition, the FHLBanks also must require all new members to purchase capital stock. The FHLBanks have discretion in developing rules to assess compliance with some of the listed requirements. For example, the FHLBanks can set thresholds (such as dollar amounts or percentages) to satisfy requirements for which FHFA has not set thresholds—such as the requirement for making long-term home mortgage loans and the requirement to hold mortgage-related assets. Each FHLBank also may develop its own requirement for membership stock purchases, subject to FHFA approval. We reviewed the three requirements for which the FHLBanks have discretion in making rules and found that the requirements varied across the FHLBanks. Making long-term mortgages. Eight of the 12 FHLBanks we reviewed had not developed a threshold for nondepository CDFIs to satisfy the long-term mortgage requirement, while four had specified a dollar amount or percentage of assets in long-term mortgage loans. FHFA expects that in assessing the applicant, the FHLBanks will assess the extent to which nondepository CDFIs have a commitment to housing finance, in light of their unique mission and community development orientation. The four FHLBanks that had quantitative minimums had minimum requirements that ranged from $1,000 to $1 million in dollar amounts, and from 1 percent to 2 percent of total assets.
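The four FHFA financial condition standards described earlier amount to a simple presumptive screen. The sketch below is illustrative only: the function name and input structure are assumptions rather than FHFA's actual methodology, and an applicant failing the screen may still rebut the presumption.

```python
def presumed_financially_sound(
    net_asset_ratio: float,     # net assets / total assets
    avg_net_income_3yr: float,  # average net income over the preceding 3 years
    reserve_ratio: float,       # loan loss reserves / loans 90+ days delinquent
    liquidity_ratios: list,     # operating liquidity ratios for the relevant periods
) -> bool:
    """Apply the four FHFA financial condition standards (illustrative sketch).

    Returns True when all four standards are met, in which case the applicant
    is presumed financially sound; otherwise the CDFI may offer a rebuttal and
    the FHLBank performs a separate analysis.
    """
    return (
        net_asset_ratio >= 0.20          # net asset ratio of at least 20 percent
        and avg_net_income_3yr > 0       # positive average net income, preceding 3 years
        and reserve_ratio >= 0.30        # reserves-to-delinquent ratio of at least 30 percent
        and all(r >= 1.0 for r in liquidity_ratios)  # liquidity ratio of at least 1.0
    )
```

Under this sketch, a CDFI with a 25 percent net asset ratio, positive 3-year income, adequate reserves, and liquidity ratios at or above 1.0 would pass the screen, while one with a 15 percent net asset ratio would not (and would have to rebut).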
One FHLBank’s stated policy included an exemption from its particular minimum requirement for nondepository CDFIs that plan to incorporate long-term mortgage loans into future business strategies. Another FHLBank that had a dollar minimum recently gave a nondepository CDFI an exemption from the minimum requirement based on the assessment that the CDFI had significant commitment to housing in accordance with regulatory and membership requirements. For the remaining eight FHLBanks that did not set a minimum requirement, nondepository CDFIs can satisfy the long-term mortgage requirement by documenting that they have originated or purchased more than one such loan or qualifying mortgage investment. Mortgage-related assets. Four of the 12 FHLBanks we reviewed did not have minimum requirements for the mortgage-related asset requirement, 5 had quantitative and qualitative measures (such as an assessment of the CDFI’s housing-related activities and mission), and 3 had only quantitative measures. The highest minimum quantitative requirement for mortgage-related assets as a percentage of total assets was 10 percent. The three FHLBanks with only quantitative requirements had the lowest requirements, with one FHLBank requiring two mortgage-related assets, another requiring $1,000 in mortgage-related assets, and another requiring the lower of 1 percent of total assets or $10 million in mortgage-related assets. Stock purchases. The amount of stock that members must purchase varied according to each FHLBank’s funding strategy (see table 2). FHLBank members must hold a certain amount of membership capital stock as a continuing condition of membership. Each FHLBank determines as a part of its capital plan the amounts that all members must purchase in membership capital stock and sets its requirement based on the FHLBank’s business model. Five of the 12 FHLBanks we reviewed calculated the membership stock purchase as a percentage of the member’s total assets.
The other 7 FHLBanks calculated the purchase as a percentage of a specific asset category, such as mortgage-related assets or certain assets eligible to be pledged as collateral. The FHLBanks also require members to purchase activity-based stock. That is, members must acquire a specific amount of stock based on the product—such as advances or letters of credit—the FHLBank provided to that member. The purchases are specified as a percentage of the dollar amount of each transaction the member conducted with the FHLBank. For example, among the 12 FHLBanks, the purchase requirements on advances ranged from 2 percent to 5 percent. For instance, if a member had a $2 million advance transaction with the FHLBank, it would have to purchase from $40,000 to $100,000 in capital stock. While FHLBank and CDFI industry officials we interviewed cited several membership requirements that could pose a challenge for nondepository CDFI applicants (including financial condition, long-term home mortgage loan, mortgage-related assets, and stock purchase requirements), most of the nondepository CDFIs we interviewed were able to meet these requirements or stated that they would be able to meet the requirements. Financial condition requirements. Officials we interviewed from 9 of the 12 nonmember nondepository CDFIs stated that they would be able to meet the financial condition standards, while 2 stated that they would potentially face challenges with the financial condition standards. In addition to interviewing officials from nondepository CDFIs that were nonmembers, we reviewed the applications of the 27 nondepository CDFIs that were members as of September 2014. Seven of the 27 nondepository CDFIs did not meet at least one of the financial condition standards at the time of their application, but made successful rebuttals and became members. Making long-term mortgages. 
Of the 12 nonmember nondepository CDFIs we interviewed, officials from 1 cited the “makes long-term home mortgage loans” requirement as a challenge for membership. In addition, officials from 1 of the 10 member nondepository CDFIs we interviewed cited this as a challenge, but noted that they received an exemption from the minimum quantitative requirement imposed by the FHLBank. The officials from the remaining 11 nonmember and 9 member CDFIs did not identify this requirement as a challenge. Officials from two FHLBanks stated that CDFIs in general may face challenges meeting this requirement, as some nondepository CDFIs may not make or hold long-term home mortgage loans if they are not involved in mortgage lending. Mortgage-related assets. Although the mortgage-related asset requirement varies among the FHLBanks, none of the officials from the 12 nonmember nondepository CDFIs we interviewed stated that they would face challenges meeting this requirement. Stock-purchase requirements. Officials from 1 of the 12 nonmember CDFIs we interviewed stated that the amount of membership stock they would be required to purchase was cost prohibitive, while officials from the 10 member CDFIs we interviewed stated that the amount required was not a challenge to membership. Nondepository member CDFIs we interviewed were able to purchase the required amount of membership stock. Officials from one nonmember nondepository CDFI in the FHLBank-Chicago district said that the CDFI was approved for membership, but did not become a member because the stock purchase requirement was too high. FHLBank-Pittsburgh recently amended its capital plan by lowering the membership and activity-based stock purchase calculations, citing benefits to CDFIs. In addition, FHLBank-Chicago recently reduced its minimum membership stock purchase requirement to make it less costly for nondepository CDFIs and others to join. (We discuss these and other changes later in this report.)
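The activity-based stock purchase arithmetic described earlier (a percentage of each transaction's dollar amount) can be sketched as follows. The function is a hypothetical illustration; only the 2 to 5 percent range on advances and the $2 million example come from the report.

```python
def activity_stock_purchase(transaction_amount: float, purchase_rate: float) -> float:
    """Capital stock a member must buy for a given transaction with its FHLBank.

    purchase_rate is the FHLBank's activity-based requirement expressed as a
    fraction of the transaction amount (for advances, 2 to 5 percent among
    the 12 FHLBanks, per the report).
    """
    return transaction_amount * purchase_rate

# The report's example: a $2 million advance requires between $40,000
# (at a 2 percent requirement) and $100,000 (at 5 percent) in capital stock.
low = activity_stock_purchase(2_000_000, 0.02)
high = activity_stock_purchase(2_000_000, 0.05)
```

This reproduces the report's $40,000 to $100,000 range for a $2 million advance.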
The rates of nondepository CDFI membership generally were low, ranging from 2.08 percent to 15.38 percent of nondepository CDFIs in each FHLBank district (see fig. 2). As of December 31, 2014, 30 of the 522 nondepository CDFIs were FHLBank members, and 6 of the 12 FHLBanks had membership rates of less than 5 percent for the nondepository CDFIs in their districts. The number of nondepository CDFI members has increased every year since the first joined in 2010. Forty percent (12 of 30) of the current nondepository CDFI members joined the FHLBank System in 2014. As of the end of 2014, all 12 FHLBanks had at least one nondepository CDFI member; 2 approved their first nondepository CDFI member in 2013 and another 3 did so in 2014. According to FHFA officials, some nondepository CDFIs may not be good candidates for FHLBank membership. They noted that the majority of nondepository CDFIs make nonhousing loans such as microloans, small business loans, and commercial loans. In addition, FHFA officials stated that many of the nondepository CDFIs engaged in housing-related activities have low asset volumes. Due to the differences between nondepository CDFIs and other FHLBank members discussed earlier, representatives from the FHLBanks stated that nondepository CDFIs have certain risks that depository members do not have. The risks cited included the lack of supervision by a regulator and uncertainty related to the liquidation process in the event of insolvency. As noted previously, the FHLBanks are required by statute and FHFA regulations to develop and implement collateral standards and other policies to mitigate the risk of default on outstanding advances. To address risks associated with nondepository CDFIs, the FHLBanks can place limits on eligible collateral and generally impose collateral requirements on nondepository CDFIs seeking advances that are comparable to those imposed on depository members categorized as higher risk and, in some cases, insurance companies. 
Some of the CDFIs and FHLBanks we interviewed cited these collateral requirements as a disincentive for nondepository CDFI membership. Although they are allowed by regulation to accept certain types of collateral from all of their members, some FHLBanks have chosen to limit the types of eligible collateral that nondepository CDFIs can pledge. (This is also sometimes the case for other nondepository members such as insurance companies.) FHLBanks can accept FHLBank deposits as collateral. The securities collateral FHLBanks can accept includes U.S. Treasury and agency securities, U.S. agency mortgage-backed securities, and privately issued mortgage-backed securities (including residential and commercial). The types of mortgage collateral that FHLBanks can accept include single-family and multifamily mortgage loans; mortgage or other loans issued, insured, or guaranteed by the U.S. government or its agencies; commercial real estate loans; and home equity loans or lines of credit. Nondepository CDFIs are eligible to pledge FHLBank deposits, securities, and mortgage loans as collateral for advances at all 12 FHLBanks. During the course of our work, three FHLBanks—Atlanta, New York, and Pittsburgh—changed their policies to allow mortgage loans as eligible collateral from nondepository CDFIs. Pittsburgh changed its policies in August 2014, New York in September 2014, and Atlanta in December 2014. All the other FHLBanks have had policies that allowed mortgage loans as eligible collateral from nondepository CDFIs since nondepository CDFIs became eligible for membership in 2010. Officials from FHLBanks in Atlanta, New York, and Pittsburgh stated that due to the different risks posed by nondepository CDFIs, they initially took conservative stances on accepting loan collateral. The risks they cited included the lack of a clear resolution mechanism in the case of bankruptcy and the FHLBank not being able to obtain blanket liens on pledged collateral. 
Within the general collateral categories (such as securities and mortgage loans), each FHLBank can impose specific collateral eligibility requirements, such as the quality of the collateral. For example, for nondepository CDFIs, one FHLBank disallows nonagency mortgage-backed securities, another FHLBank disallows commercial real estate collateral, and five FHLBanks disallow home equity lines of credit or home equity loans. At two FHLBanks, nondepository CDFIs can pledge mortgage loan collateral only if the CDFIs have certain credit ratings. The collateral requirements—specifically, the pledge method and haircuts—applicable to nondepository CDFIs seeking advances are comparable to those generally imposed on depository members categorized as higher risk and, in some cases, to those imposed on insurance companies. Based on our review of each FHLBank’s policies, all FHLBanks evaluate the creditworthiness and financial condition of their members, including nondepository CDFIs. Factors included in many of the evaluations are capital adequacy, asset quality, management quality, earnings, and liquidity. Additionally, the FHLBanks (with the exception of Topeka) assign credit ratings to their depository members that indicate the creditworthiness and financial condition of these members. Of the 11 FHLBanks that assign credit ratings to depository members, 9 also assign credit ratings to nondepository CDFIs, with 2 (Atlanta and San Francisco) using a separate rating system specific to nondepository CDFIs. The remaining FHLBanks (New York and Indianapolis) do not assign credit ratings to nondepository CDFIs. While the metrics and methodology used to evaluate members differ, policies across FHLBanks generally reflect differential treatment between depository institutions and nondepository CDFIs (and other nondepository institutions such as insurance companies).
For example, all FHLBanks require nondepository members to deliver collateral but generally only depository members with low credit ratings are required to list or deliver collateral. The FHLBanks differed in the extent to which they varied haircuts (discounts) for nondepository CDFIs and depository institutions. For securities collateral, eight FHLBanks imposed the same haircut on nondepository CDFIs as on depository members for all eligible types of securities collateral. In contrast, four imposed higher haircut ranges on nondepository CDFIs. For loan collateral, six FHLBanks generally applied the same haircuts to nondepository CDFIs and depository institutions. One applied a higher-range haircut for single-family mortgages to nondepository CDFIs than to depository institutions; five FHLBanks applied higher haircut ranges to nondepository CDFIs than to depository institutions; and another FHLBank applied the lower end of the haircut range to nondepository CDFIs. FHLBanks generally varied the haircut based on the types and quality of collateral, credit score or financial condition of the member, and pledge method (for loans). In general, haircuts were higher for collateral with lower ratings or of lower quality. See tables 3 and 4 for the specific haircuts each FHLBank imposed on nondepository CDFIs and depository institutions for securities and loan collateral. In all cases, each FHLBank may change these requirements at its discretion. See appendix II for more information on each FHLBank’s credit rating system and collateral requirements for advances, and how they may differ for nondepository CDFIs and depository institutions. Four FHLBanks—Des Moines, New York, Pittsburgh, and San Francisco—had conditions on advance terms and borrowing limits specific to nondepository CDFIs. In general, advance terms and conditions varied widely. For example, FHLBanks offered advances with terms to maturity ranging from overnight to 30 years.
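The haircut mechanics described above can be sketched in a few lines of Python. The percentages and collateral balance below are illustrative assumptions, not any FHLBank's actual schedule; they simply show how a larger haircut on the same pledged collateral yields a smaller lending value for a nondepository CDFI than for a depository member.

```python
def lending_value(collateral_value, haircut_pct):
    """Lending value of pledged collateral after applying a haircut (discount).

    A 20 percent haircut on $1 million of pledged mortgage loans
    supports at most $800,000 in advances.
    """
    return collateral_value * (1 - haircut_pct / 100)

# Hypothetical haircut schedule: some FHLBanks apply a higher haircut
# range to nondepository CDFIs than to depository members for the same
# collateral type. These specific percentages are invented.
depository_haircut = 20   # percent (illustrative)
cdfi_haircut = 35         # percent (illustrative)

pledged = 1_000_000       # dollars of single-family mortgage loans
print(lending_value(pledged, depository_haircut))  # 800000.0
print(lending_value(pledged, cdfi_haircut))        # 650000.0
```

On these assumed numbers, the same $1 million of collateral supports $150,000 less in advances for the nondepository CDFI, which is one way steeper haircuts act as a disincentive to borrowing.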
FHLBanks may establish an overall credit limit for their borrowers. For example, the overall credit limit for FHLBank-Chicago was 35 percent of a member’s total assets. However, the amount a borrower can obtain also depends partly on the amount and value of qualifying collateral available to secure the advance. FHLBanks may impose additional restrictions depending on the financial condition of the borrower, such as restrictions on the type of product, term of advance, and amount of credit available. Examples of specific conditions imposed on nondepository CDFIs by the four FHLBanks include the following: FHLBank-Des Moines imposed a maximum amount of borrowing capacity and term available based on member credit ratings. Nondepository CDFIs were subject to a lower borrowing capacity than depository institutions with the same ratings. FHLBank-New York limited the maximum advance term to 5 years for nondepository CDFIs. FHLBank-Pittsburgh limited the maximum advance term to 2 years for nondepository CDFIs. FHLBank-San Francisco had a term limit of 7 years for its nondepository CDFIs. For more information on each FHLBank’s advance terms and borrowing limits for nondepository CDFIs and depository institutions, see appendix III. Officials from most of the nondepository CDFIs we interviewed cited access to low interest-rate advances from the FHLBanks as the primary benefit of membership, and some FHLBank and nondepository CDFI officials cited collateral requirements as challenges or disincentives to obtaining advances. Officials from three FHLBanks stated that the lack of eligible collateral was a disincentive for nondepository CDFIs seeking membership. Officials from 21 (10 members and 11 nonmembers) of the 22 nondepository CDFIs we interviewed cited access to low interest-rate advances from the FHLBanks as the primary benefit of membership.
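The interplay between an overall credit limit and qualifying collateral described above can be sketched as follows. The 35 percent limit echoes the FHLBank-Chicago figure cited earlier, while the member's total assets, collateral balance, and haircut are invented for illustration.

```python
def borrowing_capacity(total_assets, credit_limit_pct,
                       collateral_value, haircut_pct):
    """Maximum advance amount: the lesser of the overall credit limit
    and the post-haircut lending value of qualifying collateral."""
    credit_limit = total_assets * credit_limit_pct / 100
    collateral_capacity = collateral_value * (1 - haircut_pct / 100)
    return min(credit_limit, collateral_capacity)

# Hypothetical member: $50 million in assets, $10 million of qualifying
# collateral subject to a 30 percent haircut (all figures invented).
capacity = borrowing_capacity(50_000_000, 35, 10_000_000, 30)
print(capacity)  # 7000000.0 -- collateral, not the credit limit, binds
```

In this sketch the credit limit would allow $17.5 million of advances, but the discounted collateral supports only $7 million, illustrating why a lack of eligible collateral can cap borrowing well below a member's nominal credit limit.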
Officials from 5 of the 12 nonmember nondepository CDFIs interviewed said that they would not be interested in membership if they could not obtain advances. Officials from 10 FHLBanks and 12 (6 members and 6 nonmembers) nondepository CDFIs stated that lack of eligible collateral was a challenge to obtaining advances for nondepository CDFIs. The reasons the officials provided for lack of collateral eligibility included not possessing mortgage-related collateral, not having unencumbered assets (those free and clear of liens or claims by other creditors), and not having quality collateral that met FHLBank standards. For example, officials from FHLBank-Chicago stated that most nondepository CDFIs possessed assets, such as small business loans, that did not qualify based on statute and regulation as eligible collateral. Officials from four FHLBanks and seven nondepository CDFIs (three members and four nonmembers) stated that the requirement to pledge unencumbered assets was a challenge for nondepository CDFIs. Collateral encumbrance may occur when a CDFI is also a loan consortium that makes loans to borrowers on behalf of its members. Quality of collateral also affected collateral eligibility. For instance, officials from FHLBank-Cincinnati provided an example of a nondepository CDFI member whose collateral consisted exclusively of subprime mortgage loans. Due to the FHLBank’s constraints on exposure to subprime residential mortgage loan collateral (no more than 60 percent of borrowing capacity could stem from these loan types), the FHLBank was not able to accept the loans as collateral. Steep haircuts were cited as a disincentive to applying for advances. Officials from 6 (2 members and 4 nonmembers) of the 22 nondepository CDFIs we interviewed cited high haircuts as a disincentive for obtaining advances. 
For example, officials from a nondepository CDFI member said that their haircuts were very steep and that they likely will not obtain advances again unless the FHLBank eased the requirements. Officials from a nonmember nondepository CDFI in another district stated that the haircut was too restrictive. Officials from all the member nondepository CDFIs we interviewed said that FHLBank membership had not affected their business activities or that they had not considered changing their business activities to better meet the collateral requirements. However, officials from three of the nonmember nondepository CDFIs we interviewed said that they have been taking actions to obtain assets that could be used as eligible collateral. One of these nonmember nondepository CDFIs was buying mortgage-backed securities to better meet collateral requirements. Additionally, officials from five FHLBanks said that their nondepository CDFI members had changed the structure of certain loans or repositioned their assets to create eligible collateral for advances. From October 2010 to September 2014, less than half of the nondepository CDFI members obtained advances from the FHLBanks. Six FHLBanks provided 115 advances totaling about $306.7 million to 12 nondepository CDFIs during this period (see fig. 3). However, two FHLBanks provided 57 advances to four nondepository CDFIs that accounted for almost 98 percent of the total advance amount. Of the 115 advances, approximately 36.5 percent had terms of less than 1 year (including advances with overnight terms), 15.7 percent had terms of more than 1 year to less than 5 years, 44.3 percent had terms of 5 years or longer, and 3.5 percent had open terms. FHFA and FHLBanks have made efforts to broaden the participation of nondepository CDFIs in the FHLBank System. 
According to FHFA officials, FHFA’s final rule implementing the HERA provisions that allow nondepository CDFI membership in the FHLBank System allows for certain flexibilities in meeting membership requirements. FHFA oversight of FHLBanks did not focus on FHLBanks’ membership approval process or advance and collateral practices as it relates to nondepository CDFIs and did not identify any safety and soundness concerns or action plans. FHFA and the FHLBanks have undertaken several efforts to help promote membership of nondepository CDFIs in the FHLBank System. As noted previously, FHFA’s final rule to implement HERA provisions on nondepository CDFI membership in the FHLBank System allows for certain flexibilities in meeting membership requirements. In 2009, FHFA drafted a proposed rule that sought to amend the membership regulations and issued it for public comment. The substantive issues raised in the comments on membership focused on the criteria that FHFA proposed for FHLBanks to use in evaluating the financial condition of nondepository CDFIs applying for membership. According to FHFA officials, the CDFI community also was concerned about nondepository CDFIs not meeting basic membership requirements, such as making long-term mortgage loans and carrying mortgage-related assets. FHFA reviewed the comments and issued a final rule in January 2010. If an applicant cannot meet the presumptive financial conditions, the final FHFA regulations allow nondepository CDFIs to submit additional information demonstrating that the applicant is in sufficiently sound condition to obtain membership and advances. The final rule also did not extend the requirement to demonstrate that 10 percent of their total assets are in residential mortgage loans to nondepository CDFI applicants. 
FHFA oversight of FHLBanks as it relates to nondepository CDFIs did not focus on membership processes due to the low risk posed, and its oversight of collateral practices did not identify areas of concern. FHFA conducts annual examinations of the FHLBanks that cover these topics, among others. According to FHFA officials, FHFA examines FHLBanks’ membership approval processes to ensure that they comply with FHFA’s eligibility requirements and implement a risk-management process that is intended to mitigate the FHLBanks’ exposure to significant risks, especially legal, credit, and operational risk. FHFA reviewed aspects of each FHLBank’s membership process periodically in 2010 through 2013. However, according to FHFA, it did not focus on processes specific to nondepository CDFIs because nondepository CDFIs pose low safety and soundness and credit risks, in aggregate, to FHLBanks due to their low rates of membership and advances. According to FHFA officials, FHFA currently reviews each nondepository CDFI’s application for membership and has not objected to any nondepository CDFI application submitted by the FHLBanks. It primarily reviews applications to gather information about the FHLBanks’ membership approval process. In annual examinations of each FHLBank in 2010 through 2013, FHFA reviewed the FHLBanks’ collateral and advance practices for nondepository CDFIs and did not find any safety and soundness issues. FHFA’s advances and collateral examination manual calls for it to evaluate the FHLBanks’ procedures for analyzing and monitoring members, including nondepository CDFIs, and their outstanding advances. The manual also advises that special attention be given to FHLBanks’ collateral practices for CDFIs because nondepository CDFIs have no dedicated regulator. Furthermore, FHFA advises that FHLBanks’ credit risk-management procedures be tailored to address risks unique to each member type. 
For example, FHLBanks should consider that nondepository CDFIs likely are covered by federal bankruptcy statutes and not by the same receivership laws as insured depository institutions. FHFA and the FHLBanks have undertaken several efforts to help educate nondepository CDFIs about and promote membership in the FHLBank System. According to FHFA officials, FHFA conducted a training session and webinar on the membership rule in February 2009, followed up on questions from CDFIs about the regulations, and tracked the progress of nondepository CDFIs in gaining membership. Officials from FHFA have made themselves available for questions about and problem solving in relation to the rules. According to FHFA and FHLBank officials as well as nondepository CDFIs we interviewed, FHFA has been encouraging FHLBanks to discuss ways in which they could increase nondepository CDFI membership and access to advances in a safe and sound manner. For example, at a speech to the FHLBank boards and executive management in early 2014, FHFA encouraged all the FHLBanks to meet collectively to discuss collateral practices that might facilitate advance activity with nondepository CDFIs, and emphasized the importance of the FHLBanks’ understanding of CDFI business models and funding needs. According to FHFA officials, as a result of that speech, the FHLBanks held a conference in August 2014 with the nondepository CDFI community to discuss facilitating membership and better understand the business of nondepository CDFIs. As a follow-up to the conference, FHLBank credit officers held nondepository CDFI credit review training in October 2014. Furthermore, the FHFA Director also met with nondepository CDFI officials and trade groups in July 2014. In addition, all FHLBanks performed their own outreach to the nondepository CDFI community. For example, all the FHLBanks met with FHFA and nondepository CDFI members and nonmembers at the August 2014 conference to better understand nondepository CDFIs. 
Ten of the FHLBanks we interviewed have initiated discussions with and solicited membership applications from nondepository CDFIs since the conference. Some FHLBanks made changes in response to feedback from nondepository CDFI members. As noted previously, three of the FHLBanks that had restrictive collateral eligibility requirements amended these requirements to make obtaining advances easier for nondepository CDFIs. Two FHLBanks also made changes to their capital stock purchase requirements to allow a nondepository CDFI to be able to meet the stock purchase amount. According to the FHLBank officials, FHFA has been supportive of the changes they made to better accommodate nondepository CDFI membership and access to advances. FHFA officials told us that they have continued to encourage the FHLBanks to facilitate broader nondepository CDFI membership and access to advances. We provided a draft of this report to FHFA and the 12 FHLBanks for their review and comment. FHFA and four FHLBanks (Chicago, Cincinnati, Indianapolis, and Topeka) provided technical comments, which we incorporated as appropriate. The other eight FHLBanks did not provide any comments. In its comments, FHLBank-Chicago also stated that our report unfairly compares nondepository CDFIs with depository institutions and that a better comparison would be regulated institutions versus nonregulated or less regulated institutions (because claims would be handled similarly for regulated institutions). Specifically, FHLBank-Chicago noted that an FHLBank likely would go through the federal bankruptcy process to settle claims if a nondepository CDFI with FHLBank credit outstanding failed, whereas a federal or state regulator would facilitate the process to settle claims if a regulated institution such as a bank, credit union, or insurance company with FHLBank credit outstanding failed. 
However, the purposes of our report explicitly include discussing how nondepository CDFIs differ from other members of the FHLBank System (in particular, depository members) and the membership and collateral requirements for these CDFIs. We understand that risks vary by type of institution and noted several differences—including in supervision and the liquidation of assets—between nondepository CDFIs and other types of FHLBank members in our report. Comparing the collateral requirements for nondepository CDFIs with those for depository institutions enabled us to determine how the FHLBanks address the different risks posed by nondepository CDFIs. Moreover, in terms of resolution treatments, there is no uniform approach to settling claims even within the category of “regulated institutions.” For instance, FHFA stated in one of its advisory bulletins that “FHLBanks face risks lending to insurance companies that differ in certain respects with lending to federally-insured depository institutions” and noted that “laws dealing with a failed insured depository institution are well known and uniform across the country, whereas, the laws dealing with the failure of an insurance company are less well known to the FHLBanks and, though similar, may vary somewhat from state to state.” Therefore, we maintain that our comparisons were fair and made no change to the report in response to this comment. In another comment, FHLBank-Chicago stated that the report implies that by loosening collateral requirements (some of which are dictated by law or regulation), more nondepository CDFIs would be eligible or willing to become FHLBank members. It noted that this was not necessarily the case, as a majority of nondepository CDFIs would not qualify for membership because of their lines of business (small business lending, microlending, and commercial lending) and because they have encumbered assets. We believe that these points are already adequately addressed in our report. 
Specifically, in the report we note that the types of eligible collateral are dictated by regulation. In addition, we state in the report that FHFA officials told us that some nondepository CDFIs may not be good candidates for FHLBank membership because the majority of nondepository CDFIs make nonhousing loans such as microloans, small business loans, and commercial loans. Furthermore, we note that several FHLBanks and nondepository CDFIs we interviewed told us that the requirement to pledge unencumbered assets was a challenge for nondepository CDFIs. We undertook these interviews to help understand the level of demand for FHLBank membership and obtain views on any challenges associated with obtaining membership and advances. Therefore, we made no change to the report in response to this comment. In its comments, FHLBank-Indianapolis stated that the report could do a better job of making it clear that (1) FHLBanks accept assets as collateral and develop haircut methodologies to comply with regulations and an expectation of no losses in the event of default and (2) pledging illiquid assets can increase the haircut. In response, we added language in the body of the report that reiterated language in our background section stating that FHLBanks are required by statute and FHFA regulations to develop and implement collateral standards and other policies to mitigate the risk of default on outstanding advances. We also added language to the report noting that the illiquidity of assets can affect haircuts. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and members, the Director of FHFA, the Council of the FHLBanks, and the 12 FHLBanks. This report will also be available at no charge on our website at http://www.gao.gov. 
Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. The objectives of this report were to discuss (1) how nondepository community development financial institutions (CDFI) differ from other members of the Federal Home Loan Bank (FHLBank) System, in particular depository members; (2) the membership and collateral requirements for nondepository CDFIs and challenges posed by these requirements; and (3) Federal Housing Finance Agency (FHFA) oversight of FHLBanks in relation to nondepository CDFIs and efforts by FHFA and FHLBanks to increase participation of nondepository CDFIs in the FHLBank System. To describe differences between nondepository CDFIs and other members of the FHLBank System, we reviewed relevant sections of the Housing and Economic Recovery Act of 2008 (HERA) and FHFA’s final rule on nondepository CDFI membership in the FHLBank System. In addition, we reviewed other relevant information from the FHLBanks and CDFI industry, such as reports by the Department of the Treasury’s Community Development Financial Institutions Fund (CDFI Fund) and the Opportunity Finance Network. We determined that these studies were methodologically sound and reliable for our purposes. To compare the asset sizes of different types of FHLBank members (nondepository CDFIs, depository institutions, and insurance companies), we analyzed available data on their assets from FHFA’s membership database as of December 31, 2014. For these institution types, we calculated the distribution of their assets (minimum assets, 25th percentile, median assets, 75th percentile, and maximum assets). 
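The asset-distribution summary described above (minimum, 25th percentile, median, 75th percentile, and maximum) can be sketched with Python's standard library. The asset figures below are invented for illustration; note that `statistics.quantiles` uses the "exclusive" quartile method by default, so other percentile conventions can give slightly different cut points.

```python
import statistics

# Hypothetical asset sizes (in millions of dollars) for one member
# type; the actual analysis used FHFA membership data as of
# December 31, 2014.
assets = [1, 2, 3, 4, 5, 6, 7]

quartiles = statistics.quantiles(assets, n=4)  # [25th, 50th, 75th]
summary = {
    "min": min(assets),
    "25th": quartiles[0],
    "median": statistics.median(assets),
    "75th": quartiles[2],
    "max": max(assets),
}
print(summary)  # {'min': 1, '25th': 2.0, 'median': 4, '75th': 6.0, 'max': 7}
```

The same five-number summary was computed separately for nondepository CDFIs, depository institutions, and insurance companies to compare their asset sizes.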
To assess the reliability of these data, we reviewed information about the system, interviewed knowledgeable officials, and analyzed the data for logical consistency and completeness. We found that these data were sufficiently reliable for the purpose of comparing the asset sizes of different types of FHLBank members. To address membership and collateral requirements, we reviewed relevant legislation and regulations, such as the Federal Home Loan Bank Act and FHFA’s final rule on nondepository CDFI membership. We also reviewed documentation—such as nondepository CDFI membership applications and available FHLBank guidance on assessing nondepository CDFIs for membership—from each of the FHLBanks to determine membership requirements and identify any differences among FHLBank policies. Specifically, one GAO analyst reviewed each FHLBank’s requirements for membership and identified differences. For example, in the three areas where FHLBanks had discretion, the analyst determined whether FHLBanks had set a minimum quantitative or qualitative threshold that an applicant needed to meet. A second analyst then verified the accuracy of this information. Nondepository CDFIs are subject to specific financial condition requirements. We requested and received financial data from the CDFI Fund but determined that the dataset did not contain relevant data needed to determine how many nondepository CDFIs could meet these financial condition requirements. To determine the number of nondepository CDFIs that were members from calendar years 2010 through 2014, we analyzed data from FHFA’s membership database as of December 31, 2014. To calculate the membership rate (the percentage of nondepository CDFIs in each district that were members), we used (1) data from FHFA’s membership database on the number of members as of December 31, 2014, and (2) data from the CDFI Fund on the total number of nondepository CDFIs as of December 31, 2014.
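The membership-rate calculation described above can be sketched as follows. The per-district counts are invented (chosen so the extremes match the 2.08 and 15.38 percent endpoints reported earlier); the real calculation combined FHFA membership data with CDFI Fund counts for all 12 districts.

```python
# Membership rate per district: nondepository CDFI members (from the
# FHFA membership database) divided by total nondepository CDFIs in
# the district (from the CDFI Fund). District counts are hypothetical.
members_by_district = {"Atlanta": 2, "Chicago": 3, "Topeka": 4}
cdfis_by_district = {"Atlanta": 96, "Chicago": 60, "Topeka": 26}

rates = {
    district: 100 * members_by_district[district] / total
    for district, total in cdfis_by_district.items()
}
for district, rate in sorted(rates.items()):
    print(f"{district}: {rate:.2f}%")
```

With these assumed counts, the rates range from 2.08 percent (Atlanta) to 15.38 percent (Topeka), mirroring the spread the report found across actual FHLBank districts.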
We assessed the reliability of data from both systems by reviewing any relevant documentation, interviewing knowledgeable officials, and analyzing the data for logical consistency and completeness. We determined that the data were sufficiently reliable for the purposes of assessing rates of membership for nondepository CDFIs. To determine each FHLBank’s requirements for obtaining advances and any differences among the FHLBanks, we reviewed relevant documentation such as each FHLBank’s collateral guidelines and product and credit policies. Using these documents, we identified the haircut (discount) for eligible collateral types for depository and nondepository institutions and other collateral requirements, such as the term of advances and collateral pledging methods. Our review of FHLBank documents showed that FHLBanks do not describe their collateral requirements uniformly. Although we took several steps that enabled us to present comparable categories of collateral across the FHLBanks, our analysis did not account for differences in the eligibility criteria for collateral that may be accepted, such as quality of collateral. As a result, the haircuts for different FHLBanks are not comparable. First, we excluded from our analysis the following types of collateral because they were only mentioned in some FHLBanks’ documents: U.S. Treasury separate trading of registered interest and principal securities, agency structured bonds, agency collateralized mortgage obligation accrual bonds, second mortgage-backed securities, student loan asset-backed securities, agricultural real estate loans, land loans, construction loans, student loans, mutual funds, and municipal or state and local securities. 
Second, because some FHLBanks identified specific haircuts for securities, such as those originating from the Federal Deposit Insurance Corporation, while other FHLBanks listed haircuts for a general category of agency securities, we grouped all the agency securities and provided the range of haircuts. We included in the agency securities category any securities issued or guaranteed by the U.S. government, including those originating from the Federal Deposit Insurance Corporation, National Credit Union Administration, Fannie Mae, Freddie Mac, Ginnie Mae, the Federal Home Loan Banks, and the Small Business Administration. Third, because some FHLBanks identified specific haircuts for specific government-guaranteed loan collateral while others did not, we grouped all government-guaranteed loan collateral together, including loans originating from the Farm Service Agency, Department of Agriculture, Small Business Administration, Federal Housing Administration, and Department of Veterans Affairs. Fourth, because haircuts can vary based on the quality of the collateral pledged, we provided the range of haircuts for each type of collateral accepted by each FHLBank. While we were able to review each FHLBank’s collateral policies and procedures, the confidentiality of such information limited what we could publicly disclose in our report. Specifically, because the collateral haircut policies of the FHLBanks generally are considered proprietary information, we were unable to attribute specific policies to individual FHLBanks. Where appropriate, we used randomly assigned numbers when discussing FHLBank collateral policies to prevent disclosure of FHLBank identities. Additionally, we obtained data from each FHLBank on the amount of advances secured by each nondepository CDFI member from October 2010 to September 2014 (the most recent data available at the time of our request). 
We assessed the reliability of these data by obtaining information from the six FHLBanks that provided advances to nondepository CDFIs on the system they used to store the data and the procedures in place for recording and ensuring the accuracy of the data. We also reviewed the data for logical consistency and completeness. We determined that the data were sufficiently reliable for reporting the amount of advances obtained by nondepository CDFIs. We also interviewed officials from the 12 FHLBanks, 3 trade groups, 10 nondepository CDFIs that were members of the FHLBanks, and 12 nondepository CDFIs that were not members to understand the level of demand for FHLBank membership and obtain views on any challenges associated with membership processes and obtaining advances. To develop the purposive, nonrandom sample of 10 nondepository FHLBank member CDFIs to interview, we selected a nondepository CDFI from each of the 10 FHLBanks that had a nondepository CDFI member as of March 31, 2014 (the most recent data available when we began our work and selected members to interview). In addition to geographic diversity, we sought variation in asset size, financial institution type, and FHLBank advance status. We also selected a purposive, nonrandom sample of 12 nondepository CDFIs that were not members of the FHLBank System, one from each of the 12 FHLBank districts. We selected these 12 from a sample of nondepository CDFIs that were identified during our meetings with member CDFIs and CDFI trade groups as being interested in FHLBank membership. In addition to geographic diversity, we sought variation in asset size when selecting nonmembers to interview. We interviewed officials from all 22 nondepository CDFIs by telephone, focusing on the background of the CDFI and its experience with and opinions of the FHLBank membership and advance processes. The views expressed by the nondepository CDFIs in our sample cannot be generalized to the entire population of nondepository CDFIs.
To evaluate FHFA’s oversight, we reviewed relevant laws, legislative history, and regulations (including its final rule on nondepository CDFI membership) to identify FHLBanks’ authority to expand membership to nondepository CDFIs and FHFA’s oversight authority. We also reviewed FHFA examination policies related to membership and collateral requirements for obtaining advances. To determine whether membership and advance practices were reviewed and whether there were any findings, we analyzed each FHLBank’s examination results for fiscal years 2010 through 2013 (the most recent examinations available at the time of our request). We interviewed FHFA and the 12 FHLBanks to further understand examination policies and practices for membership and advances and discuss any FHFA efforts to facilitate broader nondepository CDFI participation in the FHLBank System. We conducted this performance audit from May 2014 to April 2015, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The collateral requirements—specifically the pledge method for loan collateral and haircuts (discounts)—assessed on advances to nondepository community development financial institutions (CDFI) vary from those imposed on depository members. For example, all Federal Home Loan Banks (FHLBanks) require nondepository CDFIs to deliver collateral (a requirement that also would be applied to higher-risk depository institutions), and in some cases, nondepository CDFIs receive higher haircuts than depository institutions. For each FHLBank, we compare the pledge method and haircuts applied to depository institutions and nondepository CDFIs below (see table 5).
Most Federal Home Loan Banks (FHLBanks) do not have advance terms and borrowing limits specific to nondepository community development financial institutions (CDFI). However, four FHLBanks (Des Moines, New York, Pittsburgh, and San Francisco) do have specific advance terms and borrowing limits. We summarize the advance terms and borrowing limits for each FHLBank below (see table 6). In addition to the contact named above, Paige Smith (Assistant Director), Akiko Ohnuma (Analyst-in-Charge), Farah Angersola, Evelyn Calderon, Pamela Davidson, Kerri Eisenbach, Courtney LaFountain, John McGrail, Marc Molino, Barbara Roesmann, Jim Vitarello, and Weifei Zheng made key contributions to this report.
The Housing and Economic Recovery Act of 2008 (HERA) made nondepository CDFIs eligible for membership in the FHLBank System. The System includes 12 regional FHLBanks that make loans, known as advances, to their members at favorable rates. GAO was asked to review the FHLBanks' implementation of HERA provisions relating to nondepository CDFIs. Among other things, this report discusses (1) challenges posed by membership and collateral requirements for nondepository CDFIs, and (2) FHFA and FHLBank efforts to facilitate broader nondepository CDFI participation in the System. GAO analyzed data on membership rates as of December 2014 and advances obtained as of September 2014; reviewed requirements for gaining membership and obtaining advances; and interviewed FHLBank and FHFA officials and a sample of nondepository CDFIs based on selected criteria, including geography and asset size. Specifically, GAO interviewed 10 nondepository CDFIs that were members (one from each FHLBank district with a nondepository CDFI member when GAO began work) and 12 that were not members (one from each of the 12 districts). GAO makes no recommendations in this report. GAO provided a draft of this report to FHFA and the 12 FHLBanks for comment. FHFA and four FHLBanks provided technical comments that were incorporated into the report as appropriate. Collateral requirements rather than membership requirements discouraged some nondepository community development financial institutions (CDFI)—loan or venture capital funds—from seeking membership in the Federal Home Loan Bank (FHLBank) System. CDFIs are financial institutions that provide credit and financial services to underserved communities. Less than 6 percent of nondepository CDFIs (30 of 522) were members of the System as of December 2014 (see figure). 
Requirements for membership (such as stock purchase amounts) can vary where regulation gives FHLBanks discretion, but nondepository CDFIs GAO interviewed generally stated that these requirements did not present a challenge. In addition, most FHLBanks imposed collateral requirements on nondepository CDFIs—such as haircuts (discounts on the value of collateral)—comparable with those for depository members categorized as higher risk. (This was sometimes also the case for other nondepository members such as insurance companies.) FHLBank officials stated that nondepository CDFIs pose different risks than depository members (for example, nondepository CDFIs are not supervised by a prudential federal or state regulator as are other FHLBank members). To address these risks, they imposed more restrictive requirements. Some of the nondepository CDFIs GAO interviewed cited limited availability of eligible collateral and steep haircuts as challenges for obtaining advances and therefore a disincentive to seeking membership. Less than half of the nondepository CDFIs that were members as of September 2014 had borrowed from the FHLBanks; the cumulative advances from October 2010 to September 2014 totaled about $307 million (less than 1 percent of the total advances outstanding as of December 2014). Two FHLBanks made the majority of the advances. The Federal Housing Finance Agency (FHFA), which oversees the System, and the FHLBanks have facilitated efforts to broaden nondepository CDFI participation in the System by educating nondepository CDFIs about membership and promoting it to them. For example, FHFA officials told us that they encouraged the FHLBanks to hold a conference to discuss nondepository CDFI membership. Officials from 10 FHLBanks also stated that they had solicited applications from CDFIs. In late 2014, several FHLBanks amended stock purchase and collateral requirements to better accommodate nondepository CDFI membership and access to advances.
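The effect of a haircut on borrowing capacity can be shown with simple arithmetic. The sketch below is illustrative only: the dollar amounts and haircut percentages are hypothetical and are not drawn from any FHLBank's actual collateral schedule.

```python
# Illustrative only: how a collateral haircut limits the advance a member
# can obtain. The values below are hypothetical, not actual FHLBank terms.

def lendable_value(collateral_value: int, haircut_pct: int) -> int:
    """Maximum advance supported by pledged collateral after the haircut."""
    if not 0 <= haircut_pct < 100:
        raise ValueError("haircut_pct must be between 0 and 99")
    return collateral_value * (100 - haircut_pct) // 100

# $1 million of pledged loans at a 40 percent haircut supports a $600,000
# advance; the same pledge at a 20 percent haircut supports $800,000.
print(lendable_value(1_000_000, 40))  # 600000
print(lendable_value(1_000_000, 20))  # 800000
```

A steeper haircut thus forces a member to pledge proportionally more collateral for the same advance, which is consistent with nondepository CDFIs citing haircuts as a disincentive to borrowing.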
DOD annually spends about $15 billion for depot maintenance work that includes repairing, overhauling, modifying, and upgrading aircraft, ships, tracked and wheeled vehicles, and other systems and equipment. It also includes limited manufacture of parts, technical support, modifications, testing, and reclamation as well as software maintenance. DOD estimates that about 60 percent of its expenditures for depot maintenance work is performed in its 24 maintenance depots and the remaining 40 percent in the private sector. We have reported that the public-private mix is closer to 50-50 when it includes interim contractor support services and public depot purchases of parts, supplies, and maintenance services from the private sector. Historically public depots have served to provide a ready and controlled source of repair and maintenance. Reductions in military force structure and related weapon system procurement, changes in military operational requirements due to the end of the Cold War, and increased reliability, maintainability, and durability of military systems have decreased the need for depot-level maintenance support. Efforts to downsize and reshape DOD’s maintenance system have addressed depot efficiency and the workload mix between the public and private sectors. A key issue currently being debated within Congress and DOD is the extent to which the private sector should be relied on for meeting DOD’s requirements for depot-level maintenance. Congress, in the National Defense Authorization Act for Fiscal Year 1994, established the Commission on Roles and Missions of the Armed Forces to (1) review the appropriateness of the current allocations of roles, missions, and functions among the armed forces; (2) evaluate and report on alternate allocations; and (3) make recommendations for changes in the current definition and distribution of those roles, missions, and functions. 
The Commission’s May 24, 1995, report, Directions for Defense, identified a number of commercial activities performed by DOD that could be performed by the private sector. Depot-level maintenance was one of these activities. The Commission concluded that privatizing such commercial activities through meaningful competition was the primary path to more efficient support. It noted that such competition typically lowers costs by 20 percent. Based on its conclusions, the Commission recommended that DOD transition to a depot maintenance system relying on the private sector by (1) directing support of all new systems to private contractors, (2) establishing a time-phased plan to privatize essentially all existing depot-level maintenance, and (3) creating an office under the Assistant Secretary of Defense (Economic Security) to oversee privatization of depots. In his August 24, 1995, letter to Congress forwarding the Commission report, the Secretary of Defense agreed with the Commission’s recommendations but expressed a need for DOD to retain a limited organic core capability to meet essential wartime surge demands, promote competition, and sustain institutional expertise. DOD’s January 1996 report, Plan for Increasing Depot Maintenance Privatization and Outsourcing, provides for substantially increasing reliance on the private sector for depot maintenance. The CORM, in support of its depot privatization savings assumption, cites reported savings from public-private competitions under OMB Circular A-76. These competitions were for various non-depot maintenance commercial activities, in which there was generally a highly competitive private market. Projected savings were greater for competitions having larger numbers of private sector competitors. The public sector won about half of these competitions. Our analysis indicates that private sector competition for depot maintenance may be much less than found in the A-76 activities. 
The data also suggests that little or no savings would result from privatizing depot maintenance in the absence of competition. The CORM report cites two studies supporting its savings assumption—one by OMB and one by the Center for Naval Analysis (CNA). Both reports are evaluations of numerous public-private competitions for commercial activities under OMB Circular A-76 guidelines. The commercial activities included base operating support functions such as family housing, real property maintenance, civilian personnel administration, food service, security, and other support services. These activities are characterized by highly competitive markets with low-skill labor, little capital investment, and simple, routine and repetitive tasks that can readily be identified in a contract statement-of-work. None of the competitions studied were for depot maintenance, which generally has dissimilar characteristics. Both reports show that substantial savings occurred when competition was introduced into the noncompetitive environment. However, the reported savings are based on the difference between the precompetition cost and the price proposed and do not reflect subsequent contract cost overruns, modifications, or add-ons. Based on a limited number of audits, projected A-76 privatization savings were often reduced or eliminated as a result of subsequent contract cost growth. The OMB study of commercial activities competed from 1981 to 1988 cited average savings of 30 percent from original government cost with an average 20-percent savings when the government won the competition and 35 percent when the private sector won. About 40 percent of competitions were won by government, 60 percent by the private sector. The CNA study cites a previous CNA review of the Navy’s Commercial Activities Program in which both the public and private sectors each won about half the roughly 1,000 competitions reviewed. 
The offers where the public sector won were roughly 20 percent lower than the precompetition cost baseline, whereas winning offers from private firms averaged 40 percent below earlier costs. The report noted that larger private sector savings occurred when activities were performed predominately by military personnel. Nearly all depot maintenance work is performed by DOD civilians. In 29 percent of the cost studies reviewed, there were no cost savings. These studies did not specifically address outsourcing to the private sector when the public sector did not participate in the competition. Since the government’s costs were lower in about half the cases, these savings would not have been realized without public competition. Further, in limited situations where audits have been conducted, projected savings have not been verified. For example, a 1989 Army Audit Agency report summarizing the results of prior commercial activities reviews stated that for 10 functions converted to contractor performance, only $9.9 million of $22 million in projected savings were realized. Performance work statement deficiencies, mandatory wage rate increases received by contractor personnel, and higher-than-estimated contract administration costs accounted for about 90 percent of the reduction in estimated savings. Our 1990 report on OMB Circular A-76 savings projections found (1) costs of conducting the competitions were not considered in estimating savings, (2) savings figures were projections and were not based on actual experience, (3) DOD lacked information regarding modifications made after the cost study, (4) DOD’s A-76 database contained inaccuracies and incomplete savings data, and (5) an error in design resulted in a computer program that miscalculated program savings. 
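The savings figures cited in the OMB and CNA studies are baseline comparisons: the winning offer measured against the precompetition cost. A minimal sketch of that calculation follows; the dollar figures are hypothetical, and, as noted above, this kind of projection ignores later contract cost growth.

```python
# How the A-76 studies' savings percentages are computed: the winning
# offer is compared against the precompetition cost baseline. As the
# report notes, this projection does not reflect subsequent overruns,
# modifications, or add-ons. Dollar figures below are hypothetical.

def percent_savings(precompetition_cost: float, winning_offer: float) -> float:
    """Projected savings as a percentage of the precompetition baseline."""
    if precompetition_cost <= 0:
        raise ValueError("precompetition cost must be positive")
    return 100 * (precompetition_cost - winning_offer) / precompetition_cost

# A $10 million baseline won with an $8 million offer projects 20 percent
# savings; a $7 million winning offer projects 30 percent.
print(percent_savings(10_000_000, 8_000_000))
print(percent_savings(10_000_000, 7_000_000))
```

Because the baseline is the government's own precompetition cost, roughly half of the reported savings in these studies depended on the public depot competing and winning; they would not have appeared under noncompetitive privatization.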
A July 1995 Congressional Budget Office report entitled Public and Private Roles in Maintaining Military Equipment at the Depot Level stated that contracting out was most likely to outperform public depots if competition existed among private firms. The report noted, however, that without competition, the private sector’s ability to provide service for the least cost could be reduced and the risk of poor-quality or nonresponsive support could increase. The CORM report also states that savings occur when meaningful competition is obtained in a previously sole-source area and public-private competitions are preferable to noncompetitive awards to the private sector. The CORM recognized that privatizing essentially all depot maintenance would require a time phased approach. Under current conditions, privatizing essentially all depot workloads (1) would not likely achieve expected savings and could prove more costly, (2) could adversely impact readiness, and (3) would be difficult if not impossible under existing laws. These conditions are discussed below. Limited competition and excess depot capacity could negate expected savings. The CORM assumed depot workload privatization savings would result from private sector competition. We found that much of the depot work contracted to the private sector is awarded noncompetitively and that obtaining competition for remaining non-core depot workloads may be difficult and costly. In addition, privatizing depot workloads without reducing excess depot capacity could significantly increase the cost of work performed by the depots. The CORM’s recommendation to privatize essentially all depot maintenance assumed that meaningful competition would be obtained for most of the work. The Commission generally defined meaningful competition as that generated by a competitive market, including significant numbers of both buyers and sellers. 
Our review of selected DOD depot maintenance contracts found that a large portion of the awards were not made under these conditions. To determine the extent of competition in awarding depot maintenance contracts, we reviewed 240 such contracts totaling $4.3 billion at 12 DOD buying activities. We selected high-dollar value contracts from a total of 8,452 open 1995 depot-level maintenance contracts that were valued at $7.3 billion. As shown in table 1, 182 of the 240 contracts—76 percent—were awarded on a sole-source basis. These contracts accounted for 45 percent of the total dollar value. In nine other contracts accounting for about 4 percent of the total, competition was limited to only two offerors. The remaining 49 contracts were classified as awarded through full and open competition. These awards accounted for 51 percent of the total dollar value. However, some had only limited responses. For example, the number of offerors was 2 in each of 5 contracts totaling $525.8 million—24 percent of the total award value for the 49 competed contracts. Original equipment manufacturers were awarded 158 of the 182 noncompetitive contracts. The remaining 24 were awarded on a sole-source basis for reasons such as peculiar requirements, national emergencies, and international agreements. Where competition was limited, the OEMs won eight of the nine workloads. The OEMs also won 9 of the 49 contracts that DOD classified as awarded pursuant to full and open competition. Table 2 shows the number of offers received for the contracts classified as awarded pursuant to full and open competition. The buying activities awarded the maintenance contracts to 71 different contractors but 13 of these contractors had received workloads valued at $3.3 billion—76 percent of the total amount awarded. Table 3 shows the distribution of the workload to the 71 contractors. 
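The award shares reported above can be checked directly from the contract counts (the figures are the ones stated in the text for table 1):

```python
# Arithmetic check of the award shares reported for the 240 contracts
# reviewed: 182 sole-source, 9 limited to two offerors, 49 classified as
# full and open competition.
sole_source, limited_competition, full_and_open = 182, 9, 49
total = sole_source + limited_competition + full_and_open
assert total == 240

share = round(100 * sole_source / total)
print(f"{share}% of contracts were awarded sole-source")  # 76%
```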
Although DOD plans to privatize non-core workloads currently in the public depots, it has not assessed the extent to which such workloads will attract private sector competition. Factors that resulted in noncompetitive awards for much of the depot work currently performed by the private sector may apply to much of the work currently performed by public depots. The types of existing public workloads where private sector competition may be limited include: (1) workloads where data rights necessary for competition have not been acquired, (2) small workloads that do not justify large private sector capital investment costs, (3) workloads for older and/or highly specialized systems, (4) workloads with erratic requirements where DOD cannot guarantee a stable workload, and (5) workloads that would be costly to move from one source of repair to another. These factors could further limit cost-effective privatization of existing workloads. For example, our review of 95 non-ship depot maintenance public-private competitions found that 22 did not receive any private sector offers and 33 had only 1. DOD may have to acquire the technical data rights to compete many of its weapon systems. The most-often-cited justification for the 182 sole-source awards was that competition was not possible because DOD did not own the technical data rights for the items to be repaired. Command officials stated that DOD will have to make costly investments in order to promote full and open competition for many of its weapon systems. For example, in its justification for less than full and open competition for the repair and testing of the AN/URQ-33 Joint Tactical Information Distribution System, the Warner Robins Air Logistics Center noted that the technical data was not procured from the original equipment manufacturer and estimated that $1 million and a minimum of 6 months would be required to start up a new contractor. 
Similarly, the Army Missile Command’s justification for a sole-source maintenance and repair award to the original equipment manufacturer for the AH-58D Kiowa Warrior helicopter noted that the program manager had not procured the technical data package due to funding and cost restraints. The command estimated that technical data suitable for full and open competition would cost about $18 million. The difficulty of accurately describing or quantifying depot maintenance requirements may impact privatization savings. Under fixed-price contracts, more of the risks are incurred by the contractor. If costs are greater than expected, then the contractor incurs the loss. The government incurs more risk under a cost-reimbursable contract. Under such contracts, the government generally reimburses the contractor for the costs incurred. Accordingly, the contractor’s incentive to maximize efficiency and minimize cost is generally greater under a fixed-price contract. Cost-reimbursable contracts are often used when contract requirements cannot be adequately described and/or costs accurately estimated. Such contracts are used for many depot maintenance workloads. Our analysis of the 240 contracts showed that the commands used fixed-price contracts in 151 (or 63 percent) of the 240 contracts, cost-reimbursable type contracts in 61 contracts, and a combination of the 2 types in 28 contracts. Table 4 shows the types of contracts the commands were using to acquire depot-level maintenance. The buying activities said they used fixed pricing in the 151 contracts because adequate repair histories were available to establish a price range for the maintenance work. In using the 61 cost-reimbursable contracts, DOD officials stated that the maintenance requirements could not be predetermined for the contract period or that no adequate repair history existed to establish reasonable price ranges. 
Non-core workloads that may be good candidates for privatization—that is, where a competitive private market exists—may not be cost-effective to privatize if privatization results in increased excess capacity and other inefficiencies in the public depots. Given the requirement to preserve public depot capabilities, DOD must manage depot maintenance workloads to assure efficient operations. In some cases where privatizing a particular workload could produce some level of savings, the savings could be more than offset by creating inefficiencies in the remaining public depots. For example, the Air Force’s Oklahoma City Air Logistics Center currently has about 43 percent excess capacity. Had DOD decided to reallocate the engine workload from the closing San Antonio Center to Oklahoma City instead of privatizing the workload in place, the labor hour rate for all of the Oklahoma City Center’s work would be reduced by $10 an hour. Such a reduction could save about $70 million a year. Our analysis of depot maintenance work currently contracted with the private sector found that contractors, for the most part, were responsive to DOD’s needs in terms of meeting contractual requirements for delivery and performance. However, service officials stated that historically, the flexibility and responsiveness of DOD depots had significantly influenced decisions to select a DOD depot rather than a contractor for most critical military systems. The military services have considered the readiness and sustainability risks of privatizing existing depot workloads and determined that the risks for privatizing most workloads were too high. In the past, these assessments provided the primary justification for maintaining a large organic depot maintenance core capability. DOD is implementing a new depot maintenance policy that is likely to significantly increase the depot maintenance workloads performed by the private sector. 
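The Oklahoma City savings figure cited above follows from simple rate arithmetic. The annual direct labor hours used below are our assumption, chosen only to be consistent with the report's $10-per-hour and roughly $70 million figures; the report does not state the center's actual hours.

```python
# Back-of-the-envelope check on the Oklahoma City consolidation savings.
# ASSUMPTION: the annual labor hours are illustrative, not from the report.
rate_reduction_per_hour = 10              # dollars per labor hour (from report)
assumed_annual_labor_hours = 7_000_000    # hypothetical annual workload

annual_savings = rate_reduction_per_hour * assumed_annual_labor_hours
print(f"${annual_savings / 1_000_000:.0f} million per year")  # $70 million per year
```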
Based on the policy preference for contractor maintenance, DOD is now conducting risk assessments on workloads previously designated as core. In many cases, the services are redesignating mission essential core workloads as non-core. DOD’s March 1996 depot workload report to Congress, which reflects its latest “core” workload determinations, projects that the fiscal year 1997 depot workload mix of about 60 percent public and 40 percent private will shift to about a 50/50 mix by fiscal year 2001. However, these projections were not developed using DOD’s new risk assessment process. We recently reported that DOD’s ongoing risk assessment process will likely result in an even greater shift of depot maintenance workload to the private sector. As required by the fiscal year 1996 Defense Authorization Act, we analyzed and reported on DOD’s March 1996 depot workload report. We noted that DOD’s risk assessment process is based to a large extent on subjective judgments. Further, DOD’s methodology for assessing workload privatization risks does not include guidance or criteria for the services to use in making such assessments. As a result, the services’ individual risk assessments may not be consistent within the services or uniform among the services. The CORM report stated that DOD core depot requirements exceed the real needs of the national security strategy and that with proper oversight private contractors could provide essentially all of the depot-level maintenance services now conducted in government facilities. To evaluate contractor support and responsiveness for the workloads currently in the private sector, we analyzed contract modifications to 195 of the 240 contracts reviewed. We found indications of contractor performance problems in only four of these contracts. These involved extensions to the period of performance due to the contractors not meeting the required delivery dates. 
However, DOD materiel managers noted that DOD depots provide greater flexibility than contractors and can more quickly respond to nonprogrammed, quick-turnaround requirements. Further, DOD contracting personnel stated that contract files may or may not provide a reasonable assessment of readiness impacts. For example, these files would provide no indication of the impacts of cost growth on DOD’s ability to procure required depot maintenance services. In recommending that essentially all depot maintenance work be privatized, the Commission recognized that privatization could be limited or precluded by a collection of laws, regulations, and historic practices developed to protect the government’s depot maintenance capability. Among the barriers cited were 10 U.S.C. 2469, which requires public-private competitions before any workload over $3 million can be moved to the private sector from a public depot, and 10 U.S.C. 2466, which sets the amount of depot-level maintenance workload that must be performed in public depots to not less than 60 percent, that is, the 60/40 rule. Since the concept of core requirements centers around the determination of acceptable levels of risks, the size and extent of core capability and requirements can become somewhat subjective. Accordingly, the amount of depot work subject to privatization may be driven in part by the 60/40 rule. DOD is seeking repeal of these and other laws in order to fully implement its depot privatization plans. For example, in May 1996, DOD proposed a provision that would allow the Secretary of Defense to acquire by contract from the private sector or any nonfederal government entities those commercial or industrial type supplies and services necessary or beneficial to the accomplishment of DOD’s authorized functions, notwithstanding any provision of title 10 or any statute authorizing appropriation for or making DOD appropriations. 
This proposal was not supported by the DOD authorization committees during deliberations over the fiscal year 1997 DOD authorization bill. The CORM recognized that there are instances where establishing competition within the private sector would be too costly. In these cases, the Commission stated that public-private competition, however imperfect, was generally preferable to noncompetitive contracts. The CORM assumed, however, that there were only a few cases in which such competitions would be required. We found that requirements for and benefits of such competitions may be greater than assumed. As noted earlier in this report, most depot workloads currently contracted to the private sector are noncompetitive and obtaining private sector competition for those workloads currently in the public depots could prove difficult and costly. In examining DOD’s experience with public-private competition for depot-level maintenance, we found that the competitions generally resulted in savings, but precisely quantifying the savings is difficult because many other variables affect maintenance costs. We also found that some workloads are not well suited for competing—either private-private or public-private. DOD’s experience with public-private competition for depot-level maintenance began in 1985 when Congress authorized the Navy to compete shipyard workloads. In 1991, with DOD’s push to promote efficiency in depot maintenance operations and the Navy’s assertion that competition encouraged public shipyards to become more efficient, Congress permitted the Air Force and the Army to conduct public-private competitions for depot-level maintenance workloads. DOD had planned to use the program for allocating maintenance workloads to the most cost-efficient providers and to save $1.7 billion as part of its strategy to achieve an overall $6.3 billion reduction in depot maintenance costs from fiscal years 1991 to 1997. 
However, DOD suspended the program in May 1994 and reported to Congress in February 1995 that competition could not be reinstituted until its cost accounting and data systems permitted actual cost accounting for specific workloads. During our review of the Navy’s public-private competition program for aviation maintenance, Navy officials stated that such competitions had been beneficial to the government and resulted in maintenance savings for the involved workloads. They stated that competitions for workloads that had previously been assigned to Navy depots resulted in the Navy depots streamlining overhead, improving work processes, reducing labor and material requirements, and instituting other cost-saving initiatives in order to submit the lowest bids and avoid job losses. For example, the public-private competition for F-14 aircraft airframe overhauls—a competition won by a Navy depot—resulted in the depot reducing the average cost per overhaul from $1.69 million the year preceding the competition to $1.29 million, in inflation adjusted dollars, the year following the competition, a 24-percent decrease. A number of factors have limited DOD public-private competitions. They include: (1) private sector concerns regarding the fairness of competitions; (2) the time and cost of contract solicitation, award, and administration; (3) declining depot requirements and the inability to guarantee stable workloads; (4) lack of government-owned technical data packages; and (5) limited sources of repair, and low-dollar value workloads that generate little or no interest from the private sector. An April 1994 DOD task force report on depot-level activities identified several concerns with continuing public-private competitions. For example, efficiencies achieved would not be as likely in the future because the costs of conducting competitions were high and the payoffs would be progressively smaller as workloads were recompeted. 
Critics of public-private competitions charge that such competitions are inherently unfair because DOD’s accounting and financial management systems do not capture and reflect all the costs. In February 1995, DOD reported to the House and Senate Appropriations Committees that its automated financial management systems and databases did not provide an accurate basis for determining the actual cost of specific competition workloads. To remedy this situation, DOD was developing policies, procedures, and automated processes that would permit actual cost accounting for specific workloads accomplished in public depots. Our January 1996 report to the Ranking Minority Member, Subcommittee on Defense, Senate Committee on Appropriations, summarized many actions DOD had taken to improve public-private competitions. Among these actions were (1) the development of a cost comparability handbook that, among other things, identified adjustments that should be made to public depots’ offers as a result of differences in the military services’ accounting systems and (2) having the Defense Contract Audit Agency certify that successful offers included comparable estimates of all direct and indirect costs. We noted that the incentive to continue with some of the initiatives was lost after DOD terminated public-private competitions. We also identified additional actions that DOD could take to further improve competitions, for example, provide the Defense Contract Audit Agency the technical support needed to properly evaluate depot offers and to conduct an incurred cost audit to assess whether depots are able to perform work as offered. Our report also summarized the Navy’s suggestions for addressing concerns regarding public depot cost overruns and administration costs resulting from competitions. 
These included establishing fixed prices for the competed work based on offer amounts, executing the work like normal workload using existing control systems with no separate contract administration, and assessing penalties for cost overruns to make the depot less competitive in future competitions. DOD officials declined to comment on this report. They noted that the draft report we provided for comment included no recommendations and did not require a response. Further, the report addresses assumptions of the Commission on Roles and Missions of the Armed Forces, a group established by Congress that no longer exists. While the Commission on Roles and Missions was not a DOD entity, in forwarding the Commission’s report to Congress, the Secretary of Defense stated that DOD agreed with the Commission’s recommendation to outsource a significant portion of DOD’s depot maintenance work. Further, DOD’s January 1996 report on outsourcing depot maintenance cited the Commission’s savings projections as its rationale for its depot privatization initiative. Appendix I sets forth our scope and methodology. We will continue evaluating DOD’s actions on its plans to privatize depot-level maintenance to complete our response to issues raised by the National Security Committee. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Director of the Office of Management and Budget; and interested congressional committees. Copies will be made available to others upon request. If you or your staff have any questions concerning this report, please contact me on (202) 512-8412. Major contributors to this report are listed in appendix II. The Chairman of the House Committee on National Security asked us to comment on the May 1995 report by the Commission on Roles and Missions of the Armed Forces that recommended the Department of Defense (DOD) privatize its depot-level maintenance activities. 
The Chairman requested that we review a number of issues related to the Commission’s report; this report provides information on the Commission’s assumptions that privatization could reduce maintenance costs by 20 percent and the potential impact of privatization on military readiness and sustainability. It also identifies some areas DOD may need to improve if it moves toward total privatization of depot-level maintenance. To evaluate the Commission’s assumptions about cost savings from privatization and the impact that it might have on readiness and sustainability, we reviewed its report, discussed the assumptions with former staff members of the Commission, and reviewed supporting data that the Commission had maintained. We made extensive use of our prior work and the work of others on issues related to DOD’s depot-level maintenance operations to determine how consistent the Commission’s work was with prior findings, conclusions, and recommendations. In addition, we analyzed selected depot-level contracts to evaluate (1) the extent to which DOD used competitive procedures in awarding the contracts and (2) how well contractor performance responded to DOD’s depot-level maintenance needs. We performed our review at the following: Four Army buying activities: the Aviation and Troop Support Command (ATCOM), St. Louis, Missouri; the Communications-Electronics Command (CECOM), Fort Monmouth, New Jersey; the Missile Command (MICOM), Redstone Arsenal, Alabama; and the Tank-Automotive and Armaments Command (TACOM), Warren, Michigan. Five Air Force buying activities: Ogden Air Logistics Center (OO-ALC), Hill Air Force Base, Utah; Oklahoma City Air Logistics Center (OC-ALC), Tinker Air Force Base, Oklahoma; Sacramento Air Logistics Center (SM-ALC), McClellan Air Force Base, California; San Antonio Air Logistics Center (SA-ALC), Kelly Air Force Base, Texas; and Warner Robins Air Logistics Center (WR-ALC), Robins Air Force Base, Georgia.
Three Navy buying activities: the Naval Inventory Control Point (NICP), Mechanicsburg, Pennsylvania; Naval Inventory Control Point (NICP), Philadelphia, Pennsylvania; and Naval Air Systems Command (NAVAIR), Arlington, Virginia. DOD maintains a database on all contract awards that contains data on awards made by competition and awards that are made by other than competition. We did not use this database to evaluate DOD’s use of competitive procedures for depot-level maintenance because a test at one Army command showed coding errors and difficulty in identifying maintenance contracts. Therefore, we asked each buying activity to identify all depot-level maintenance contracts that were open at a given point during 1995 for use in evaluating the extent they had used competitive procedures and contractor performance. Each buying activity provided a list of contracts from their database. We did not attempt to verify the accuracy of the buying activities’ databases. The data contained a large number of small contracts. For timeliness, we chose to cover dollar value rather than numbers of contracts. We arranged the dollar value of the contracts from highest to lowest and selected high-dollar value contracts that would provide us at least 50-percent coverage of the total dollar value awarded by each service. Table I.1 presents the universe of contracts identified and our sample size. At the buying activities we visited, we reviewed the files of selected contracts to identify cost, schedule, and performance issues. We also discussed the contracting process and contractor performance with contracting officers, negotiators, and specialists. To identify contract types and contracting methods suitable for depot-level maintenance, we reviewed the Federal Acquisition Regulation and DOD supplements and talked to personnel from the Defense Contract Audit Agency and Defense Contract Management Command. 
We conducted our review between February 1995 and April 1996 in accordance with generally accepted government auditing standards. Major contributors to this report (listed in appendix II) were Julia C. Denman, Karl J. Gustafson, M. Glenn Knoepfle, Frank T. Lawson, John M. Ortiz, Enemencio Sanchez, Jacqueline E. Snead, Edward A. Waytel, James F. Wiggins, Bobby R. Worrell, and Cleofas Zapata, Jr.
Pursuant to a congressional request, GAO examined the Commission on Roles and Missions' (CORM) privatization assumptions to determine whether privatization would adversely affect military readiness and sustainability. GAO found that: (1) the CORM's depot privatization savings and readiness assumptions are based on conditions that do not currently exist for many depot workloads; (2) privatizing essentially all depot maintenance under current conditions would not likely achieve expected savings and, according to the military services, would result in unacceptable readiness and sustainability risks; (3) the extent to which DOD's long-term privatization plans and market forces will effectively create more favorable conditions for outsourcing is uncertain; (4) the CORM assumed a highly competitive and capable private market exists or would develop for most depot workloads; (5) however, GAO found that most of the depot workloads contracted to the private sector are awarded noncompetitively, mostly to the original equipment manufacturer; (6) additionally, a number of factors would likely limit private sector competition for many workloads currently in the public depots; (7) the CORM data does not support its depot privatization savings assumption; (8) the CORM's assumption is based primarily on reported savings from public-private competitions for commercial activities under Office of Management and Budget (OMB) Circular A-76, but these commercial activities were generally dissimilar to depot maintenance activities because they involved relatively simple, routine, and repetitive tasks that did not generally require large capital investment or highly skilled and trained personnel; (9) GAO's analysis of depot maintenance workloads currently contracted to the private sector found, for the most part, that the contractors were responsive to contract requirements for delivery and performance; (10) however, DOD officials noted that DOD depots provide greater flexibility than 
contractors and can more quickly respond to nonprogrammed, quick-turnaround requirements; (11) the military services periodically assess the readiness and sustainability risks of privatizing depot workloads, and if the risks are determined to be too high, the workloads are retained in the public depots; (12) the CORM assumed that public-private competitions would only be used in the absence of private sector competition and would be limited to only a few cases; (13) public-private depot maintenance competitions have resulted in savings and benefits and can provide a cost-effective way of making depot workload allocation decisions for certain workloads; and (14) the beneficial use of such competitions could have significantly more applicability than the Commission assumed.
The past decade has seen an increasing emphasis in the United States on the role of state and local entities in the fight against violent extremism. More recently, in August 2011, the White House issued the nation’s first CVE strategy, Empowering Local Partners to Prevent Violent Extremism in the United States, and in December 2011, it issued an implementation plan for the CVE national strategy. The strategy leverages existing programs and structures in order to counter radicalization that leads to violence, rather than creating new programs and funding streams. The strategy highlights three major areas of activity: (1) enhancing engagement with and support to local communities that violent extremists may target, (2) building government and law enforcement expertise for preventing violent extremism, and (3) countering violent extremist propaganda while promoting U.S. ideals. The strategy also identifies the provision of training to federal, state, and local entities as a major component of the national CVE approach, and the implementation plan notes that the federal government will enhance CVE-related training offered to federal, state, and local agencies. The implementation plan states that this is necessary because of “a small number of instances of federally-sponsored or funded CVE and counterterrorism training that used offensive and inaccurate information.” Accordingly, one of the objectives of the implementation plan is to improve the development and use of standardized training with rigorous curricula that imparts information about violent extremism, improves cultural competency, and conveys best practices and lessons for effective community engagement and partnerships.
The implementation plan designates federal departments, agencies, and components as leaders and partners regarding certain aspects of CVE, and DHS and DOJ have principal roles in implementing the CVE national strategy. Table 1 identifies the primary federal departments and agencies with CVE-related responsibilities and their respective missions. Other agencies involved in implementing the strategy include the Departments of the Treasury, Education, and Commerce, among others. The CVE national strategy implementation plan assigns both DHS and DOJ responsibility for supporting national CVE-related training efforts and emphasizes the importance of collaboration among federal, state, local, and tribal government agencies in order to achieve the goals of the strategy. In order for DHS and DOJ components to determine the extent to which they are fulfilling departmental CVE-related responsibilities, they must be able to identify which of the training they conduct is CVE-related, which requires that they understand what constitutes CVE-related training. The DHS Counterterrorism Working Group, the entity responsible for leading DHS’s CVE efforts under the direction of the Principal Deputy Counterterrorism Coordinator, has identified topics to be addressed in CVE-related training that DHS develops, provides, or funds. The group has also undertaken efforts to communicate these topics to other DHS components, state and local law enforcement officials, and grant recipients who may allocate DHS funding for CVE-related training within their states. DHS’s communication efforts have helped DHS components and state and local partners to better understand what constitutes CVE-related training, but some DHS grantees who responded to our survey reported that they were not clear as to what topics should be addressed in CVE-related training, and most indicated that it would be helpful for DHS to provide additional information or guidance on topics covered under CVE. 
DHS plans to undertake additional communication efforts with these grantees to educate them about the principal topics CVE-related training addresses. In contrast, DOJ has not identified topics it considers as CVE-related training. Consequently, DOJ is unable to demonstrate how it is meeting its CVE responsibilities under the CVE national strategy. In February 2010, the Secretary of Homeland Security tasked the Homeland Security Advisory Council (HSAC) with developing recommendations regarding how DHS can better support community-based efforts to combat violent extremism domestically, focusing on the issues of training, information sharing, and the adoption of community-oriented law enforcement approaches. The council established the HSAC CVE Working Group to carry out this tasking, and the working group issued its findings in summer 2010. The HSAC CVE Working Group determined that CVE-related training should focus on (1) improving the capacity of law enforcement and other government personnel to communicate and collaborate with individuals from diverse religious, ethnic, and racial communities, and (2) promoting understanding of the threats facing a local community and recognizing behavior and indicators associated with those threats. The DHS Counterterrorism Working Group subsequently determined that, in order to support implementation of the CVE national strategy and the HSAC CVE Working Group findings, CVE-related training should address the following: violent extremism (e.g., the threat it poses), cultural demystification (e.g., education on culture and religion), community partnerships (e.g., how to build them), and community policing efforts (e.g., how to apply community policing efforts to CVE). According to the DHS Principal Deputy Counterterrorism Coordinator, identifying these topics helped to provide a logical structure for DHS’s CVE-related training efforts.
The Counterterrorism Working Group has undertaken efforts to communicate these topics to DHS components that contribute to DHS CVE-related training. Toward the beginning of our review, officials from DHS components that contributed to training in fiscal years 2010 and 2011 that was CVE-related according to our framework cited a lack of clarity regarding what topics CVE-related training is to address; however, by August 2012, the components reported that the topics were clear, a fact that they attributed to these communication efforts. The Counterterrorism Working Group communicated CVE-related training topics to relevant DHS components during weekly meetings as well as by involving the components in the development of new CVE-related training. For example, the Counterterrorism Working Group has invited relevant components to participate in workshops on CVE-related training, provided them with briefings and updates on its CVE-related training development efforts, and included them in review of draft CVE curricula. According to Counterterrorism Working Group officials, the group led a series of meetings with these components to communicate and review the content of multiple CVE-related trainings the group is working to develop. According to officials from relevant DHS components, these communication efforts have helped to clarify topics CVE-related training addresses. For example, according to the official who leads CVE-related training that the Office for Civil Rights and Civil Liberties provides, reviewing the CVE curricula under development involves ensuring that training topics are clear and well understood. In addition, according to the S&T official who oversees research on CVE that is to inform CVE-related training content, DHS officials have clearly communicated topics that CVE-related training is to include during weekly meetings that the Counterterrorism Working Group leads involving all DHS CVE Working Group members.
The Counterterrorism Working Group also communicated with state and local partners and associations that DHS collaborates with to achieve national CVE goals regarding DHS’s CVE-related training topics. For example, according to the director of a state police academy and a police department lieutenant, the Counterterrorism Working Group has consistently consulted with them in developing training modules addressing CVE topics. The Counterterrorism Working Group is also collaborating to develop and implement CVE-related training curricula with the Major Cities Chiefs Association (MCC), the National Consortium for Advanced Policing (NCAP), and the International Association of Chiefs of Police (IACP). As reported by the official who oversees CVE-related training that the DHS Office for Civil Rights and Civil Liberties provides, such collaboration inherently entails discussion of topics CVE-related training is to address. DHS’s communication efforts have helped DHS components and state and local partners to better understand what constitutes CVE-related training, but our review indicates that some state administrative agency representatives are not clear about the principal topics CVE-related training addresses, making it difficult for them to determine what CVE-related training best supports national CVE efforts. According to officials from FEMA, which administers DHS grant funding, the agency has increased grant funding available for CVE-related training because the Secretary of Homeland Security has identified CVE efforts as a priority for the department. In particular, in fiscal year 2011, FEMA began to allow state and local entities to use funds awarded through the Homeland Security Grant Program for CVE-related training.
Further, in fiscal year 2012, FEMA explicitly stated in its Homeland Security Grant Program funding announcement that grantees could use program funds for CVE-related training, and retroactively allowed recipients to use program funds from prior years for CVE activities. In July 2012, we surveyed the 51 training points of contact within state administrative agencies—which are responsible for managing Homeland Security Grant Program funds that DHS awards—about the extent to which they understand what is meant by CVE training. Of the 30 training points of contact who responded to our survey, 11 indicated that they were not at all clear or were somewhat clear on what is meant by CVE-related training. Further, 26 agreed or strongly agreed that it would be helpful for DHS to provide additional information or guidance on topics covered under CVE. As long as FEMA continues to make grant funding available for CVE-related training, but grantees do not have an understanding of what topics CVE-related training should address, it will be difficult for grantees to determine what training best supports the national CVE objective of improving CVE-related training and use funds appropriately toward those efforts. DHS Counterterrorism Working Group officials stated that the group had made efforts to communicate CVE-related training topics to state administrative agencies, but in light of our survey results, the group plans to expand its efforts. In winter 2011, the Principal Deputy Counterterrorism Coordinator, who leads DHS CVE efforts, participated in a conference call with State Homeland Security Program advisers and staff who administer DHS grants that can be used for CVE-related training, during which this official highlighted DHS’s CVE-related training efforts and associated guidance.
Nonetheless, according to the Principal Deputy Counterterrorism Coordinator, some training points of contact may not be aware of what topics CVE-related training should address because the working group’s coordination efforts have focused on state and local representatives who administer law enforcement training programs (e.g., at police academies), not state administrative agencies. The Principal Deputy Counterterrorism Coordinator also emphasized that DHS has focused its efforts on developing high-quality CVE-related training that state and local entities can readily access and that FEMA will preapprove as eligible for DHS grant funding. As a result, according to this official, grantees will rarely have to independently identify appropriate CVE-related training to fund or undertake steps to ensure the quality of CVE-related training they fund. Nevertheless, the Principal Deputy Counterterrorism Coordinator agreed that our survey results revealed that it is important for DHS to undertake additional efforts to educate state administrative agency officials on the principal topics CVE-related training addresses. To that end, in August 2012, the Principal Deputy Counterterrorism Coordinator held an additional meeting with more than 100 state administrative agency representatives and other federal, state, and local officials, during which the Coordinator provided information on DHS CVE-related training development efforts and the content of DHS’s CVE-related training, among other things. In addition, in August 2012, DHS, in partnership with the FBI, launched an online portal for a select group of law enforcement training partners that is intended to provide federal, state, local, tribal, territorial, and correctional law enforcement with access to CVE-related training materials. DHS aims to broaden access to the portal to trainers nationwide by the end of September 2012.
Further, the Principal Deputy Counterterrorism Coordinator stated that the Counterterrorism Working Group is developing an outreach strategy for communicating with state and local entities about DHS’s CVE-related training efforts. Given the recency of these efforts, we are not able to assess their effectiveness as part of our review. However, they are positive steps that should contribute to educating state administrative agency representatives about CVE topics, and thereby help them to fund CVE-related training that is consistent with the goals of the CVE national strategy. As with DHS, the CVE national strategy implementation plan has identified DOJ, including the FBI, as among the federal departments and agencies responsible for conducting CVE-related training. However, DOJ has not yet identified topics that should be covered in its CVE-related training. In addition, DOJ has not generally identified which of its existing training could be categorized as CVE-related training, thus limiting DOJ’s ability to demonstrate how it is fulfilling its training responsibilities under the CVE national strategy. According to senior DOJ officials, even though the department has not identified CVE-related training topics, they understand internally which of the department’s training is CVE-related and contributes either directly or indirectly to the department’s training responsibilities under the CVE national strategy. However, because DOJ has not identified what constitutes CVE-related training, CVE-related efforts undertaken at the direction of the President’s National Security Staff have been hindered, according to DHS officials who participated in an Interagency Policy Committee Working Group on Law Enforcement Training Regarding Domestic Radicalization and CVE. This group, which is chaired by DHS and NCTC, was formed at the direction of the President’s National Security Staff to identify and coordinate CVE-related training that federal agencies deliver or fund. 
The group’s principal objective was twofold: (1) to determine how agencies are currently developing training and (2) to identify options for ensuring that the Intelligence Community’s current analysis of radicalization informs training for federal, state, local, and tribal officials, and that customers of this type of training receive high-quality training and information consistent with U.S. government analysis. As part of this effort, the Interagency Policy Committee Working Group on Law Enforcement Training Regarding Domestic Radicalization and CVE endeavored to create an inventory of CVE-related training that the federal government offers. However, according to DHS officials who participated in the working group, members who led this effort found it challenging to do so because agencies’ views differed as to what CVE-related training includes when providing information on their training. More specifically, according to one DHS official, some components found it difficult to differentiate between counterterrorism and CVE-related training, and trying to categorize training that was not developed for CVE purposes but that can benefit CVE can be confusing. We observed this problem firsthand during our review when the DOJ components that the department identified as potentially relevant to our work, including the FBI, Executive Office for United States Attorneys, and Office of Community Oriented Policing Services, could not readily respond to our requests for information about CVE-related training they provide or fund.
According to these officials, they found it difficult to respond to our requests because DOJ has not established a definition for “CVE” or “CVE-related training,” and therefore they were not sure what constitutes CVE-related training. Officials from DOJ’s Bureau of Justice Assistance (BJA) acknowledged that training that BJA funds under the State and Local Anti-Terrorism Training (SLATT) program could be considered CVE-related training, but they also acknowledged that what constitutes CVE-related training was not clear, in part because CVE is a relatively new term. The other DOJ components, however, relied upon a framework that we developed for the purpose of this review to determine which of their existing training was CVE-related. The Community Relations Service is DOJ’s “peacemaker” for community conflicts and tensions arising from differences of race, color, and national origin. It is dedicated to assisting state and local units of government, private and public organizations, and community groups with preventing and resolving racial and ethnic tensions, incidents, and civil disorders, and in restoring racial stability and harmony. According to DOJ, pursuant to the Matthew Shepard and James Byrd, Jr. Hate Crimes Prevention Act, the Community Relations Service also works with communities to develop strategies to prevent and respond more effectively to alleged violent hate crimes committed on the basis of race, color, national origin, gender, gender identity, sexual orientation, religion, or disability. See generally Pub. L. No. 111-84, Div. E, 123 Stat. 2190, 2835 (2009). See also 18 U.S.C. § 249. The CVE national strategy and its implementation plan explicitly emphasize the importance of community engagement in CVE efforts while recognizing that such engagement should focus on a full range of community concerns, and not just on issues such as national security. Further, the implementation plan has assigned DOJ responsibility for supporting national CVE-related training efforts.
However, because DOJ has not identified what topics it thinks should be addressed by CVE-related training, it is difficult to identify which of DOJ’s current training is related to CVE, either directly or indirectly, which also makes it difficult to determine whether and how DOJ is fulfilling its training responsibilities per the CVE national strategy. If departments are unclear regarding what constitutes CVE-related training, they will also have difficulty accounting for their CVE-related training responsibilities. By not identifying and communicating CVE-related training topics to its components, DOJ is not able to demonstrate how it is fulfilling its CVE-related training responsibilities and ensure that it is carrying out its responsibilities as established in the CVE national strategy implementation plan. Less than 1 percent of state and local participants in CVE-related training that DHS and DOJ provided or funded who provided feedback to the departments expressed concerns about information included in the course materials or that instructors presented during training. In addition, while DOJ generally solicits feedback from all participants for programs that provide formal, curriculum-based CVE-related training, the FBI and USAOs do not always solicit feedback for programs that provide less formal CVE-related training (e.g., presentations by guest speakers), even though such training was provided to about 9,900 participants in fiscal years 2010 and 2011. Finally, apart from the training participants, some individuals and advocacy organizations have raised concerns about DHS and DOJ CVE-related training. As previously discussed, because DHS and DOJ components were unclear regarding what constitutes CVE-related training, for the purposes of conducting this review, we developed a framework for determining which training may be CVE-related.
Our framework identifies training as CVE-related if it addressed one or more of the following three content areas: (1) radicalization, (2) cultural competency, and (3) community engagement. DHS Counterterrorism Working Group officials generally agreed with the content areas we identified, and we incorporated feedback the group provided, as appropriate. DOJ officials stated that they view the framework as reasonable for the purpose of our review. However, as previously discussed, DOJ officials do not think it is appropriate for DOJ to identify topics as addressed in CVE-related training. We applied our framework to identify CVE-related training DOJ and DHS components provided to state and local entities during fiscal years 2010 and 2011. Figure 1 presents the DOJ and DHS programs that provided the CVE-related training we identified, and appendix III provides more detailed information about the training, including the number of participants and associated costs. The majority of participant feedback on CVE-related training that DHS and DOJ provided or funded during fiscal years 2010 and 2011 was positive or neutral; a minority of participants expressed concerns about information included in course materials or that instructors presented. DHS and DOJ collected and retained feedback forms from 8,424 of the more than 28,000 participants—including state, local, and tribal law enforcement officials, prison officials, and community members—of training they provided or funded in fiscal years 2010 and 2011 that was CVE-related according to our framework. We analyzed all of these evaluations and found that the vast majority of participants submitted comments about the training that were positive or neutral. For example, participants commented that the courses were among the most challenging they had taken, that the instructors were professional and knowledgeable, or that the course materials were well assembled. 
In addition, participants stated that the training was informative with regard to the threat posed by, and how to best counter, violent extremists or provided a valuable overview of an extremist group. In another instance, a participant stated that the course was helpful in understanding the beliefs and concerns of a particular community. Some participants also said that the training would be worthwhile to provide to a broader audience, that they intended to share what they learned with colleagues, or that they would like to see the course length expanded. We also identified 77 participant evaluations—less than 1 percent—that included comments that expressed concern of any sort. For example, we identified concerns that a training was too politically correct, as well as concerns that a training was one-sided, with regard to issues of religion and culture. The concerns the participants expressed fell into the following three categories:
1. The course information or instruction was politically or culturally biased (54 evaluations). For example, participant comments that fell into this category were that the instructor had a liberal bias, and other comments were that the instructor too often relayed his or her personal views.
2. The course information or instruction was offensive (12 evaluations). For example, one concern raised in this category was that an instructor presented Islam in a negative manner, whereas another concern was that a guest presenter spoke disrespectfully about the United States.
3. The course information was inaccurate (11 evaluations). For example, comments that fell into this category raised concern that an instructor provided misinformation about dressing norms for Middle Eastern women and that an instructor cited incorrect information about a criminal case discussed during the class.
The concerns that were raised varied across different training providers and, although few, most of the concerns stemmed from the evaluation records documenting feedback from DOJ SLATT Program and FBI National Joint Terrorism Task Force Program participants. See appendix IV for additional details on the types of concerns by training provider. DOJ and DHS officials who oversee these training programs indicated that they review the feedback participants provide and assess if it warrants action. However, these officials stated that determining how to respond to feedback can be difficult when the feedback is subjective or not actionable. For example, the SLATT Program Director stated that if a comment simply says “one-sided information,” he cannot take action on it because he does not know which side the person is referring to or what the person thinks should be changed. However, if there is a trend in clear feedback participants provided, he will take action. Further, according to SLATT and Office for Civil Rights and Civil Liberties officials, perceptions regarding what is biased vary by audience and even by the participants within a given audience. Therefore, DHS and DOJ officials stated that they take action to address participant feedback on a case-by-case basis, as they and their staff deem appropriate. For example, the SLATT Director explained that there is no specific threshold to determine whether a participant’s comment warrants further action, but generally, if a similar concern has been submitted by multiple participants, over multiple courses, SLATT officials will review the substance of the comment and devise a plan to correct the issue. For example, the SLATT Director noted that in response to a comment that a course title did not reflect the material taught in the course, he suggested a change to the title. 
Most of the CVE-related training that DHS and DOJ components provided was formal, classroom-based or curriculum-based training, and the components generally solicited participant feedback for this type of training, which we describe above. In addition, two DOJ components—FBI and USAOs—also provided informal CVE-related training consisting of briefings and presentations at workshops, conferences, and other venues to about 9,900 participants in fiscal years 2010 and 2011. However, these components did not consistently solicit participant feedback for this type of training, which makes it difficult for them to assess the quality of the training, determine whether the training is achieving expected outcomes, and make changes where appropriate. According to FBI officials, training that the FBI centrally administers—including that provided under the National Academy and National Joint Terrorism Task Force programs—is to adhere to the Kirkpatrick model to help ensure its quality. The standards this model prescribes require the solicitation of student feedback. As a result, the FBI collects feedback through evaluations on the formal, classroom-based courses it provides through its National Academy. The FBI does not require entities providing informal training, such as briefings and presentations during outreach, to solicit feedback. Specifically, officials from the FBI’s Office of Public Affairs told us that the bureau does not solicit feedback on presentations, briefings, or its Citizens’ Academy and Community Relations Executive Seminar Training (CREST) outreach programs because doing so is not required, and the officials noted that the FBI does not classify these programs and activities as training. Officials also noted that some field offices, which administer the programs, do solicit feedback from participants although they are not required to do so.
For example, 4 of 21 FBI field offices that provided Citizens’ Academy training that was CVE-related according to our framework collected evaluations. However, none of the 3 FBI field offices that provided CREST training or the 5 FBI field offices that provided other training that was CVE-related according to our framework solicited feedback from course participants. Similarly, USAOs are not required to obtain feedback from recipients of training that their individual offices provide. According to Executive Office for U.S. Attorneys officials, USAOs do not typically solicit feedback from participants on the presentations that our framework identified as CVE-related that they provide in their districts, particularly with respect to threat-related briefings for law enforcement officials that are intended to address a particular area of concern for that region at a particular time. Under these circumstances, according to these officials, feedback may be less useful than it would be for curriculum-based trainings, because the presentation is less likely to be repeated for many different audiences. We identified 39 USAOs that provided or facilitated training that was CVE-related according to our framework, excluding training that was facilitated by a USAO, but provided by another federal entity (such as SLATT). Out of these 39 USAOs, 15 collected feedback from CVE-related training participants. We have previously reported that evaluating training is important and that agencies need to develop systematic evaluation processes in order to obtain accurate information about the benefits of their training. We recognize the distinction between formal training programs and less formal training, such as presentations. However, the CREST and Citizens’ Academy programs, other FBI field office initiatives, and USAOs collectively trained about 39 percent (about 9,900) of all training participants in DOJ CVE-related training during fiscal years 2010 and 2011.
Soliciting feedback on informal training could help the FBI and USAOs obtain valuable information for determining the extent to which these programs are yielding desired outcomes (e.g., whether the FBI’s Citizens’ Academy is projecting a positive image of the FBI in the communities it serves) as well as complying with the CVE national strategy. Such feedback could also be obtained without incurring significant costs. According to officials at an FBI field office that distributes feedback forms and the DHS official who oversees the Office for Civil Rights and Civil Liberties CVE-related training, agencies can solicit feedback from training participants at minimal cost (e.g., the paper on which the form is distributed and the employee time associated with reviewing the forms), such feedback is critical to ensure the training is communicating its intended messages effectively, and soliciting feedback is a worthwhile undertaking given the significant time and resources their offices invest in providing CVE-related training. In addition to the concerns we identified in participant evaluations, individuals and advocacy organizations submitted at least six letters of complaint to DHS, DOJ, the Executive Office of the President, and other federal government entities regarding 18 alleged incidents of biased CVE and counterterrorism training that DHS or DOJ provided or funded during fiscal years 2010 and 2011. Representatives of the advocacy organizations that submitted the letters generally did not participate in the training that generated these concerns. Rather, their concerns were derived from information reported in the media and from individuals who attended a training session and expressed concern about the training to the organizations.
We determined that 7 of the alleged incidents described in five of the letters were relevant to this review because they pertained to CVE-related training provided to state and local officials and community members, not training that was exclusively provided to federal officials. The 7 incidents described in these letters, some of which the media initially reported, articulated concerns similar to those identified in the participant evaluations we reviewed. That is, the allegations made in the letters raised concerns that course information and instructors were biased, offensive, or inaccurate. Table 2 summarizes the concerns raised in these five letters and the agency’s perspectives about the concerns. Although the number of concerns and complaints raised about CVE-related training may have been small, according to DHS and DOJ officials, the departments have generally considered the complaints as serious issues that warranted action to better ensure the quality of future training, particularly given the negative effects that such incidents can have on the departments’ reputations and trust with the communities they serve. For example, according to the DHS Principal Deputy Counterterrorism Coordinator, developing CVE-related training is a priority for the department because inappropriate and inaccurate training undermines community partnerships that are critical to preventing crime and negatively impacts efforts of law enforcement to identify legitimate behaviors and indicators of violent extremism. DOJ has undertaken quality reviews of existing training materials that are CVE-related according to our framework, and both DOJ and DHS have developed guidance for CVE-related training and developed other quality assurance mechanisms for this training.
DOJ components have conducted or are currently conducting internal reviews of their training materials, including those with topics that our framework identified as related to CVE, in an effort to identify and purge potentially objectionable materials. In September 2011, the FBI launched a review of all FBI counterterrorism training materials, including materials that were CVE-related according to our framework. This review included approximately 160,000 pages of training materials, and the FBI determined that less than 1 percent of the pages contained factually inaccurate or imprecise information or used stereotypes. The Office of the Deputy Attorney General has also ordered a departmentwide review of training materials. Unlike the FBI’s internal review, which focused on counterterrorism training materials, a memorandum issued by the Deputy Attorney General to heads of DOJ components and U.S. Attorneys in September 2011 directed them to carefully review all training material and presentations that their personnel provided. The memorandum stated that components should particularly review training related to combating terrorism, CVE, and other subjects that may relate to ongoing outreach efforts in Arab, Muslim, Sikh, South Asian, and other communities. The purpose of the review was to ensure that the material and information presented are consistent with DOJ standards, goals, and instructions. Officials from the four DOJ components that we identified as having provided or funded CVE-related training reported that their components have completed, or intend to complete, the review the Deputy Attorney General ordered. According to DOJ officials, as of August 2012, some components were still reviewing relevant materials and the Deputy Attorney General asked components to provide any questionable training materials to the Deputy Attorney General’s office.
DOJ officials also told us that each DOJ component is to make its own determination on what materials are appropriate, but that components are to review all training materials, even if the components do not have specific plans to present the materials in the future. DHS, DOJ, and the FBI have developed guidance to avoid future incidents or allegations of biased or otherwise inappropriate training. In October 2011, the DHS Office for Civil Rights and Civil Liberties issued Countering Violent Extremism Training Guidance & Best Practices (DHS CVE Guidance), which acknowledges that it is important for law enforcement personnel to be appropriately trained in understanding and detecting ideologically motivated criminal behavior and in working with communities and local law enforcement to counter domestic violent extremism. The guidance states that training must be accurate, based on current intelligence, and include cultural competency training. To this end, its goals are to help ensure that (1) trainers are experts and well regarded; (2) training is sensitive to constitutional values; (3) training facilitates further dialogue and learning; (4) training adheres to government standards and efforts; and (5) training and objectives are appropriately tailored, focused, and supported. The guidance provides best practices for federal, state, and local officials organizing CVE, cultural awareness, or counterterrorism training to adhere to in support of these goals. Best practices include reviewing a prospective trainer’s résumé; reviewing the training program to ensure that it uses examples to demonstrate that terrorists and violent extremists vary in ethnicity, race, gender, and religion; and reaching out to sponsors of existing government training efforts for input.
Following the release of DHS’s CVE Guidance, FEMA issued an information bulletin to its state, local, and private sector partners and grantees to emphasize the importance of ensuring that all CVE-related training is consistent with DHS and U.S. government policy. The bulletin referenced the DHS CVE Guidance and stated, among other things, that grant-funded training should avoid the use of hostile, stereotypical, or factually inaccurate information about Muslims and Islam or any community. The bulletin also emphasized the importance of community engagement and interaction to promote communities as part of the solution. According to FEMA officials, if a grantee were to provide CVE-related training and not follow the DHS CVE guidance, DHS may require that the grantee repay any grant funds that were spent on the training. However, several DHS grantees indicated that they would not necessarily know when to apply the best practices for ensuring the quality of CVE-related training described in the informational bulletin. Specifically, of the 30 Homeland Security Grant Program training points of contact who responded to our survey, 18 said that they were not at all clear or only somewhat clear about when to apply the principles in the FEMA bulletin. In addition, 20 said that topics that may be covered during CVE-related training are not at all clear or only somewhat clear in the bulletin. As a result, these grantees could have difficulty in determining when to apply the principles. As previously discussed, the additional efforts DHS is undertaking to educate state administrative agency officials on the principal topics CVE-related training addresses could further enable the officials to fund training that supports the CVE national strategy.
These survey results indicate that such educational efforts should help grantees more readily identify topics that may be covered during CVE-related training, and thus more appropriately apply DHS CVE-related training quality assurance guidance. DHS is also developing additional mechanisms to ensure the quality of CVE-related training. Specifically, Counterterrorism Working Group officials told us that in June 2012 DHS established a CVE-related training Working Group within the department to develop a framework to (1) ensure that training DHS components provide meets DHS and the U.S. government’s CVE standards; (2) ensure that grantees using grant funds for training utilize certified trainers; and (3) disseminate DHS training through agency partners, such as the International Association of Chiefs of Police. In July 2012, this working group proposed recommendations for meeting these goals in a memorandum to the DHS Deputy Counterterrorism Coordinator. For example, the group recommended that the department establish and maintain a database of certified CVE instructors and appoint a CVE program coordinator to oversee the instructor vetting and training process. According to Counterterrorism Working Group officials, DHS is working on plans to implement these recommendations. As these recommendations were made recently and DHS has just decided to implement them, it is too early to assess any quality assurance impact they will have on CVE-related training. DOJ also developed guidance applicable to all training, including CVE- related training, conducted or funded by DOJ to help ensure its quality. DOJ formed a working group on training issues chaired by its Civil Rights Division within the Attorney General’s Arab-Muslim Engagement Advisory Group. The working group developed the DOJ training principles to guide DOJ’s training and to ensure that all communities that DOJ serves are respected. 
In March 2012, the Deputy Attorney General issued a memorandum for DOJ heads of components and USAOs outlining guiding principles to which all training that DOJ conducted or funded must adhere. Specifically, it stated that (1) training must be consistent with the U.S. Constitution and DOJ values; (2) the content of training and training materials must be accurate, appropriately tailored, and focused; (3) trainers must be well qualified in the subject area and skilled in presenting it; (4) trainers must demonstrate the highest standards of professionalism; and (5) training must meet department standards. Also in March 2012, the FBI published The FBI’s Guiding Principles Touchstone Document on Training. This document is intended to be consistent with the March 2012 Deputy Attorney General guidance, but elaborates on each training principle outlined in the document. The FBI’s guidance states that training must (1) conform to constitutional principles and adhere to the FBI’s core values; (2) be tailored to the intended audience, focused to ensure message clarity, and supported with the appropriate course materials; and (3) be reviewed, and trainers must be knowledgeable of applicable subject material. DOJ officials also told us that the department’s guiding principles are meant to memorialize department training standards and values and are the group’s first step for ongoing work to ensure the quality of future counterterrorism and CVE-related training. Although developing these principles marks an important first step, we were unable to assess the extent to which they can help ensure the quality of CVE-related training moving forward because the review is ongoing and DOJ officials are in the process of planning additional efforts. Providing high-quality and balanced CVE-related training is a difficult task given the complexity and sensitivities surrounding the phenomenon of violent extremism. 
However, misinformation about the threat and dynamics of radicalization to violence can harm security efforts by unnecessarily creating tensions with potential community partners. The CVE national strategy implementation plan commits the federal government, including DHS and DOJ, to supporting state and local partners in their efforts to prevent violent extremism by providing CVE-related training. By identifying and communicating CVE-related training topics, DOJ could better demonstrate the extent to which it is fulfilling departmental CVE-related responsibilities as established in the implementation plan for the CVE national strategy. In addition, by proactively soliciting feedback from participants in informal CVE-related training on a more consistent basis, FBI field offices and USAOs could more effectively obtain information on the strengths and weaknesses of their presentations and briefings, and thus better ensure their quality. To better enable DOJ to demonstrate the extent to which it is fulfilling its CVE-related training responsibilities, we recommend that the Deputy Attorney General identify principal topics that encompass CVE-related training—including training that is directly related to CVE or that has ancillary benefits for CVE—and communicate the topics to DOJ components. To obtain valuable information for determining the extent to which CVE-related programs are yielding the desired outcomes and complying with the CVE national strategy, we recommend that the Deputy Attorney General direct USAOs and the Director of the FBI’s Office of Public Affairs direct FBI field offices to consider soliciting feedback more consistently from participants in informal training, such as presentations and briefings, that covers the type of information addressed in the CVE national strategy. We provided a draft of the sensitive version of this report to DHS, DOJ, ODNI, and DOD for their review and comment.
We received written comments from DHS and DOJ, which are reproduced in full in appendixes V and VI, respectively. DHS generally agreed with the findings in its comments, and DOJ agreed with one of the recommendations in this report, but disagreed with the other recommendation. ODNI and DOD did not provide written comments on the draft report. However, ODNI provided technical comments, as did DHS and DOJ, which we incorporated throughout the report as appropriate. In its written comments, DHS noted that the report recognizes DHS’s efforts to develop and improve the quality of CVE training and identified additional efforts that the department is taking to improve communication with its various CVE stakeholders and to implement the priorities outlined in its framework for vetting CVE training. For example, DHS stated that it will be hosting a CVE train-the-trainer workshop in September 2012, and identifying trainers on its online CVE training portal who meet the standards included in DHS’s training guidance and best practices. DHS also stated that it remains committed to improving and expanding its development of CVE resources and providing information about those resources to state and local partners. DOJ stated that it generally agrees with the recommendation that the Deputy Attorney General and the Director of FBI’s Office of Public Affairs direct USAOs and FBI field offices to consider soliciting feedback more consistently from participants in informal training that covers the type of information addressed in the CVE national strategy. The department stated that it will develop a plan of action that describes how USAOs and FBI field offices will implement this recommendation. Developing such a plan should address the intent of our recommendations. DOJ, however, disagreed with the recommendation that the Deputy Attorney General identify principal topics that encompass CVE-related training and communicate those topics to DOJ components. 
According to DOJ, the CVE national strategy implementation plan assigns DOJ, through its USAOs, primary responsibility for expanding the scope of engagement and outreach events and initiatives that may have direct or indirect benefits for CVE; however, the plan does not assign the department primary responsibility for developing specific CVE-related training. We recognize that DOJ is not the lead agency for the subsection of the implementation plan related to the development of standardized CVE training; however, the CVE implementation plan nonetheless assigns DOJ as a lead or partner agency for other CVE training-related activities. For example, the implementation plan states that the FBI will lead the development of CVE-specific education modules and that DOJ will colead (1) the expansion of briefings about violent extremism for state and local law enforcement and government, and (2) the expansion of briefing efforts to raise community awareness about the threat of radicalization to violence. In addition, the implementation plan directs the FBI to develop a CVE Coordination Office, and according to the FBI, that office is in the process of developing CVE-related training. Given that DOJ has been identified as a lead or partner agency for several training related activities identified in the implementation plan, identifying CVE training topics could help DOJ demonstrate the extent to which it is fulfilling its responsibilities under the CVE national strategy. Identifying CVE training topics could also help the FBI determine what issues it should be addressing in the training that its CVE Coordination Office is developing, and assist the department in being able to publicly account for the CVE-related training that the department provides or funds. 
DOJ also stated in its comments that the draft report recommended that DOJ redefine its cultural competency training and community outreach efforts (which may have benefits for CVE) as “CVE.” DOJ then stated that redefining these efforts as such would be imprecise and potentially counterproductive, and that labeling these efforts as CVE would suggest that they are driven by security efforts, when they are not. To clarify, the report does not include a recommendation that DOJ redefine or label its cultural competency training and community outreach efforts as CVE. Although we included these topics in the framework we used to identify potentially CVE-related training for the purpose of this review, the recommendation was that DOJ identify principal topics that encompass CVE-related training and communicate such topics to DOJ components. We defer to the department to determine which topics are appropriate to cover in its CVE-related training. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees. We will also send copies to the Secretary of Homeland Security, the Attorney General, the Secretary of Defense, and the Director of National Intelligence. In addition, this report will be made publicly available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. This report answers the following questions: 1.
To what extent have the Department of Homeland Security (DHS) and the Department of Justice (DOJ) identified and communicated topics that countering violent extremism-related (CVE-related) training addresses to their components and state and local partners? 2. What, if any, concerns have been raised by state and local partners who have participated in CVE-related training provided or funded by DHS and DOJ? 3. What actions, if any, have DHS and DOJ taken to improve the quality of CVE-related training? To determine the extent to which DHS and DOJ identified and communicated topics that should be addressed by CVE-related training, we met with officials from both departments to discuss how they define CVE-related training, which departmental training programs were relevant to our review, and how the departments communicated principal CVE-related training topics to relevant components and state and local partners. We then analyzed this information to assess the extent to which the departments’ efforts allow them to demonstrate fulfillment of their CVE-related training responsibilities under the CVE national strategy. We also met with officials from the Department of Defense (DOD) and Office of the Director of National Intelligence (ODNI) who possess knowledge about CVE-related training and who are involved in interagency efforts related to CVE. More specifically, we met with officials from the components and offices listed in table 3. To obtain additional views on CVE-related training provided or funded by DHS or DOJ, we interviewed representatives from nine state and local law enforcement agencies and law enforcement representative organizations involved with federal CVE-related training efforts.
They included the Minneapolis Police Department, the Los Angeles Police Department, the Las Vegas Sheriff’s Department, the Arkansas State Police Program, the Dearborn Police Department, the National Sheriffs’ Association, the Major Cities Chiefs Association, the International Association of Law Enforcement Intelligence Analysts, and the National Consortium for Advanced Policing. We selected these agencies and organizations based on their involvement with CVE-related training efforts and the extent to which they collaborate with DHS or DOJ on CVE-related training. While the views of these entities do not represent the views of all agencies and organizations involved in CVE-related training, these entities were able to offer helpful perspectives for the purpose of this review. We also interviewed individuals with expertise in CVE, such as academic researchers who have published on CVE-related topics and researchers from organizations that study CVE-related topics, to obtain their views on topics CVE-related training should address and identify potential training programs to include in our review. They included individuals from the Georgetown University Prince Alwaleed Bin Talal Center for Muslim-Christian Understanding, the RAND Corporation, the Foundation for Defense of Democracies, the International Centre for the Study of Radicalisation, and the National Consortium for the Study of Terrorism and Responses to Terrorism. We selected these individuals based on the depth of their experience with, and knowledge of, CVE; the relevance of their publications; referrals from other practitioners; and to develop a sample that represented various sectors (e.g., academic, advocacy, etc.). They provided valuable insight even though the perspectives they offered are not generalizable.
The state administrative agencies that we surveyed are responsible for managing DHS grant awards to states and the District of Columbia that are eligible for CVE-related training and ensuring that grant recipients comply with grant requirements. Not all state administrative agencies responded to our survey; some, such as California and Texas, did not. As a result, the experiences of state administrative agencies from some of the larger states may not be captured in our survey results. Nevertheless, the survey results provide insights into the level of clarity about DHS CVE-related guidance for other grantees. To obtain a better understanding of the departments’ CVE-related training responsibilities, we requested information from DOJ and DHS on the approximate number and type of participants that attended training we determined was CVE-related and the estimated cost. We provide additional details on how we classified training as CVE-related below. We assessed the reliability of the training data provided by interviewing agency officials familiar with the data to learn more about the processes used to collect, record, and analyze the data. For example, we found that several training providers collected information on the number and type of participants through sign-in sheets. We used these data to approximate the dollar amount spent by agencies on CVE-related training in appendix III. As described above, we determined that the data were sufficiently reliable for showing general trends in attendance and spending, but some agencies did not record participant data, and thus could not provide them; did not record participant figures and provided estimates of attendance based on the instructor’s recall; or recorded participant figures, but not the participants’ places of employment, so they could not specify how many of the attendees were from state and local versus federal entities. We noted these instances in our report.
During our initial interviews with DHS and DOJ, officials expressed difficulty in responding to our request for CVE-related training materials, in part because agency officials were not clear on which training should be considered CVE-related. To facilitate our request for course materials for CVE-related training, we developed a framework to classify training as CVE-related based on our review and analysis of information from the following sources: (1) federal strategies related to violent extremism, such as Empowering Local Partners to Prevent Violent Extremism in the United States and its associated implementation plan;reports, or strategies that address CVE-related training topics such as DHS’s CVE-related training Guidance and Best Practices; and (3) perspectives provided by individuals with CVE expertise. Specifically, we conducted a content analysis of our transcripts of interviews with experts and CVE-related documents to determine the current understanding of the content areas covered by CVE-related training and the knowledge state and local officials should possess or principles they should understand to effectively carry out CVE efforts. We then analyzed this information to identify similar themes and principles across the sources and grouped them together into three distinct content areas CVE-related training likely addresses: (2) DHS and DOJ plans, 1. Radicalization addresses approaches that are based on research and accurate information to understanding the threat radicalization poses, how individuals may become radicalized, how individuals seek to radicalize Americans (threat of violent extremist recruitment), behaviors exhibited by radicalized individuals, or what works to prevent radicalization that results in violence. 2. 
Cultural competency seeks to enhance state and local law enforcement’s understanding of culture or religion, and civil rights and civil liberties, or their ability to distinguish, using information-driven and standardized approaches, between violent extremism and legal behavior. 3. Community engagement addresses ways to build effective community partnerships, such as through outreach, and community capacity for the purpose of, among other things, mitigating threats posed by violent extremism. We solicited feedback on this framework from DHS and DOJ. DHS Counterterrorism Working Group officials generally agreed with the content areas we identified, and we incorporated feedback the group provided, as appropriate. DOJ officials stated that they view the framework as reasonable for the purpose of our review. For this review, we considered CVE-related training to include instruction, presentations, briefings, or related outreach efforts conducted, sponsored, promoted, or otherwise supported by DOJ, DHS, or a respective component, to help state, local, or tribal entities related to the three aforementioned content areas. We asked DHS and DOJ to identify and provide all course materials for any courses that they provided or funded during fiscal years 2010 and 2011 through grant programs for state and local entities, including law enforcement officers and community members, assumed to be CVE-related based on GAO’s framework. We focused generally on training provided in fiscal years 2010 and 2011 because “countering violent extremism” is a relatively nascent term. In addition, we focused on training provided to state and local entities because the CVE national strategy emphasizes the importance of providing CVE-related training to these entities. While the FBI identified its National Academy as providing training that could be considered CVE-related, it did not identify any of its other programs as germane to our review.
However, complaint letters raised concerns about FBI training provided through two other FBI programs—the Citizens’ Academy and the National Joint Terrorism Task Force. We assessed some of the training provided through these programs and determined the training to be CVE-related according to our framework. In addition, the FBI’s internal review of counterterrorism training, which included the FBI programs within the scope of our review, assessed the training materials against criteria for CVE-related training, thereby suggesting that these programs may have provided training that was CVE-related. Accordingly, we requested course materials on these programs, as well as the Community Relations Executive Seminar Training Program, which is an abbreviated version of the Citizens’ Academy. We received approximately 290 presentations, briefings, and course materials from two components within DHS and four within DOJ. In some cases, DHS and DOJ offices provided us only with course abstracts or agendas instead of the full presentations or course materials because (1) they contracted the training with an outside provider and did not retain all of the associated training materials or (2) the training materials were particularly voluminous and, on the basis of discussions with the offices, we agreed that the course abstracts or agendas would enable us to sufficiently determine the relevancy of the training to our review. In those cases, we determined CVE-relevancy based on the agenda or abstract alone. We reviewed these training materials to assess whether each of the individual courses, presentations, briefings, and other training-related activities undertaken or funded by DHS and DOJ agencies addressed one or more of the three content areas described above. If they addressed any of these content areas, we considered them CVE-related, even if the primary focus of the materials was not CVE-related.
To ensure consistency in our analysis, two analysts independently reviewed the materials for each training and recorded their assessment of whether the training addressed each content area. Any discrepancies in the initial determinations were then discussed and reconciled. To determine what concerns, if any, participants raised about CVE-related training, we reviewed course evaluations completed by participants of CVE-related training offered by DHS I&A, DHS Office for Civil Rights and Civil Liberties, DOJ BJA, and the FBI, and identified complaints or concerns about CVE-related training made formally in writing. We limited our analysis to training that was provided or funded by DHS or DOJ during fiscal years 2010 or 2011 and provided to a state or local entity (e.g., police department, community group, or fusion center). Two analysts independently reviewed 8,424 course evaluations from six training programs to consistently determine which ones included concerns or complaints. The analysts also assessed the nature of the concerns and complaints and assigned each complaint to one of three categories: (1) politically or culturally biased, (2) offensive, or (3) inaccurate. Where there were discrepancies between the analysts, they were resolved through supervisory review. To identify formally submitted or documented complaints or concerns participants expressed, we asked DHS and DOJ to identify those submitted in writing to DHS or DOJ, or articulated to DHS or DOJ through other means but subsequently documented by the agency, from fiscal years 2010 through 2011. We also conducted keyword searches using LexisNexis and Google to identify concerns that were raised by either individuals or advocacy groups that were submitted in writing to DHS or DOJ.
In addition, we interviewed representatives, including leaders, of select advocacy groups that raised concerns about CVE-related training to identify what concerns and complaints, if any, they submitted in writing to DHS or DOJ on behalf of training participants. The advocacy and civil liberties organizations we interviewed included the American Civil Liberties Union, the American-Arab Anti-Discrimination Committee, the Council on American-Islamic Relations, and the Muslim Public Affairs Council. We selected these organizations based on their leadership in raising concerns we identified (e.g., by virtue of being the primary signatories) and upon the recommendation of other advocacy groups. These interviews also enabled us to confirm or obtain additional views on the formally documented complaints DHS or DOJ provided. Through these approaches, we identified a total of six letters of complaint regarding 18 alleged incidents of biased CVE and counterterrorism training that DHS or DOJ provided or funded during fiscal years 2010 and 2011. Given that the scope of this review is limited to CVE-related training provided to state and local officials and community members, and not training that is exclusively provided to federal officials, we determined that 7 of the alleged incidents described in five of the letters were relevant to this review. We also interviewed relevant DHS and DOJ officials to obtain their perspectives on the concerns raised in the written complaints and information on any actions agencies took in response to these incidents.
To address what actions, if any, DHS and DOJ have taken overall to improve the quality of CVE-related training, we interviewed DHS and DOJ officials responsible for providing or funding CVE-related training to inquire about any current or pending guidance, whether documented or undocumented, they adhere to when vetting training materials and instructors and other actions they have taken to ensure the quality of CVE-related training. We reviewed relevant DHS and DOJ documents including recently released guidance and best practices for training that DHS, DOJ, and the FBI developed. We also analyzed FBI and DOJ data from training reviews and information on how DHS and DOJ review and vet training curricula and instructors. Specifically, we analyzed the counterterrorism training materials that the FBI determined were inappropriate as a result of its internal review, which the FBI undertook to identify and purge potentially objectionable training materials. This analysis enabled us to better understand the review results with regard to training materials that were CVE-related under our framework, and provided context for the quality assurance steps FBI has taken in response to the review. To focus our analysis on training materials included in the FBI’s review that were CVE-related, one analyst assessed which of these training materials were CVE-related, according to our framework, and if the materials were CVE-related, the analyst entered the FBI’s observations and additional data about that training into a data collection form. A second analyst then reviewed these results. When there was disagreement, the two reviewers discussed the material, reached agreement, and modified the entries as necessary to ensure concurrence regarding which of the training materials included in the FBI’s review were germane to our review. 
The FBI considers the methodology it used to conduct its internal review and our analysis of the training materials that the FBI considered objectionable to be For Official Use Only; therefore, we did not include that information in this report. In addition, we conducted a site visit in San Diego, California, in January 2012, where DHS hosted a pilot of a CVE-related course under development. During the site visit, we observed the pilot training, and interviewed DHS officials who were sponsoring the training and local agencies that had developed and delivered the course curriculum. On the basis of the information we collected, we evaluated DHS’s adherence to its own CVE-related training guidance. We also assessed DHS and DOJ guidance and actions related to guidance provided by departmental leadership, such as DOJ training guidance issued to its components. We conducted this performance audit from October 2011 through October 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. DHS is currently working with its components and relevant state and local entities to develop and implement CVE-focused training for state and local law enforcement officers, state police academy recruits, correctional facility officers, and new federal law enforcement officers. DHS’s Principal Deputy Counterterrorism Coordinator, who heads the department’s CVE efforts, has testified that developing CVE-related training is a priority for the department because inappropriate or inaccurate training undermines community partnerships and negatively affects efforts of law enforcement to identify legitimate behaviors and indicators of violent extremism. 
DHS has determined that CVE-related training should address: violent extremism (e.g., the threat it poses), cultural demystification (e.g., education on culture and religion), community partnerships (e.g., how to build them), and community policing efforts (e.g., how to apply community policing efforts to CVE). Accordingly, the DHS Counterterrorism Working Group, which is overseen by the Principal Deputy Counterterrorism Coordinator, is developing training that addresses these topics. These trainings include the following: A continuing education CVE curriculum for frontline and executive state and local law enforcement that DHS is developing with the Los Angeles Police Department, Major Cities Chiefs Association (MCC), and the National Consortium for Advanced Policing (NCAP). DHS hosted a first pilot for this course in San Diego, California, in January 2012 that 45 state and local law enforcement officials attended. The pilot consisted of 3 days of classroom instruction and student participation activities. According to Counterterrorism Working Group officials, DHS held a second pilot in the National Capital Region in July 2012, and a third pilot in Minneapolis, Minnesota, in August 2012. In July 2012, DHS also presented the curriculum at a CVE conference it hosted in Washington, D.C., and according to Counterterrorism Working Group officials, the department is working to enhance the curriculum based on feedback that conference attendees provided. MCC has passed a motion to adopt the curriculum, which DHS aims to implement in collaboration with state and local partners in 2013. CVE-related training modules for state police academies, which DHS is developing in collaboration with the International Association of Chiefs of Police (IACP). These training modules will be 1 to 2 hours in length, and are intended for police recruits. 
DHS plans for police academies to introduce the modules into their training and to make them available online for police recruits by the end of 2012. A CVE awareness training for correctional facility, probation, and parole officers at the state and local levels that DHS is working to develop in collaboration with the Bureau of Prisons, the FBI National Joint Terrorism Task Force, and the Interagency Threat Assessment Coordination Group. Counterterrorism Working Group officials reported that DHS completed pilots for this training in Maryland in March 2012 and in California in July 2012. FEMA is also developing a curriculum for rural correctional facility management. Further, according to DHS officials, the Federal Law Enforcement Training Center (FLETC) has finalized a CVE-related training course that it integrated into its existing training for recruits. In February 2012, DHS hosted a symposium on the curriculum, and as of July 2012, FLETC had taught the curriculum to about 190 students. In addition, according to DHS officials, FLETC is also in the process of integrating aspects of the DHS Office for Civil Rights and Civil Liberties’ cultural competency training, which is described in detail in appendix III, into all new CVE curriculum and training efforts. Within DOJ, the FBI is also developing CVE-related training. The CVE national strategy implementation plan tasks the FBI with establishing a CVE Coordination Office that will, as part of its activities, coordinate with the National Task Force on CVE-specific education and awareness modules. According to FBI officials, the FBI established a CVE office in January 2012, and as of August 2012, had assigned staff to the office and was in the process of developing CVE-related training modules.
In particular, the CVE Office developed and presented a CVE-related training module to FBI public affairs specialists and community outreach coordinators and specialists in FBI field offices from April through August 2012, according to FBI officials. FBI officials also reported that the CVE Office is collaborating with the FBI Counterterrorism Division to develop a CVE-related training module for FBI special agents and mid- and senior-level managers that it plans to complete in December 2012 and implement in early 2013. DOJ and DHS components provided training that was CVE-related according to our framework to more than 28,000 state and local entities, including law enforcement officials, fusion center personnel, and community members, during fiscal years 2010 and 2011. That is, DOJ and DHS components provided training, including courses, briefings, presentations, and workshops, that addressed one or more of the three CVE-related training topical areas we identified: (1) the phenomenon of violent extremism and the threat posed by radicalization that leads to violence; (2) cultural competency and how to distinguish between criminal and constitutionally protected cultural and religious behaviors; and (3) how to build effective community partnerships to, among other things, mitigate threats posed by violent extremism. The majority of these trainings did not have the term “CVE” in their titles, a fact that DOJ and DHS officials attributed to CVE being a relatively new concept, or that the trainings had been developed for purposes other than CVE. Nonetheless, they provided some instruction on at least one of the three CVE-related training topics we identified, and thus are considered CVE-related for the purpose of this review.
Although the CVE-related trainings that DOJ and DHS provided collectively addressed all three CVE-related training topics, the trainings more frequently addressed the phenomenon of violent extremism and cultural competency than community engagement. The specific topics addressed by each training DOJ and DHS components provided during fiscal years 2010 and 2011 are described in the tables that follow. In addition, the DOJ grant-funded State and Local Anti-Terrorism Training (SLATT) Program provided CVE-related training to approximately 11,000 state and local law enforcement officials. Within DOJ, the FBI, the Community Relations Service (CRS), and U.S. Attorneys’ Offices (USAO) provided CVE-related training directly to state and local entities during fiscal years 2010 and 2011. In total, these entities provided CVE-related training to more than 15,000 state and local law enforcement and community members. More specifically, the FBI National Academy, the FBI National Joint Terrorism Task Force (NJTTF) Program, select FBI field offices, CRS, and about half of USAOs (48 of 93 offices) provided CVE-related training to law enforcement. In addition, the FBI’s Citizens’ Academy and Community Relations Executive Seminar Training (CREST) outreach programs provided CVE-related training to community members. Tables 4, 5, and 6 provide more detailed information on these programs and trainings. Although we determined that CRS provided CVE-related training according to our framework, CRS officials emphasized that the service’s mission does not include any national security, counterterrorism, or CVE-related training efforts. CRS works with communities to help address tension associated with allegations of discrimination on the basis of race, color, or national origin. CRS also works with communities to develop strategies to prevent and respond more effectively to alleged violent hate crimes on the basis of race, color, national origin, gender, gender identity, sexual orientation, religion, or disability.
According to CRS officials, through its work preventing hate crimes, CRS helps develop relationships among Arab, Muslim, and Sikh communities who may be targeted for hate violence by violent extremists, including supremacists, and other community members, as well as local government and law enforcement officials. As a result, CRS does not conduct activities or programs with the express goal of CVE, but recognizes that its ability to help promote dialogue and develop strong relationships to create a sense of inclusion in communities may have ancillary CVE benefits in preventing violent extremism. Within DHS, the Office for Civil Rights and Civil Liberties Institute and I&A provided CVE-related training to approximately 3,410 state and local intelligence and law enforcement officials during fiscal years 2010 and 2011. This training consisted of two classroom-based courses that the Office for Civil Rights and Civil Liberties Institute provided on about 40 occasions; one CVE-focused workshop that the I&A State and Local Program Office hosted; and 17 briefings that the I&A Homegrown Violent Extremism Branch (HVEB) provided in coordination with the FBI and NCTC at fusion centers and fusion center conferences. Table 7 provides more detailed information on each of these trainings. DOJ and DHS also administered four grant programs during fiscal years 2010 and 2011 that provided funding for which CVE-related training was an eligible expense: (1) the DOJ Community Policing Development (CPD) Program, (2) the DOJ Edward Byrne Memorial Justice Assistance Grant (JAG) Program, (3) the DHS Homeland Security Grant Program (HSGP), and (4) the DOJ SLATT Program. We reviewed grant documentation for CPD grant projects that DOJ identified as potentially CVE-related and determined that they were not used to pay for training that was CVE-related according to our framework.
Information DHS and DOJ collect on grant projects funded through the HSGP and JAG programs suggests that minimal, if any, funds from these programs were used for CVE-related training purposes; however, the level of detail in the information the departments collect from HSGP and JAG grantees is not sufficient to reliably and conclusively make this determination. In fiscal years 2010 and 2011, SLATT provided CVE-related training to approximately 11,000 state and local officials. Additional details regarding this training are provided in table 8. Table 9 presents a summary of the 77 state and local participant concerns that we identified during our review of course evaluation forms that DHS and DOJ provided to us. In addition to the contact named above, Kristy N. Brown, Assistant Director, and Taylor Matheson, Analyst-In-Charge, managed this assignment. Melissa Bogar and Lerone Reid made significant contributions to this report. Gustavo Crosetto, Pamela Davidson, Richard Eiserman, Eric Hauswirth, Thomas Lombardi, Linda Miller, Jan Montgomery, and Anthony Pordes also provided valuable assistance.
DHS and DOJ have responsibility for training state and local law enforcement and community members on how to defend against violent extremism--ideologically motivated violence to further political goals. Community members and advocacy organizations have raised concerns about the quality of some CVE-related training that DOJ and DHS provide or fund. As requested, GAO examined (1) the extent to which DHS and DOJ have identified and communicated topics that CVE-related training should address to their components and state and local partners, (2) any concerns raised by state and local partners who have participated in CVE-related training provided or funded by DHS or DOJ, and (3) actions DHS and DOJ have taken to improve the quality of CVE-related training. GAO reviewed relevant documents, such as training participant feedback forms and DHS and DOJ guidance; and interviewed relevant officials from DHS and DOJ components. This is a public version of a sensitive report that GAO issued in September 2012. Information that the FBI deemed sensitive has been redacted. The Department of Homeland Security (DHS) has identified and is communicating to its components and state and local partners topics that the training on countering violent extremism (CVE) it provides or funds should cover; in contrast, the Department of Justice (DOJ) has not identified what topics should be covered in its CVE-related training. According to a DHS official who leads DHS's CVE efforts, identifying topics has helped to provide a logical structure for DHS's CVE-related training efforts. According to DOJ officials, even though they have not specifically identified what topics should be covered in CVE-related training, they understand internally which of the department's training is CVE-related and contributes either directly or indirectly to the department's training responsibilities under the CVE national strategy.
However, over the course of this review, the department generally relied upon the framework GAO developed for potential CVE-related training topics to determine which of its existing training was CVE-related. Further, because DOJ has not identified CVE-related training topics, DOJ components have had challenges in determining the extent to which their training efforts contribute to DOJ's responsibilities under the CVE national strategy. In addition, officials who participated in an interagency working group focusing on ensuring CVE-related training quality stated that the group found it challenging to catalogue federal CVE-related training because agencies' views differed as to what CVE-related training includes. The majority of state and local participant feedback on training that DHS or DOJ provided or funded and that GAO identified as CVE-related was positive or neutral, but a minority of participants raised concerns about biased, inaccurate, or offensive material. DHS and DOJ collected feedback from 8,424 state and local participants in CVE-related training during fiscal years 2010 and 2011, and 77--less than 1 percent--provided comments that expressed such concerns. According to DHS and DOJ officials, agencies used the feedback to make changes where appropriate. DOJ's Federal Bureau of Investigation (FBI) and other components generally solicit feedback for more formal, curriculum-based training, but the FBI does not require this for activities such as presentations by guest speakers because the FBI does not consider this to be training. Similarly, DOJ's United States Attorneys' Offices (USAO) do not require feedback on presentations and similar efforts. Nevertheless, FBI field offices and USAOs covered about 39 percent (approximately 9,900) of all participants in DOJ CVE-related training during fiscal years 2010 and 2011 through these less formal methods, yet only 4 of 21 FBI field offices and 15 of 39 USAOs chose to solicit feedback on such methods. 
GAO has previously reported that agencies need to develop systematic evaluation processes in order to obtain accurate information about the benefits of their training. Soliciting feedback for less formal efforts on a more consistent basis could help these agencies ensure their quality. DOJ and DHS have undertaken reviews and developed guidance to help improve the quality of CVE-related training. For example, in September 2011, the DOJ Deputy Attorney General directed all DOJ components and USAOs to review all of their training materials, including those related to CVE, to ensure they are consistent with DOJ standards. In addition, in October 2011, DHS issued guidance that covers best practices for CVE-related training and informs recipients of DHS grants who use the funding for training involving CVE on how to ensure high-quality training. Since the departments' reviews and efforts to implement the guidance they have developed are relatively new, it is too soon to determine their effectiveness. GAO recommends that DOJ identify and communicate principal CVE-related training topics and that FBI field offices and USAOs consider soliciting feedback more consistently. DOJ agreed that it should more consistently solicit feedback, but disagreed that it should identify CVE training topics because DOJ does not have primary responsibility for CVE-related training, among other things. GAO believes this recommendation remains valid as discussed further in this report.
Prior to enactment of the Food and Drug Administration Modernization Act of 1997 (FDAMA), which first established incentives for conducting pediatric drug studies in the form of additional market exclusivity, few drugs were studied for pediatric use. As a result, there was a lack of information on optimal dosage, possible side effects, and the effectiveness of drugs for pediatric use. For example, while physicians typically had determined drug dosing for children based on their weight, pediatric drug studies conducted under FDAMA showed that in many cases this was not the best approach. To continue to encourage pediatric drug studies, the Best Pharmaceuticals for Children Act (BPCA) was enacted on January 4, 2002, just after the pediatric exclusivity provisions of FDAMA expired on January 1, 2002. BPCA reauthorized and enhanced the pediatric exclusivity provisions of FDAMA. Like FDAMA, BPCA allows FDA to grant drug sponsors pediatric exclusivity—6 months of additional market exclusivity—in exchange for conducting and submitting reports on pediatric drug studies. The goal of the program is to develop additional health information on the use of such drugs in pediatric populations so they can be administered safely and effectively to children. This incentive is similar to that provided by FDAMA; however, BPCA provides additional mechanisms to provide for pediatric studies of drugs that drug sponsors decline to study. The process for initiating pediatric studies under BPCA formally begins when FDA issues a written request to a drug sponsor to conduct pediatric drug studies for a particular drug. FDA may issue a written request after it has reviewed a proposed pediatric study request from a drug sponsor, in which the drug sponsor describes the pediatric drug study or studies it proposes doing in return for pediatric exclusivity.
In deciding whether to approve the proposed pediatric study request and issue a written request, FDA must determine if the proposed studies will produce information that may result in health benefits for children. Alternatively, FDA may determine on its own that there is a need for more research on a drug for pediatric use and issue a written request without having received a proposed pediatric study request from the drug sponsor. A written request outlines, among other things, the nature of the pediatric drug studies that the drug sponsor must conduct in order to qualify for pediatric exclusivity and a time frame by which those studies should be completed. When a drug sponsor accepts the written request and completes the pediatric drug studies, it submits reports to FDA describing the studies and the study results. BPCA specifies that FDA generally has 90 days to review the study reports to determine whether the pediatric drug studies met the conditions outlined in the written request. If FDA determines that the pediatric drug studies conducted by the drug sponsor were responsive to the written request, it will grant a drug pediatric exclusivity regardless of the study findings. Figure 1 illustrates the process under BPCA. To further the study of drugs when drug sponsors decline a written request, BPCA includes two provisions that did not exist under FDAMA. First, if a drug sponsor declines to conduct the pediatric drug studies requested by FDA for an on-patent drug, BPCA provides for FDA to refer the study of that drug to FNIH, which might then agree to fund the studies. Second, if a drug sponsor declines a request to study an off-patent drug, BPCA provides for referral of the study to NIH for funding. FDA cannot extend pediatric exclusivity in response to written requests for any drugs for which the drug sponsor declined to conduct the requested pediatric drug studies. 
When drug sponsors decline written requests for studies of on-patent drugs, BPCA provides for FDA to refer the study of those drugs to FNIH for funding, when FDA believes that the pediatric drug studies are still warranted. FNIH, which was authorized by Congress to be established in 1990, is guided by a board of directors and began formal operations in 1996 to support the mission of NIH and advance research by linking private sector donors and partners to NIH programs. Although FNIH is a nonprofit corporation that is independent of NIH, FNIH and NIH collaborate to fund certain projects. FNIH has raised approximately $300 million from the private sector over the past 10 years to support four general types of projects: (1) research partnerships; (2) educational programs and projects for fellows, interns, and postdoctoral students; (3) events, lectures, conferences, and communication initiatives; and (4) special projects. Included in these funds is $4.13 million that FNIH raised as of December 2005 to fund pediatric drug studies under BPCA. The majority of FNIH’s funds are restricted by donors for specific projects and cannot be reallocated. In recent years, appropriations of $500,000 were authorized to FNIH annually. To further the study of off-patent drugs, NIH—in consultation with FDA and other experts—develops a list of drugs, including off-patent drugs, which the agency believes are in need of study in children. NIH lists these drugs annually in the Federal Register. FDA may issue written requests for those drugs on the list that it determines to be most in need of study. If the drug sponsor declines or fails to respond to the written request, NIH can contract for, and fund the conduct of, the pediatric drug studies. 
These pediatric drug studies could then be conducted by qualified universities, hospitals, laboratories, contract research organizations, federally funded programs such as pediatric pharmacology research units, or other public or private institutions or individuals. Drug sponsors generally decline written requests for off-patent drugs because the financial incentives are considerably limited. (See app. II for a description of federal efforts to encourage research on drugs for children less than 1 month of age and app. III for NIH efforts to support pediatric drug studies.)

Pediatric drug studies often reveal new information about the safety or effectiveness of a drug, which could indicate the need for a change to its labeling. Generally, the labeling includes important information for health care providers, including proper uses of the drug, proper dosing, and possible adverse effects that could result from taking the drug. FDA may determine that the drug is not approved for use by children, which would be reflected in any labeling changes. According to FDA officials, in order to be considered for pediatric exclusivity, a drug sponsor typically submits results from pediatric drug studies in the form of a “supplemental new drug application.” BPCA specifies that study results, when submitted as part of a supplemental new drug application, are subject to FDA’s performance goals for a scientific review, which in this case is 180 days. FDA’s processes for reviewing study results submitted under BPCA for consideration of labeling changes are not unique to BPCA. These are the same processes the agency would use to review any drug study results in consideration of labeling changes. FDA’s action on the application can include approving the application, determining that the application is approvable (pending the submission of additional information from the sponsor), or determining that the application is not approvable.
If studies demonstrate that an approved drug is not safe or effective for pediatric use, this information would be reflected in the drug’s labeling. With a determination that the application is approvable, FDA communicates to the drug sponsor that some issues need to be resolved before the application can be approved and describes what additional work is necessary to resolve the issues. This might require that drug sponsors conduct additional analyses. However, this communication would complete the scientific review cycle. When a drug sponsor resubmits the application with the additional analyses, a new scientific review cycle begins. As a result, multiple scientific review cycles might be necessary, increasing the time between initial submission of the application, which includes the pediatric study reports, and approval of a labeling change. If, during FDA’s review of the study report submitted as part of the application, the agency determines that the application is approvable and the only unresolved issue is labeling, FDA and the drug sponsor must attempt to reach agreement on labeling changes within 180 days after the application is submitted to FDA. If FDA and the drug sponsor cannot reach agreement, FDA must refer the matter to its Pediatric Advisory Committee, which would convene and provide recommendations to the Commissioner on the appropriate changes to the drug’s labeling. The Commissioner would then consider the committee’s recommendations in making the final determination on the proper labeling.

Most of the on-patent drugs for which FDA requested pediatric drug studies under BPCA were being studied, but no studies resulted when the requests were declined by drug sponsors. Of the 214 on-patent drugs for which FDA requested pediatric drug studies from January 2002 through December 2005, drug sponsors agreed to study 173 (81 percent).
Of the 41 on-patent drugs that drug sponsors declined to study, FDA referred 9 to FNIH for funding and the foundation had not funded any of those studies as of December 2005. From January 2002 through December 2005, FDA issued 214 written requests for on-patent drugs to be studied under BPCA, and drug sponsors agreed to conduct pediatric drug studies for 173 (81 percent) of those. The remaining 41 written requests were declined. (See app. IV for details about the study of off-patent drugs under BPCA and app. V for a detailed description of the status of all written requests issued by FDA.) Drug sponsors completed pediatric drug studies for 59 of the 173 accepted written requests—studies for the remaining 114 written requests were ongoing—and FDA made a pediatric exclusivity determination for 55 of those through December 2005. Of those 55 written requests, 52 (95 percent) resulted in FDA granting pediatric exclusivity. Figure 2 shows the status of written requests issued under BPCA for the study of on-patent drugs, from January 2002 through December 2005. (See app. VI for a description of the complexity of pediatric drug studies conducted under BPCA.)

Under BPCA, when a written request to study an on-patent drug is declined, the study of the drug may be referred to FNIH. However, FNIH is limited in its ability to fund drug studies by its available funds. Through December 2005, drug sponsors declined written requests issued under BPCA for 41 on-patent drugs. FDA referred 9 of these 41 written requests (22 percent) to FNIH for funding. FNIH had not funded the study of any of these drugs. NIH has estimated that the cost of studying the drugs that were referred to FNIH for study would exceed $43 million (see table 1). FNIH has been raising funds for the study of drugs referred under BPCA at a rate of approximately $1 million per year.
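The status counts reported above are internally consistent; the following minimal sketch (illustrative only, using the counts stated in this report) reproduces the reported percentages:

```python
# Written requests for on-patent drugs under BPCA,
# January 2002 through December 2005 (counts as reported).
issued = 214
accepted = 173
declined = issued - accepted        # 41 written requests declined
completed = 59                      # studies completed
ongoing = accepted - completed      # 114 studies ongoing
determinations = 55                 # pediatric exclusivity determinations made
granted = 52                        # pediatric exclusivity granted
referred_to_fnih = 9                # declined requests referred to FNIH

def pct(part, whole):
    """Percentage rounded to the nearest whole number."""
    return round(100 * part / whole)

print(pct(accepted, issued))            # 81 (percent of requests accepted)
print(pct(granted, determinations))     # 95 (percent of determinations granting exclusivity)
print(pct(referred_to_fnih, declined))  # 22 (percent of declined requests referred)
```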
Most drugs—about 87 percent—that have been granted pediatric exclusivity under BPCA have had labeling changes as a result of the pediatric drug studies conducted under BPCA. Pediatric drug studies conducted under BPCA showed that children may have been exposed to ineffective drugs, ineffective dosing, overdosing, or side effects that were previously unknown. However, the process for reviewing study results and completing labeling changes was sometimes lengthy, particularly when FDA required additional information to support the changes. Of the 52 drugs studied and granted pediatric exclusivity under BPCA from January 2002 through December 2005, 45 (about 87 percent) had labeling changes as a result of the pediatric drug studies. FDA officials told us that labeling changes were not made for the remaining 7 (about 13 percent) drugs granted pediatric exclusivity, generally because data provided by the pediatric drug studies did not support labeling changes. In addition, 3 other drugs had labeling changes prior to FDA making a decision on granting pediatric exclusivity. FDA officials said these labeling changes were made prior to determining whether pediatric exclusivity should be granted because the pediatric drug studies provided important safety information that should be reflected in the labeling without waiting until the full study results were submitted or pediatric exclusivity was determined. Pediatric drug studies conducted under BPCA have shown that the way that some drugs were being administered to children potentially exposed them to an ineffective therapy, ineffective dosing, overdosing, or previously unknown side effects—including some that affect growth and development. The labeling for these drugs was changed to reflect these study results. Table 2 shows some of these drugs and illustrates these types of labeling changes. 
FDA officials said that the agency has been working to increase the amount of information included in drug labeling, particularly when pediatric drug studies indicate that an approved drug may not be safe or effective for pediatric use. Other drugs have had labeling changes indicating that the drug may be used safely and effectively by children in certain dosages or forms. Typically, this resulted in the drug labeling being changed to indicate that the drug was approved for use by children younger than those for whom it had previously been approved. In other cases, the changes reflected a new formulation of a drug, such as a syrup that was developed for pediatric use, or new directions for preparing the drug for pediatric use were identified during the pediatric drug studies conducted under BPCA. (See table 3 for examples of drugs with this new type of information.) Although FDA generally completed its first scientific review of study results submitted as a supplemental new drug application—including consideration of labeling changes—within its 180-day goal, the process for completing the review, including obtaining sufficient information to support and approve labeling changes, sometimes took longer. For the 45 drugs granted pediatric exclusivity that had labeling changes, it took an average of almost 9 months after study results were first submitted to FDA for the sponsor to submit and the agency to review all of the information it required and agree with the drug sponsor to approve the labeling changes. For 13 drugs (about 29 percent), FDA completed this scientific review process and FDA approved labeling changes within 180 days. It took from 181 to 187 days to complete the scientific review process and to approve labeling changes for 14 drugs (about 31 percent). For the remaining 18 drugs (about 40 percent), it took from 238 to 1,055 days for FDA to complete the scientific review process and approve labeling changes. 
For 7 of those drugs, it took more than a year to complete the scientific review process and approve labeling changes. To determine whether and how drug labeling should be changed, FDA conducts a scientific review of the study results that are submitted to the agency by the drug sponsor. Included with the study results is the drug sponsor’s proposal for how the labeling should be changed. FDA can either accept the proposed wording or propose alternative wording. For some drugs, however, the process does not end with FDA’s first scientific review. While the first scientific reviews were generally completed within 180 days, for the 18 drugs that took 238 days or more, FDA determined that it needed additional information from the drug sponsors in order to be able to approve the applications. This often required that the drug sponsors conduct additional analyses or pediatric drug studies. FDA officials said they could not approve any changes to drug labeling until the drug sponsors provided this information. When FDA completed its review of the information that was originally submitted and requested additional information from the drug sponsors, the initial 180-day scientific review ended. A new 180-day scientific review began when the drug sponsors submitted the additional information to FDA. Drug sponsors sometimes took as long as 1 year to gather the additional necessary data and respond to FDA’s requests. This time did not count against FDA’s 180-day goal to complete its scientific review and approve labeling changes because a new 180-day scientific review begins after the required information is submitted. However, we counted the total number of days between submission of study reports and approval of labeling changes. FDA considers itself in conformance with its review goals even though the entire process may take longer than 180 days. 
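The distribution of review times described above can be tallied the same way (a minimal sketch using the counts stated in this report):

```python
# Time from first submission of study results to approval of labeling
# changes, for the 45 drugs granted exclusivity that had labeling changes.
within_goal = 13       # completed within FDA's 180-day goal
slightly_over = 14     # 181 to 187 days
well_over = 18         # 238 to 1,055 days; additional sponsor information required

total = within_goal + slightly_over + well_over
assert total == 45     # all 45 drugs with labeling changes accounted for

def pct(n):
    """Share of the 45 drugs, rounded to the nearest percent."""
    return round(100 * n / total)

print(pct(within_goal), pct(slightly_over), pct(well_over))  # 29 31 40
```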
BPCA provides a dispute resolution process to be used if FDA and the drug sponsor cannot reach agreement on labeling changes within 180 days of when FDA received the application and the only issue holding up FDA approval is the wording of the drug labeling. However, FDA officials said they have never used this process because labeling has never been the only unresolved issue for those applications whose review period exceeded 180 days. Agency officials told us that the possibility of referral to the Pediatric Advisory Committee facilitates the agency’s negotiations with drug sponsors on labeling changes because it is something that drug sponsors want to avoid. Reminding drug sponsors that such a process exists has motivated them to complete labeling change negotiations by reaching agreement with FDA. (See app. VII for a discussion of strengths of BPCA identified by FDA and NIH, as well as suggestions for ways to improve BPCA.)

Drugs were studied under BPCA for their safety and effectiveness in treating children for a wide range of diseases, including some that are common, serious, or life threatening. We found that the drugs studied under BPCA represented more than 17 broad categories of disease. The category that had the most drugs studied under BPCA was cancer, with 28 drugs. In addition, there were 26 drugs studied for neurological and psychiatric disorders, 19 for endocrine and metabolic disorders, 18 related to cardiovascular disease—including drugs related to hypertension, and 17 related to viral infections. Written requests for some types of drugs were more frequently declined by the drug sponsor than others. For example, 36 percent of written requests for pulmonary drugs and 41 percent of written requests for drugs that treat nonviral infection were declined. In contrast, 19 percent of written requests were declined overall.
Some of the drugs studied under BPCA were for the treatment of diseases that are common, including those for the treatment of asthma and allergies. Analysis of two national databases shows that about half of the 10 most frequently prescribed drugs for children were studied under BPCA. Based on a survey of prescriptions written by physicians in 2004, 4 of the 10 drugs most frequently prescribed for children were studied under BPCA. A survey of families and their medical providers in 2003 found that 5 of the 10 drugs most frequently prescribed for children were studied under BPCA. In addition, several of the drugs studied under BPCA were for the treatment of diseases that are serious or life threatening to children, such as hypertension, cancer, HIV, and influenza. Table 4 provides information on some of the drugs studied for pediatric use and what is known about the diseases that are relevant to children. Some of the drugs were studied under BPCA to treat complicating conditions in children who had other diseases, while others treated rare diseases. For example, a drug was studied for the treatment of painful bladder spasms in children who have spina bifida. Other drugs were studied to treat overactive bladder symptoms in children with spina bifida and cerebral palsy, to treat children who require chronic pain management because of severe illnesses such as cancer, and to treat partial seizures and epilepsy in children who require more than one drug to control seizures. About 12 percent of the 52 drugs that were granted pediatric exclusivity under BPCA were studied for the treatment of rare diseases, including certain types of leukemia, juvenile rheumatoid arthritis, and narcolepsy.

HHS provided written comments on a draft of this report, which we have reprinted in appendix VIII. HHS stated that the draft report provided a significant amount of data and analysis and generally explains the BPCA process. HHS also made four general comments.
First, HHS commented that the report does not sufficiently acknowledge the success of BPCA. HHS noted that BPCA provides additional incentives for the study of on- patent drugs, a process for the study of off-patent drugs, a safety review of all drugs granted pediatric exclusivity, and the public dissemination of information from pediatric studies conducted. HHS concluded that BPCA has generated more clinical information for the pediatric population than any other legislative or regulatory effort to date. Second, HHS commented that the report confuses FDA’s process for reviewing reports of drug studies conducted under BPCA with time frames for the labeling dispute resolution process outlined in BPCA. HHS suggested that we did not sufficiently acknowledge that some of the time it takes for FDA to approve labeling changes includes time spent by sponsors collecting and submitting additional information. Third, in commenting on our finding that few written requests included neonates, HHS pointed out that written requests for 9 drugs required the inclusion of “newborns” and written requests for 13 drugs required the inclusion of infants (children under 4 months of age). Fourth, HHS commented that we failed to mention that exclusivity attaches to patents as well as existing market exclusivity. We believe that the draft report sent to HHS for comment accurately and adequately addressed each of the four issues upon which HHS commented. An explicit discussion of the overall success of BPCA was outside the scope of this report, as directed by the BPCA mandate and as discussed with the committees of jurisdiction. Nevertheless, the draft report extensively discussed HHS accomplishments such as the number of studies conducted, the number and importance of labeling changes that FDA approved, and the wide range of diseases, including some that are common, serious, or life threatening to children, for which drugs were studied. 
In drafting our report we believe we clearly distinguished between FDA’s goals for completing its review and approval of drug applications and the time frames mandated for using the labeling dispute resolution process as outlined in BPCA. In finding that the process for approving labeling changes is lengthy, we clearly stated that the process included time spent during FDA’s initial review as well as time drug sponsors took to respond to FDA’s requests for additional information, which was as long as 1 year. We also acknowledged that FDA completed its initial review of applications within its 180-day goal. We stated in the draft that FDA has never used the dispute resolution process because labeling has never been the only issue preventing FDA’s approval of a label for more than 180 days. Nevertheless, we have included additional language in this report to further clarify the distinction between FDA’s review process for pediatric applications and labeling dispute resolution. Our draft clearly stated that while written requests issued under BPCA required the inclusion of neonates, the majority of those on-patent written requests—32 of 36—had been first issued under FDAMA. It is therefore not appropriate to attribute the inclusion of neonates in these written requests to BPCA. Further, we included in our count of written requests requiring the inclusion of neonates the 9 written requests that HHS referred to in its comments as requiring the inclusion of newborns. We did not specifically include in our counts the other 13 written requests mentioned in HHS’s comments. According to data provided by FDA, 1 of these written requests was not issued under BPCA, and 2 others were counted among the 9 mentioned above. The remaining 10 written requests were not specifically included in our counts, because the written requests were first issued prior to BPCA and do not specifically require the inclusion of neonates. 
The written requests to which HHS referred in its comments required the inclusion of very young children, age 0-4 months. Our draft report had indicated that written requests requiring the inclusion of young children might produce data about neonates. Our draft report included language that indicated the conditions under which pediatric exclusivity applies. We added language to the report to further clarify the conditions under which pediatric exclusivity can be granted. HHS provided technical comments, which we incorporated as appropriate. HHS also stated that many of the oral comments provided by FDA were not reflected in the draft report sent to HHS for formal comment. Some of FDA’s suggested revisions and comments were outside the scope of the report, and in some instances we chose to use alternative wording to that suggested by FDA for readability and consistency. As we did with HHS’s general and technical comments on this report, we previously incorporated FDA’s oral comments as appropriate.

We are sending copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7119 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX.
In this report, we (1) assessed the extent to which pediatric drug studies were being conducted for on-patent drugs under the Best Pharmaceuticals for Children Act (BPCA), including when drug sponsors declined to conduct the studies; (2) evaluated the impact of BPCA on labeling of drugs for pediatric use and the process by which the labeling was changed; and (3) illustrated the range of diseases treated by the drugs studied under BPCA. Our review focused primarily on those on-patent drugs for which written requests were issued or reissued by the Department of Health and Human Services’ (HHS) Food and Drug Administration (FDA) from January 2002, when BPCA was enacted, through December 2005. Actions taken on these drugs after December 2005 (such as a determination of pediatric exclusivity or a labeling change) were not included in our review. In addition, we reviewed some summary data available about the number of written requests issued under the Food and Drug Administration Modernization Act of 1997 (FDAMA) from January 1998 through December 2001. We also reviewed pertinent laws, regulations, and legislative histories. To assess the extent to which pediatric drug studies were being conducted for on-patent drugs under BPCA, including when the drug sponsors declined to conduct the studies, we identified written requests issued for on-patent drugs from January 2002 through December 2005, and determined which of those were declined by drug sponsors. We also reviewed data provided by FDA on the nature of the pediatric drug studies that were conducted in response to the written requests issued under BPCA. We also examined notices published in the Federal Register, identifying the drugs designated by HHS’s National Institutes of Health (NIH) as most in need of study in children. 
We reviewed data provided to us by the Foundation for the National Institutes of Health (FNIH)—a nonprofit corporation independent of NIH—about funding for pediatric drug studies of on-patent drugs. We interviewed officials from FDA, NIH, and FNIH to understand the processes by which pediatric drug studies are prioritized by the agencies, written requests are issued, drug sponsors respond to written requests, study results are submitted to FDA, and pediatric exclusivity determinations are made. We also reviewed background material describing the role of FNIH in supporting research on children and the funding available for such research. To evaluate the impact of BPCA on the labeling of drugs for pediatric use and the process by which the labeling was changed, we reviewed data provided to us by FDA summarizing the changes made from January 2002 through December 2005 for drugs studied under BPCA. We also used the dates that the changes were approved in order to calculate how long it took for FDA to approve labeling changes. We interviewed officials from FDA about the process by which FDA approves labeling changes as well as the reasons why some drugs did not have labeling changes. To illustrate the range of diseases treated by the drugs studied under BPCA, we reviewed data provided by FDA about the disease each drug was proposed to treat. We also examined data from the Medical Expenditure Panel Survey—administered by the Agency for Healthcare Research and Quality—and the National Ambulatory Medical Care Survey—administered by the National Center for Health Statistics—to assess the extent to which the drugs studied under BPCA were prescribed to children. To obtain other information that is provided in appendixes to this report, we collected and analyzed a variety of data from FDA, NIH, and FNIH about written requests and pediatric studies for both on- and off-patent drugs. 
To obtain a broad perspective on the many issues addressed in our report, we also interviewed representatives of the pharmaceutical industry and health advocates—such as representatives of the American Academy of Pediatrics, the Pharmaceutical Research and Manufacturers of America, the Generic Pharmaceutical Association, the National Organization for Rare Disorders, Public Citizen, the Elizabeth Glaser Pediatric AIDS Foundation, and the Tufts Center for the Study of Drug Development. We evaluated the data used in this report and determined that they were sufficiently reliable for our purposes. We conducted our work from September 2005 through March 2007 in accordance with generally accepted government auditing standards.

FDA and NIH have engaged in efforts to increase the inclusion of neonates—children under the age of 1 month—in pediatric drug studies. As part of its encouragement of pediatric studies in general, BPCA identified neonates as a specific group to be included in studies, as appropriate. An examination of the written requests revealed that only 4 of 36 written requests for on-patent drugs first issued under BPCA required the inclusion of neonates. Further, no written requests for on-patent drugs and only two written requests for off-patent drugs have required the inclusion of neonates since FDA and NIH held a workshop that began their major initiative in this regard in 2004. In 2003, NIH conducted three workshops focused on increasing the inclusion of neonates in pediatric drug studies and discussing diseases that affect neonates. In September 2003, NIH staff met to discuss drug studies in neonatology and pediatrics with special emphasis placed on ways to better apply current knowledge in future pediatric drug studies. Two months later, NIH met with a group of experts to discuss the use of the drug dobutamine—used to treat low blood pressure—in neonates.
NIH ended 2003 with a 1-day seminar designed to address parental attitudes toward neonatal clinical trials. FDA and NIH have collaborated to develop the Newborn Drug Development Initiative (NDDI), a multiphase program intended to identify gaps in knowledge concerning neonatal pharmacology and pediatric drug study design and to explore novel designs for studies of drugs for use by neonates. The NDDI is intended to consist of a series of meetings that will help frame state-of-the-art approaches and research needs. After forming various discussion groups in February 2003, the agencies held a workshop in March 2004 to help frame issues and challenges associated with designing and conducting drug studies with neonates. The workshop addressed ethical issues and drug prioritization in four specialty areas: pain control, pulmonology (the study of conditions affecting the lungs and breathing), cardiology (the study of conditions affecting the heart), and neurology (the study of disorders of the brain and central nervous system). For example, participants in the pain control group reviewed data demonstrating that neonates who undergo multiple painful procedures and receive medication to treat pain may differ in their development of pain receptors compared to those who do not undergo such procedures and treatment. FDA officials said that FDA would apply the findings from the NDDI workshop to written requests for pediatric drug studies in the four specialty areas. NIH officials said that the Pediatric Formulations Initiative is a related effort. They said that both initiatives are long-standing activities that engage in various efforts to enhance information dissemination to improve all pediatric drug studies. According to NIH officials, these initiatives have resulted in numerous publications. FDA and NIH efforts to increase the inclusion of neonates in pediatric drug studies conducted under BPCA have been limited. 
Through 2005, 9 of 16 (56 percent) written requests for off-patent drugs required the inclusion of neonates in the pediatric drug studies. NIH is currently funding pediatric drug studies for four of these written requests. Similarly, 36 of 214 (17 percent) written requests for the study of on-patent drugs issued from January 2002 through December 2005 included a requirement to study neonates, but only 4 of those 36 (11 percent) were first issued under BPCA. The remaining 32 (89 percent) written requests were originally issued under FDAMA, which did not place an emphasis on the inclusion of neonates in pediatric drug studies. Further, all of the written requests for on-patent drugs requiring the inclusion of neonates were issued in 2003, prior to the NDDI. In addition, only two of the written requests for off-patent drugs were issued after the NDDI, and studies for neither of those have been funded. According to information provided by FDA, no written requests for on-patent drugs issued from January 2004 through December 2005 required the inclusion of neonates. FDA officials indicated, however, that they receive information about neonates in response to written requests that do not specifically target them. According to these officials, many written requests require that children from birth through 2 years of age be studied. These pediatric drug studies therefore may include neonates. In addition, inclusion of neonates in some studies may not be appropriate for medical or ethical reasons.

BPCA was designed in part to increase pediatric drug studies through federal efforts. NIH has engaged in several efforts to support pediatric drug studies since the passage of BPCA. While NIH plays an important role in providing funding for research for children, the amount provided by NIH to support such activities has not increased significantly under BPCA.
Since the enactment of BPCA, NIH funding for children’s research has increased from $3.1 billion in fiscal year 2003 to $3.2 billion in fiscal year 2005. These figures represent about 11 percent of NIH’s total budget each year from 2003 through 2005. The research funds for children were distributed by most of NIH’s 28 institutes, centers, and offices. For example, in 2005, 24 of these institutes, centers, and offices funded research on children. One institute, the National Institute of Child Health and Human Development, was responsible for about 26 percent of funding for pediatric research—the largest proportion of NIH’s research funding for children. This institute organizes study design teams with FDA and other relevant NIH institutes, conducts contracting activities, and modifies drug labeling for specific ages and diseases.

The number of pediatric pharmacology research units—initiated by NIH—devoted to studies for children has remained the same under BPCA. NIH provides about $500,000 annually to each of these research units to provide the infrastructure for independent investigators to initiate and collaborate on studies and clinical trials with private industry and NIH. The number of such research units grew from 7 in 1994 to 13 in 1999 to support the infrastructure for collaborative efforts of pharmacologists to conduct clinical trials that include children. While the number has not changed since the passage of BPCA in 2002, NIH officials said that staff from these units often move on to hospitals throughout the country and enhance the pediatric research capacity nationwide. In addition, they said that an overall increase in pediatric research capacity nationwide in recent years has made it possible to conduct pediatric clinical trials at a number of other sites. They said that, on average, these pediatric pharmacology research units conduct more than 50 pediatric drug studies annually.
Of these, as many as 20 pediatric drug studies are funded by drug sponsors. NIH officials told us that of the seven off-patent drugs being studied under BPCA with NIH funding through 2005, two were being conducted by these research units. NIH officials said that since on-patent written requests are not published, the full contribution of the research units under BPCA cannot be ascertained. NIH has sponsored a number of forums designed to increase the number of children included in drug studies. As shown in table 5, these forums generated advice and suggestions for NIH concerning drug testing from health experts, process improvements on drug studies and medication use with the pediatric community, and explanations of models and data related to research for children. NIH has also conducted meetings and entered numerous intra-agency and FDA agreements to strengthen its relationship with FDA and establish a firm commitment to study medical issues relevant to children. For example, NIH conducted a series of internal meetings in fiscal year 2004 to identify ongoing pediatric drug studies by the National Institute of Mental Health. As an outcome of these meetings, NIH identified and utilized data sets related to the study of lithium as it is used for the treatment of bipolar disorder in children. NIH will use this information to enhance its current understanding of the drug’s therapeutic benefit. In addition to providing a mechanism to study on-patent drugs, BPCA also contains provisions for the study of off-patent drugs. FDA initiates its process by issuing a written request to the drug sponsor to study an off-patent drug. If the sponsor declines to study the drug, FDA can refer the study of the drug to NIH for funding. NIH initiates the BPCA process for off-patent drugs by prioritizing the list of drugs that need to be studied. BPCA includes a provision that provides for the funding of the study of off-patent drugs by NIH.
BPCA requires that NIH—in consultation with FDA and other experts—publish an annual list of drugs for which additional studies are needed to assess their safety and effectiveness in children. FDA can then issue a written request for pediatric studies of the off-patent drugs on the list. If the written request is declined by the drug sponsor, NIH can fund the studies. Few off-patent drugs identified by NIH as in need of study for pediatric use have been studied. From 2003 through 2006, NIH has listed off-patent drugs that were recommended for study by experts in pediatric research and clinical practice. By 2005, NIH had identified 40 off-patent drugs that it believed should be studied for pediatric use. Through 2005, FDA issued written requests for 16 of these drugs. All but one of these written requests were declined by drug sponsors. NIH funded pediatric drug studies for 7 of the remaining 15 written requests declined by drug sponsors through December 2005. NIH provided several reasons why it has not pursued the study of some off-patent drugs that drug sponsors declined to study. Concerns about the incidence of the diseases that the drugs were developed to treat, the feasibility of study design, drug safety, and changes in the drugs’ patent status have caused the agency to reconsider the merit of studying some of the drugs it identified as important for study in children. For example, in one case NIH issued a request for proposals to study a drug but received no response. In other cases, NIH is awaiting consultation with pediatric experts to determine the potential for study. Further, NIH has not received appropriations specifically for funding pediatric drug studies under BPCA. Rather, according to agency officials, NIH uses lump sum appropriations made to various institutes to fund pediatric drug studies under BPCA. In fiscal year 2005, NIH spent approximately $25 million for these pediatric drug studies. 
NIH anticipates spending an estimated $52.5 million for pediatric drug studies following seven written requests to drug sponsors issued by FDA from January 2002 through December 2005. These pediatric drug studies were designed to take from 3 to 4 years and will be completed in 2007 at the earliest. Where possible, NIH identifies another government agency or institute within NIH that might be able to meet the requirements of the written requests and conduct the pediatric drug studies. In cases where a government agency will conduct the pediatric drug studies, NIH institutes enter into intra- or interagency agreements for the studies. If those efforts fail, the agency develops and publishes requests for proposals for others to conduct the pediatric studies. NIH anticipates spending approximately $16.0 million for the funding of pediatric drug studies of four additional off-patent drugs for which FDA did not issue written requests—and therefore are not covered by the requirements of BPCA—but three of these drugs have since been listed by NIH in the Federal Register as needing study in children. (See table 6.) The drugs whose study NIH is funding without written requests were selected because of special circumstances that raised their priority for funding. NIH funded the study of daunomycin and methotrexate—both cancer drugs—before placing them on its 2006 list of drugs for study in children. NIH officials told us that the Children’s Oncology Group of the National Cancer Institute was already working with an appropriate group of patients and was at a critical stage in developing the pediatric drug studies that would produce data for both drugs, so pediatric drug studies were funded before the drugs were placed on the priority list. NIH officials also told us that ketamine is administered to more than 30,000 children for sedation each year. Studies done in animals, however, have suggested that the drug may lead to cell death in the brain. 
As a result, the drug cannot be ethically tested in children. NIH is therefore collaborating with FDA to conduct studies in nonhuman primates. NIH officials report that methylphenidate is used by an estimated 2.5 million school-aged children to treat attention deficit hyperactivity disorder. However, a recent study suggested some potential genetic toxicity of the drug. Because of these findings, the drug was targeted as a priority and NIH was able to fund some of the planned studies related to this drug. From January 2002 through December 2005, FDA issued 214 written requests for the study of on-patent drugs. The agency also issued 16 written requests for the study of off-patent drugs. Fewer written requests were issued and more were declined by drug sponsors under BPCA than under FDAMA. From January 2002, when BPCA was enacted, through December 2005, FDA issued or reissued 214 written requests for on-patent drugs, and drug sponsors declined 41 of those. FDA issued 68 written requests under BPCA for the study of on-patent drugs, 20 (29 percent) of which were declined by the drug sponsors. FDA reissued 146 written requests for on-patent drugs that were originally issued under FDAMA because the pediatric drug studies had not been completed at the time BPCA went into effect. Included in the 146 were 21 (14 percent) written requests that were subsequently declined by the drug sponsors. Therefore, drug sponsors accepted 173 written requests for the study of on-patent drugs under BPCA during this period. Under FDAMA, FDA issued 227 written requests. Drug sponsors did not conduct pediatric drug studies or submit study results for 30 of the 227 (13 percent) written requests issued under FDAMA (see fig. 3). FDA officials offered two primary reasons why fewer written requests were issued under BPCA than under FDAMA. 
First, according to FDA officials, when FDAMA was enacted, FDA and some drug sponsors had already identified a large number of drugs that they believed needed to be studied for pediatric use. By the time BPCA was enacted, written requests for the study of these drugs had already been issued. Second, FDA officials said there was a surge of written requests prior to the sunset of FDAMA. Agency officials expect the same surge to occur prior to the sunset of the pediatric exclusivity provisions of BPCA in 2007. FDA officials also offered a number of reasons that the proportion of written requests issued under BPCA that were declined was greater than that for those issued under FDAMA. While FDA does not track the reasons that drug sponsors decline specific written requests, FDA officials expect that a major reason that the written requests were declined is that the agency sometimes requests more extensive pediatric drug studies, and therefore more costly studies, than the sponsors would like to do. This may be the case even when the drug sponsors initiated the written request process. FDA officials said that upon consideration of FDA’s written requests, drug sponsors may make a business decision not to conduct the requested pediatric drug studies because they may be too costly for the expected return associated with pediatric exclusivity. Agency officials reported that since the drugs studied under FDAMA were more likely to be those with the greatest expected financial return or the easiest to study, they are not surprised at the higher proportion of pediatric drug studies declined under BPCA. Further, under BPCA drug sponsors are required to pay user fees—as high as $767,400 in fiscal year 2006—when study results are submitted for pediatric exclusivity consideration. As a result, the process of gaining pediatric exclusivity has become more expensive than it was under FDAMA when drug sponsors were exempt from such fees for pediatric drug studies. 
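The written-request counts and decline rates discussed above can be cross-checked with a few lines of arithmetic. This is a minimal sketch: the counts come from the figures in this report, and the grouping labels are ours.

```python
# Cross-checking the written-request percentages cited in this report.
# Counts are taken directly from the text; the labels are ours.
requests = {
    # program: counts of written requests for on-patent drug studies
    "BPCA (newly issued)":        {"issued": 68,  "declined": 20},
    "BPCA (reissued from FDAMA)": {"issued": 146, "declined": 21},
    "FDAMA":                      {"issued": 227, "declined": 30},
}

for program, c in requests.items():
    accepted = c["issued"] - c["declined"]
    print(f"{program}: {accepted}/{c['issued']} accepted "
          f"({accepted / c['issued']:.0%}); {c['declined'] / c['issued']:.0%} declined")

# Sponsors accepted 48 + 125 = 173 of the 214 BPCA-era written requests.
bpca_accepted = sum(
    c["issued"] - c["declined"]
    for name, c in requests.items()
    if name.startswith("BPCA")
)
```

Recomputing reproduces the figures the report cites: a 29 percent decline rate for requests newly issued under BPCA versus 13 percent under FDAMA, and 173 accepted requests in total during the BPCA period.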
FDA officials said they are not discouraged by the increase in the number of written requests that have been declined. In 2001, FDA reported to Congress that the agency expected drug sponsors to conduct pediatric drug studies for 80 percent of written requests. The rate at which written requests for studies of on-patent drugs were accepted under BPCA—71 percent—is close to the target of 80 percent, and it is substantially larger than the 15 to 30 percent of drugs that FDA officials have reported were labeled for pediatric use prior to the authorization of pediatric exclusivity under FDAMA and BPCA. The pediatric drug studies conducted under BPCA were complex and sizable, involving a large number of study sites and children. From July 2002 through December 2005, drug sponsors submitted study reports to FDA in response to 59 written requests. FDA made pediatric exclusivity determinations for 55 of those written requests by December 2005, and most—51, or 93 percent—were made in 90 days or less. For the 59 written requests for which study results were submitted to FDA, a total of 143 pediatric drug studies were conducted at 2,860 different study sites with more than 25,000 children participating (see table 7). In December 2005, FDA projected that for the drugs for which studies had not yet been submitted for review, there would be nearly 20,000 more children participating in the studies. Officials from FDA and NIH discussed a number of important strengths of BPCA. In our interviews with industry group representatives and in a public forum, a number of suggestions have also been made for ways that BPCA could be improved. FDA officials identified a number of important strengths of BPCA. Specifically, they commented on the following:

- Economic incentives to conduct pediatric drug studies. Because of the economic incentives in BPCA, FDA officials argue that many logistical issues inherent in conducting pediatric drug studies have been overcome. FDA may also issue a written request for pediatric drug studies for rare conditions, offering an additional incentive to develop medications for rare diseases that occur only in children.
- Availability of summaries of pediatric drug studies. FDA officials reported that the public dissemination of study summaries has ensured that study information is available to the health care community and has been useful to prescribers to know what has been learned about drugs’ use in children.
- Broad scope of pediatric drug studies. BPCA allows FDA to issue written requests for pediatric drug studies for the treatment of any disease, regardless of whether the drug in question is currently indicated to treat that disease in adults. For example, FDA issued a written request for the study of a drug currently indicated to treat prostate cancer. The drug is being tested in children to see if it is effective in treating early puberty in boys.
- Use of dispute resolution as a negotiating tool in ensuring labeling changes. Although FDA has never invoked its authority under BPCA to use the dispute resolution process for making labeling changes, it has been an important negotiating tool. FDA officials indicated that when the agency has expressed its intention to use the process, the issues that had been raised in labeling negotiations were effectively resolved.
- Improved safety through focused pediatric safety reviews. BPCA’s requirement that FDA conduct additional monitoring of adverse event reports for 1 year after a drug is granted pediatric exclusivity has been useful to FDA in prioritizing safety issues for children. For example, an analysis of a drug 1 year after pediatric exclusivity was granted showed that there were deaths among children as a result of overuse or misuse of the drug. This led the agency to amend the labeling regarding the appropriate population for the drug.
NIH officials said they have found the process of developing the list of drugs important for study in children to be extremely helpful. NIH officials told us that since the inception of BPCA, they have learned a great deal about existing gaps in the drug development process for children, including a lack of data about which drugs are used by children and how frequently. To gather additional information, NIH has contracted for literature reviews to decrease the possibility that unnecessary pediatric drug studies are conducted. These officials also stated that BPCA and the development of the priority list have helped to solidify an alliance between NIH and FDA, which has led to discussions and resolutions of scientific and ethical issues relating to pediatric drug studies. The Institute of Medicine convened a forum on pediatric research in June 2006 where forum participants made suggestions for how BPCA could be improved. In addition, we discussed suggestions for improving BPCA with interest group representatives. Forum participants suggested that the timing of the determination of pediatric exclusivity should parallel the scientific review of a drug application and that both should be within 180 days of FDA receiving the results from the pediatric drug studies. FDA’s ability to assess the overall quality of the pediatric drug studies in the 90 days currently allotted for the review was questioned. Some forum participants also stated that a longer review period could result in different determinations in some cases. For example, FDA’s scientific review of data related to the study of one drug showed that the children participating in the pediatric drug studies had not received the treatments as the drug sponsors had suggested in their description of the study results. 
While the agency had granted the drug sponsor pediatric exclusivity based on its 90-day review to determine pediatric exclusivity, it might not have done so based on what was learned during the longer, 180-day scientific review. In addition, it was suggested that drug sponsors be required to submit their study results for pediatric exclusivity determination at least 1 year prior to patent expiration. This would allow the generic drug industry time to better plan its release of drugs. We were told that sometimes generic drugs have had to be destroyed because pediatric exclusivity determinations were made after the generic version of the drug had been manufactured and the drug’s expiration date would not allow the product to be sold. Representatives from interest groups would like the written requests to be public information and would also like FDA to publicly announce when it receives study results that have been submitted in response to a written request. This would allow the generic drug industry to better schedule the introduction of generic drugs into the market. Other suggestions for how the study of off-patent drugs could be more effectively encouraged were offered at the forum. A forum participant suggested that methods similar to those being adopted by the European Union be implemented. According to forum participants, under new legislation in Europe, companies that study off-patent drugs will be offered a variety of incentives, such as 10 years of data protection (meaning that the data generated to support the marketing of the drug cannot be used to support another drug, in an effort to delay competition), the right to use the existing brand name (to enable the drug sponsor to capitalize on existing brand recognition), and the ability to add a symbol to the drug labeling indicating the drug has been studied in children. 
Another suggestion was that current fees paid by drug sponsors for review of their drug applications could be used to fund the study of off-patent drugs (as well as on-patent drugs that drug sponsors decline to study). These fees—$767,400 for a new drug application and $383,700 for a supplemental drug application in fiscal year 2006—are collected from drug sponsors when study results are submitted to FDA for review and consideration of pediatric exclusivity. In addition to the contact named above, Thomas Conahan, Assistant Director; Shaunessye Curry; Cathleen Hamann; Martha Kelly; Julian Klazkin; Carolyn Feis Korman; Gloria Taylor; and Suzanne Worth made key contributions to this report.
About two-thirds of drugs that are prescribed for children have not been studied and labeled for pediatric use, which places children at risk of being exposed to ineffective treatment or incorrect dosing. The Best Pharmaceuticals for Children Act (BPCA), enacted in 2002, encourages the manufacturers, or sponsors, of drugs that still have marketing exclusivity--that is, are on-patent--to conduct pediatric drug studies, as requested by the Food and Drug Administration (FDA). If they do so, FDA may extend for 6 months the period during which no equivalent generic drugs can be marketed. This is referred to as pediatric exclusivity. BPCA required that GAO assess the effect of BPCA on pediatric drug studies and labeling. As discussed with the committees of jurisdiction, GAO (1) assessed the extent to which pediatric drug studies were being conducted under BPCA for on-patent drugs, including when drug sponsors declined to conduct the studies; (2) evaluated the impact of BPCA on labeling drugs for pediatric use and the process by which the labeling was changed; and (3) illustrated the range of diseases treated by the drugs studied under BPCA. GAO examined data about the drugs for which FDA requested studies under BPCA from 2002 through 2005. GAO also interviewed officials from relevant federal agencies, pharmaceutical industry representatives, and health advocates. Drug sponsors have initiated pediatric drug studies for most of the on-patent drugs for which FDA has requested studies, but no drugs were being studied when drug sponsors declined these requests. Sponsors agreed to 173 of the 214 written requests for pediatric studies of on-patent drugs. In cases where drug sponsors decline to study the drugs, BPCA provides for FDA to refer the study of these drugs to the Foundation for the National Institutes of Health (FNIH), a nonprofit corporation. FNIH had not funded studies for any of the nine drugs that FDA referred as of December 2005. 
Most drugs (about 87 percent) granted pediatric exclusivity under BPCA had labeling changes--often because the pediatric drug studies found that children may have been exposed to ineffective drugs, ineffective dosing, overdosing, or previously unknown side effects. However, the process for approving labeling changes was often lengthy. It took from 238 to 1,055 days for information to be reviewed and labeling changes to be approved for 18 drugs (about 40 percent), and 7 of those took more than 1 year. Drugs were studied under BPCA for the treatment of a wide range of diseases, including those that are common, serious, or life threatening to children. These drugs represented more than 17 broad categories of disease, such as cancer. The Department of Health and Human Services stated that the report provides a significant amount of data and analysis and generally explains the BPCA process, but expressed concern that it did not sufficiently acknowledge the success of BPCA or clearly describe some elements of FDA's process. GAO incorporated comments as appropriate.
DHS has begun to take action to work with other agencies to identify facilities that are required to report their chemical holdings to DHS but may not have done so. The first step of the CFATS process is focused on identifying facilities that might be required to participate in the program. The CFATS rule was published in April 2007, and appendix A to the rule, published in November 2007, listed 322 chemicals of interest and the screening threshold quantities for each. As a result of the CFATS rule, about 40,000 chemical facilities reported their chemical holdings and their quantities to DHS’s ISCD. In August 2013, we testified about the ammonium nitrate explosion at the chemical facility in West, Texas, in the context of our past CFATS work. Among other things, the hearing focused on whether the West, Texas, facility should have reported its holdings to ISCD given the amount of ammonium nitrate at the facility. During this hearing, the Director of the CFATS program remarked that throughout the existence of CFATS, DHS had undertaken and continued to support outreach and industry engagement to ensure that facilities comply with their reporting requirements. However, the Director stated that the CFATS regulated community is large and always changing and DHS relies on facilities to meet their reporting obligations under CFATS. At the same hearing, a representative of the American Chemistry Council testified that the West, Texas, facility could be considered an “outlier” chemical facility, that is, a facility that stores or distributes chemical-related products, but is not part of the established chemical industry.
Preliminary findings of the CSB investigation of the West, Texas, incident showed that although certain federal agencies that regulate chemical facilities may have interacted with the facility, the ammonium nitrate at the West, Texas, facility was not covered by these programs. For example, according to the findings, the Environmental Protection Agency’s (EPA) Risk Management Program, which deals with the accidental release of hazardous substances, covers the accidental release of ammonia, but not ammonium nitrate. As a result, the facility’s consequence analysis considered only the possibility of an ammonia leak and not an explosion of ammonium nitrate. On August 1, 2013, the same day as the hearing, the President issued Executive Order 13650–Improving Chemical Facility Safety and Security, which was intended to improve chemical facility safety and security in coordination with owners and operators. The executive order established a Chemical Facility Safety and Security Working Group, composed of representatives from DHS; EPA; and the Departments of Justice, Agriculture, Labor, and Transportation, and directed the working group to identify ways to improve coordination with state and local partners; enhance federal agency coordination and information sharing; modernize policies, regulations and standards; and work with stakeholders to identify best practices. In February 2014, DHS officials told us that the working group has taken actions in the areas described in the executive order. For example, according to DHS officials, the working group has held listening sessions and webinars to increase stakeholder input, explored ways to share CFATS data with state and local partners to increase coordination, and launched a pilot program in New York and New Jersey aimed at increasing federal coordination and information sharing. 
DHS officials also said that the working group is exploring ways to better share information so that federal and state agencies can identify non-compliant chemical facilities and identify options to improve chemical facility risk management. This would include considering options to improve the safe and secure storage, handling, and sale of ammonium nitrate. DHS has also begun to take actions to enhance its ability to assess risk and prioritize facilities covered by the program. For the second step of the CFATS process, facilities that possess any of the 322 chemicals of interest at levels at or above the screening threshold quantity must first submit data to ISCD via an online tool called a Top- Screen.an assessment as to whether facilities are covered under the program. If DHS determines that they are covered by CFATS, facilities are to then submit data via another online tool, called a security vulnerability assessment, so that ISCD can further assess their risk and prioritize the ISCD uses the data submitted in facilities’ Top Screens to make covered facilities. ISCD uses a risk assessment approach to develop risk scores to assign chemical facilities to one of four final tiers. Facilities placed in one of these tiers (tier 1, 2, 3, or 4) are considered to be high risk, with tier 1 facilities considered to be the highest risk. The risk score is intended to be derived from estimates of consequence (the adverse effects of a successful attack), threat (the likelihood of an attack), and vulnerability (the likelihood of a successful attack, given an attempt). ISCD’s risk assessment approach is composed of three models, each based on a particular security issue: (1) release, (2) theft or diversion, and (3) sabotage, depending on the type of risk associated with the 322 chemicals. Once ISCD estimates a risk score based on these models, it assigns the facility to a final tier. 
Our prior work showed that the CFATS program was using an incomplete risk assessment approach to assign chemical facilities to a final tier. Specifically, in April 2013, we reported that the approach ISCD used to assess risk and make decisions to place facilities in final tiers did not consider all of the elements of consequence, threat, and vulnerability associated with a terrorist attack involving certain chemicals. For example, the risk assessment approach was based primarily on consequences arising from human casualties, but did not consider economic criticality consequences, as called for by the 2009 National Infrastructure Protection Plan (NIPP) and the CFATS regulation. In April 2013, we reported that ISCD officials told us that, at the inception of the CFATS program, they did not have the capability to collect or process all of the economic data needed to calculate the associated risks and they were not positioned to gather all of the data needed. They said that they collected basic economic data as part of the initial screening process; however, they would need to modify the current tool to collect more sufficient data. We also found that the risk assessment approach did not consider threat for approximately 90 percent of tiered facilities. Moreover, for the facilities that were tiered using threat considerations, ISCD was using 5-year-old data. We also found that ISCD’s risk assessment approach was not consistent with the NIPP because it did not consider vulnerability when developing risk scores. When assessing facility risk, ISCD’s risk assessment approach treated every facility as equally vulnerable to a terrorist attack regardless of location and on-site security. As a result, in April 2013 we recommended that ISCD enhance its risk assessment approach to incorporate all elements of risk and conduct a peer review after doing so. 
ISCD agreed with our recommendations, and in February 2014, ISCD officials told us that they were taking steps to address them and recommendations of a recently released Homeland Security Studies and Analysis Institute (HSSAI) report that examined the CFATS risk assessment model. As with the findings in our report, HSSAI found, among other things, that the CFATS risk assessment model inconsistently considers risks across different scenarios and that the model does not adequately treat facility vulnerability. Overall, HSSAI recommended that ISCD revise the current risk-tiering model and create a standing advisory committee—with membership drawn from government, expert communities, and stakeholder groups—to advise DHS on significant changes to the methodology. In February 2014, senior ISCD officials told us that they have developed an implementation plan that outlines how they plan to modify the risk assessment approach to better include all elements of risk while incorporating our findings and recommendations and those of HSSAI. Moreover, these officials stated that they have completed significant work with Sandia National Laboratory with the goal of including economic consequences into their risk tiering approach. They said that the final results of this effort to include economic consequences will be available in the summer of 2014. With regard to threat and vulnerability, ISCD officials said that they have been working with multiple DHS components and agencies, including the Transportation Security Administration and the Coast Guard, to see how they consider threat and vulnerability in their risk assessment models. ISCD officials said that they anticipate that the changes to the risk tiering approach should be completed within the next 12 to 18 months. We plan to verify this information as part of our recommendation follow-up process. 
DHS has begun to take action to lessen the time it takes to review site security plans, which could help DHS reduce the backlog of plans awaiting review. For the third step of the CFATS process, ISCD is to review facility security plans and their procedures for securing these facilities. Under the CFATS rule, once a facility is assigned a final tier, it is to submit a site security plan or participate in an alternative security program in lieu of a site security plan. The security plan is to describe security measures to be taken and how such measures are to address applicable risk-based performance standards. After ISCD receives the site security plan, the plan is reviewed using teams of ISCD employees (i.e., physical, cyber, chemical, and policy specialists), contractors, and ISCD inspectors. If ISCD finds that the requirements are satisfied, ISCD issues a letter of authorization to the facility. After ISCD issues a letter of authorization to the facility, ISCD is to then inspect the facility to determine if the security measures implemented at the site comply with the facility’s authorized plan. If ISCD determines that the site security plan is in compliance with the CFATS regulation, ISCD approves the site security plan and issues a letter of approval to the facility, and the facility is to implement the approved site security plan. In April 2013, we reported that it could take another 7 to 9 years before ISCD would be able to complete reviews of the approximately 3,120 plans in its queue at that time. As a result, we estimated that the CFATS regulatory regime, including compliance inspections (discussed in the next section), would likely not be implemented for 8 to 10 years. We also noted in April 2013 that ISCD had revised its process for reviewing facilities’ site security plans.
ISCD officials stated that they viewed ISCD’s revised process to be an improvement because, among other things, teams of experts reviewed parts of the plans simultaneously rather than sequentially, as had occurred in the past. In April 2013, ISCD officials said that they were exploring ways to expedite the process, such as streamlining inspection requirements. In February 2014, ISCD officials told us that they are taking a number of actions intended to lessen the time it takes to complete reviews of remaining plans, including the following:

- providing updated internal guidance to inspectors and ISCD staff;
- updating the internal case management system;
- providing updated external guidance to facilities to help them better prepare their site security plans;
- conducting inspections using one or two inspectors at a time over the course of 1 day, rather than multiple inspectors over the course of several days;
- conducting pre-inspection calls to the facility to help resolve technical issues beforehand;
- creating and leveraging the use of corporate inspection documents (i.e., documents for companies that have over seven regulated facilities in the CFATS program);
- supporting the use of alternative security programs to help clear the backlog of security plans because, according to DHS officials, alternative security plans are easier for some facilities to prepare and use; and
- taking steps to streamline and revise some of the on-line data collection tools, such as the site security plan, to make the process faster.

It is too soon to tell whether DHS’s actions will significantly reduce the amount of time needed to resolve the backlog of site security plans because these actions have not yet been fully implemented. In April 2013, we also reported that DHS had not finalized the personnel surety aspect of the CFATS program.
The CFATS rule includes a risk-based performance standard for personnel surety, which is intended to provide assurance that facility employees and other individuals with access to the facility are properly vetted and cleared for access to the facility. In implementing this provision, we reported that DHS intended to (1) require facilities to perform background checks on and ensure appropriate credentials for facility personnel and, as appropriate, visitors with unescorted access to restricted areas or critical assets, and (2) check for terrorist ties by comparing certain employee information with the federal government’s consolidated terrorist watch list. However, as of February 2014, DHS had not finalized its information collection request that defines how the personnel surety aspect of the performance standards will be implemented. Thus, DHS is currently approving facility security plans conditionally, whereby plans are not to be finally approved until the personnel surety aspect of the program is finalized. According to ISCD officials, once the personnel surety performance standard is finalized, they plan to reexamine each conditionally approved plan. They would then make final approval as long as ISCD had assurance that the facility was in compliance with the personnel surety performance standard. As an interim step, in February 2014, DHS published a notice about its Information Collection Request (ICR) for personnel surety to gather information and comments prior to submitting the ICR to the Office of Management and Budget (OMB) for review and clearance. According to ISCD officials, it is unclear when the personnel surety aspect of the CFATS program will be finalized. During a March 2013 hearing on the CFATS program, industry officials discussed using DHS’s Transportation Worker Identification Credential (TWIC) as one approach for implementing the personnel surety program. 
The TWIC, which is also discussed in DHS’s ICR, is a biometric credential issued by DHS for maritime workers who require unescorted access to secure areas of facilities and vessels regulated under the Maritime Transportation Security Act of 2002 (MTSA). In discussing TWIC in the context of CFATS during the August 2013 hearing, officials representing some segments of the chemical industry stated that they believe that using TWIC would lessen the reporting burden and prevent facilities from having to submit additional personnel information to DHS while maintaining the integrity of the program. In May 2011 and May 2013, we reported that the TWIC program has some shortfalls—including challenges in development, testing, and implementation—that may limit its usefulness with regard to the CFATS program. We recommended that DHS take steps to resolve these issues, including completing a security assessment that includes addressing internal control weaknesses, among other things. The explanatory statement accompanying the Consolidated Appropriations Act, 2014, directed DHS to complete the recommended security assessment. However, as of February 2014, DHS had not yet done the assessment, and although DHS had taken some steps to conduct an internal control review, it had not corrected all the control deficiencies identified in our report. DHS reports that it has begun to perform compliance inspections at regulated facilities. The fourth step in the CFATS process is compliance inspections, by which ISCD determines if facilities are employing the measures described in their site security plans. During the August 1, 2013, hearing on the West, Texas, explosion, the Director of the CFATS program stated that ISCD planned to begin conducting compliance inspections in September 2013 for facilities with approved site security plans. The Director further noted that the inspections would generally be conducted approximately 1 year after plan approval. 
According to ISCD, as of February 24, 2014, ISCD had conducted 12 compliance inspections. ISCD officials stated that they have considered using third-party nongovernmental inspectors to conduct inspections but thus far do not have any plans to do so. In closing, we anticipate providing oversight over the issues outlined above and look forward to helping this and other committees of Congress continue to oversee the CFATS program and DHS’s progress in implementing this program. Currently, the explanatory statement accompanying the Consolidated and Further Continuing Appropriations Act, 2013, directs GAO to continue its ongoing effort to examine the extent to which DHS has made progress and encountered challenges in developing CFATS. Additionally, once the CFATS program begins performing and completing a sufficient number of compliance inspections, we are mandated to review those inspections, along with various aspects of them. Chairman Carper, Ranking Member Coburn, and members of the Committee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For information about this statement, please contact Stephen L. Caldwell at (202) 512-9610 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this and our prior work included John F. Mortin, Assistant Director; Jose Cardenas, Analyst-in-Charge; Chuck Bausell; Michele Fejfar; Jeff Jensen; Tracey King; Marvin McGill; Jessica Orr; Hugh Paquette; and Ellen Wolfe. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Facilities that produce, store, or use hazardous chemicals could be of interest to terrorists intent on using toxic chemicals to inflict mass casualties in the United States. As required by statute, DHS issued regulations establishing standards for the security of these facilities. DHS established the CFATS program to assess risk at facilities covered by the regulations and inspect them to ensure compliance. This statement provides observations on DHS efforts related to the CFATS program. It is based on the results of previous GAO reports issued in July 2012 and April 2013 and a testimony issued in February 2014. In conducting the earlier work, GAO reviewed DHS reports and plans on the program and interviewed DHS officials. In managing its Chemical Facility Anti-Terrorism Standards (CFATS) program, the Department of Homeland Security (DHS) has a number of efforts underway to identify facilities that are covered by the program, assess risk and prioritize facilities, review and approve facility security plans, and inspect facilities to ensure compliance with security regulations. Identifying facilities. DHS has begun to work with other agencies to identify facilities that should have reported their chemical holdings to CFATS, but may not have done so. DHS initially identified about 40,000 facilities by publishing a CFATS rule requiring that facilities with certain types and quantities of chemicals report certain information to DHS. However, a chemical explosion in West, Texas, last year demonstrated the risk posed by chemicals covered by CFATS. Subsequent to this incident, the President issued Executive Order 13650, which was intended to improve chemical facility safety and security in coordination with owners and operators. Under the executive order, a federal working group is sharing information to identify additional facilities that are to be regulated under CFATS, among other things. Assessing risk and prioritizing facilities. 
DHS has begun to enhance its ability to assess risks and prioritize facilities. DHS assessed the risks of facilities that reported their chemical holdings in order to determine which ones would be required to participate in the program and subsequently develop site security plans. GAO's April 2013 report found weaknesses in multiple aspects of the risk assessment and prioritization approach and made recommendations to review and improve this process. In February 2014, DHS officials told us they had begun to take action to revise the process for assessing risk and prioritizing facilities. Reviewing security plans. DHS has also begun to take action to speed up its reviews of facility security plans. Per the CFATS regulation, DHS is to review security plans and visit the facilities to make sure their security measures meet the risk-based performance standards. GAO's April 2013 report found a 7- to 9-year backlog for these reviews and visits, and DHS has begun to take action to expedite these activities. As a separate matter, one of the performance standards—personnel surety, under which facilities are to perform background checks and ensure appropriate credentials for personnel and visitors as appropriate—is still being developed. DHS conditionally approved the facility plans it had reviewed as of February 2014, pending final development of the personnel surety performance standard. According to DHS officials, it is unclear when the standard will be finalized. Inspecting to verify compliance. In February 2014, DHS reported it had begun to perform inspections at facilities to ensure compliance with their site security plans. According to DHS, these inspections are to occur about 1 year after facility site security plan approval. Given the backlog in plan approvals, this process started only recently, and GAO has not yet reviewed this aspect of the program. 
In a July 2012 report, GAO recommended that DHS measure its performance implementing actions to improve its management of CFATS. In an April 2013 report, GAO recommended that DHS enhance its risk assessment approach to incorporate all elements of risk, conduct a peer review, and gather feedback on its outreach to facilities. DHS concurred and has taken actions or has actions underway to address them.
FCA is an independent federal regulatory agency responsible for supervising, regulating, and examining institutions operating under the Farm Credit Act of 1971, as amended. The act also authorizes FCA to assess the institutions it regulates to provide funds for its annual operating costs and to maintain a reserve amount for contingencies, as applicable. FCA regulations allow several methods for FCA to assess and apportion its administrative expenses among the various types of institutions it oversees. These institutions include primary market institutions (banks and associations) and related entities that collectively comprise the System, in addition to Farmer Mac (a secondary market entity). As of September 30, 2000, the System (excluding Farmer Mac) included 172 institutions holding assets of about $91 billion; Farmer Mac’s assets were about $3 billion. The System is designed to provide a dependable and affordable source of credit and related services to the agriculture industry. FCA regulates and examines Farmer Mac, the secondary agricultural credit market entity, through the Office of Secondary Market Oversight (OSMO), which is an independent office with a staff of two within FCA. Figure 1 depicts the regulatory relationships among FCA, OSMO, the System, and Farmer Mac. Farmer Mac was created to provide a secondary market to improve the availability of agricultural and rural housing mortgage credit to lenders and borrowers. Both the System and Farmer Mac are government-sponsored enterprises (GSE). Although FCA does not receive any funds from the U.S. Treasury for its operating budget, its annual budget is subject to the annual congressional appropriations process, which limits the dollar amount that the agency can spend on administrative expenses. For 2000, that amount was $35.8 million. 
FCA raises operating funds from several sources, but most of these funds are from assessments on the institutions that it regulates. Assessments accounted for about 94 percent (including 2 percent for Farmer Mac) of the funding for the FCA’s 2000 operating budget, with the balance coming from reimbursable services, investment income, and miscellaneous income (see fig. 2). FCA officials define administrative expenses as generally comprising personnel compensation, official travel and transportation, relocation expenses, and other operating expenses necessary for the proper administration of the act. FCA also has reimbursable expenses, which include the expenses it incurs in providing services and products to another entity. The five other federal financial regulators discussed in this report have oversight responsibility for various types of institutions. Table 1 shows these regulators, along with the types of institutions that they regulate. For purposes of comparison, we group the regulators into two categories according to the types of market primarily or exclusively served by the institutions they regulate, primary and secondary market entities. Of the five regulators, four—FHFB, NCUA, OCC, and OTS—regulate primary market institutions. OFHEO regulates secondary market entities. FHFB regulates the 12 Federal Home Loan Banks (FHLBanks) that lend on a secured basis to their member retail financial institutions. Under certain approved programs and subject to regulatory requirements, the FHLBanks also are authorized to acquire mortgages from their members. By law, federal financial regulators are required to examine their regulated institutions on a periodic basis (e.g., annually). The primary purpose of these supervisory examinations is to assess the safety and soundness of the regulated institution’s practices and operations. 
The examination process rates six critical areas of operations—capital adequacy (C), asset quality (A), management (M), earnings (E), liquidity (L), and sensitivity to market risk (S), or CAMELS. The rating system uses a 5-point scale (with 1 as the best rating and 5 as the worst rating) to determine the CAMELS rating that describes the financial and management condition of the institution. Examiners issue a rating for each CAMELS element and an overall composite rating. The results of an examination, among other things, determine the extent of ongoing supervisory oversight. To varying degrees, the regulators also have responsibility for ensuring their institutions’ compliance with consumer protection laws. Moreover, two GSE regulators (FCA and FHFB) have responsibilities for ensuring compliance with their respective GSEs’ statutory missions. Mission and safety and soundness oversight for Fannie Mae and Freddie Mac are divided. The Department of Housing and Urban Development has general regulatory authority over Fannie Mae and Freddie Mac to ensure compliance with their missions, while OFHEO has the authority for safety and soundness regulation. To meet the first objective, we examined agency budget reports and financial documents and interviewed FCA and Farmer Mac officials. We compared FCA’s reported actual administrative expenses (total operating expenses less reimbursable costs) with congressionally imposed limits; reviewed relevant statutes, legislative history, FCA regulations, and FCA legal opinions; and developed a 5-year trend analysis. To address the second objective, we interviewed agency officials, reviewed relevant statutes and regulations, and analyzed data on operational funding obtained from FCA and the five other federal financial regulatory agencies. We selected these five agencies because they use funding mechanisms that are similar to FCA’s to support their operating budgets. 
We did not independently verify the accuracy of the data that the regulators provided or review any agency’s accounting records. We obtained comments on a draft of this report from FCA and the five other federal financial regulatory agencies. FCA’s comments are summarized at the end of this report. Except for OFHEO, all agencies provided technical comments, which we incorporated as appropriate. We conducted our work from January to July 2001 at FCA headquarters in McLean, VA, and at the headquarters of the other five regulators in Alexandria, VA, and Washington, D.C. We conducted our review in accordance with generally accepted government auditing standards. Over the last 5 years, FCA has reduced expenditures for administrative expenses, reflecting the agency’s success in controlling operating costs. Staff reductions—due, in part, to consolidation within the System—have accounted for most of the decline in administrative expenditures. While actual administrative expenditure amounts have varied from year to year, FCA has continued to operate below congressionally approved spending levels. Significant dollar decreases in personnel costs were largely responsible for the 5.8 percent decline in administrative spending over the period, which contrasts with the 8.59 percent growth rate in federal government expenditures. Despite increases in purchases of other contractual services and equipment, administrative costs remained below the 1996 level throughout the second half of the 1990s and into 2000 (see table 2). The decline was not spread evenly over the 5-year period (see fig. 3). Most of the decline occurred in 1996-98, and administrative spending has increased each year since then. For 2001, administrative expenditures are expected to rise by $852,000, or 2.6 percent, over their 2000 level, primarily because of rising costs for personnel, travel, and transportation. 
Our analysis of FCA data shows that personnel costs accounted for over 80 percent of the FCA administrative expenses during the 5-year study period. But these costs (staff salaries and benefits) also decreased the most in dollar and percentage terms during the period, falling by about $4.1 million (13 percent), and the share of personnel costs in administrative expenditures fell from 88.7 percent to 81.7 percent. Reductions in benefits were largely responsible for this decline; the amount spent on staff benefits dropped 36.3 percent, falling from $7.3 million in 1996 to $4.6 million in 2000. Decreases in the relocation allowances, severance pay, and buyouts necessitated by the consolidation of the System accounted for most of the decline. FCA officials told us that the number of employees fell almost 15 percent—from 331 in 1996 to 282 in 2000—in part, because of the industry consolidation. The number of institutions in the System dropped by 28 percent, declining from 239 in 1996 to 172 in 2000. For 2001, however, FCA projects personnel costs to increase by 5.3 percent to about $28.8 million. As a result, our analysis shows that these costs will continue to account for a substantial percentage of administrative costs. FCA officials attribute the increase to the rising cost of employee salaries and performance bonuses. Equipment purchases and other contractual services accounted for the largest increases in administrative expenditures in 1996 through 2000. Equipment purchases experienced the largest growth but fell behind contractual services in actual dollar increases. Equipment purchases rose about $1.1 million (from $395,000 in 1996 to $1.5 million in 2000), which was about a 268-percent increase over 1996. According to an FCA official, computer replacements and upgrades, which the agency undertakes every 3 years, accounted mostly for the increase. FCA officials expect equipment purchases to decline $202,000, or about 14 percent, in 2001. 
Other contractual services represented a growing percentage of FCA administrative costs, increasing from 2.8 percent in 1996 to 6.8 percent of the 2000 total. These expenses consisted mostly of consulting services for a new financial management system purchased from another government agency. They accounted for the largest dollar increase (about $1.3 million) and the second-largest percentage increase (about 130 percent) in administrative expenditures, climbing from $992,000 in 1996 to $2.3 million in 2000. For 2001, however, FCA expects this cost component to decline by $209,000, or 9.2 percent. Travel and transportation expenses declined (by about 10 percent) between 1996 and 2000. FCA officials told us the decrease was largely the result of a decline in the number of employee relocations. For 2001, FCA projects these costs to decrease by $231,000, or about 15 percent. All other expenses, a category that includes rent, communications, and utilities; printing and reproduction; supplies and materials; and insurance claims and indemnities, decreased by $79,000, or 8.3 percent, over the period, primarily because of decreases in supplies and materials. For 2001, FCA expects these costs to increase by 4.3 percent. Figure 4 shows FCA administrative expenses for 2000 by expense category. Each fiscal year, Congress sets a limit on the amount of money FCA can spend on administrative expenditures. However, Congress did not set a spending limit for 1996. For each year from 1997 to 2000, FCA was in compliance with its budget limits for administrative expenses (see table 3). FCA and the other federal financial regulators do not receive any federal money to fund their annual operating budgets, relying primarily on assessment revenue collected from the institutions they oversee. In general, the regulators assess institutions using either complex asset- based formulas or less complex formulas that are based on other factors, depending on the type of institution. 
The different funding methodologies are designed to ensure that each institution pays an equitable share of agency expenses. FCA uses two different methods of calculating assessments on the institutions it regulates—one for all primary market entities and the other for its secondary market entity, Farmer Mac. The methodology used for primary market entities, which is complex, is based on the institutions’ asset holdings and economies of scale as well as on the supervisory rating each institution received during FCA’s last periodic examination. The methodology used for Farmer Mac is less complex. FCA calculates the assessment on the basis of its own direct and indirect expenses, rather than on asset holdings. Direct expenses include the costs of examining and supervising Farmer Mac, while indirect expenses are the overhead costs “reasonably” related to FCA’s services. In general, the other federal financial regulators that regulate institutions similar to FCA’s use comparable methodologies to calculate assessments. The law requires that the assessments be apportioned “on a basis that is determined to be equitable by the Farm Credit Administration.” FCA’s current assessment regulations for banks, associations, and “designated other System entities” were developed in 1993 through the negotiated rulemaking process. Banks, associations, and the Farm Credit Leasing Services Corporation (Leasing Corporation) are assessed on the same basis (i.e., assets). According to an FCA official, the agency periodically reviews these rules but currently has no plans to modify them. FCA officials said that these rules are designed to equitably apportion the annual costs of supervising, examining, and regulating the institutions. 
For this reason, the methodology relies on asset “brackets” that are much like tax brackets and reflect economies of scale, since the costs of supervision rise as a regulated institution becomes larger; however, these costs do not increase as fast as asset growth. FCA “bills” the institutions annually, and the institutions pay their assessments on a quarterly basis. To calculate the assessments for banks, associations, and the Leasing Corporation, FCA first determines its annual operating budget, which could include a reserve for contingencies for the next fiscal year, then deducts the estimated assessments for Farmer Mac, other System entities, and any reimbursable expenses. What is left—the net operating budget—is the total amount that will be assessed. This amount is apportioned among the banks, associations, and the Leasing Corporation using a two-part formula. The net operating budget is divided into two components of 30 and 70 percent. (According to an FCA official, the 30/70 split was devised during the negotiated rulemaking process and represents the most equitable way to assess System institutions.) The first part of the assessment, covering 30 percent of the budget, is spread across institutions on the basis of each institution’s share of System risk-adjusted assets. For example, an institution whose assets equal 1 percent of System assets will have its assessment equal to 1 percent of this 30 percent of the FCA budget. The second part of an institution’s assessment is charged according to a schedule that imposes different assessment rates on assets over specified levels, with these marginal rates decreasing for higher levels of assets. For example, the assessment rate that an institution pays for its assets from over $100 million to $500 million is 60 percent of the assessment rate that it pays on its first $25 million in assets. Adding the 30-percent amount and the 70-percent amount together equals the general assessment amount. 
Table 4 shows the assessment rates for the eight asset “brackets.” The assessment rate percentages are prescribed by FCA regulation. The general assessment may be subject to these adjustments: a minimum assessment fee, a supervisory surcharge, or both. The minimum fee of $20,000 applies only to institutions whose assessments are calculated at less than $20,000; these assessments are scaled upward, and no further charges are assessed. For institutions with assessments of more than $20,000, FCA may add a supervisory surcharge that reflects the institution’s financial and management conditions. The surcharge is based on the institution’s last supervisory examination rating. These ratings range from a high of 1 to a low of 5; a rating of 3, 4, or 5 can result in a surcharge ranging from 20 to 40 percent of the general assessment amount. The top-rated institutions (those rated 1 or 2) pay nothing over the general assessment. The variables in the formula allow FCA some flexibility in adjusting assessments to reflect its oversight costs. The formula not only reflects economies of scale but, by linking assessments with the financial and managerial soundness of the institutions, also seeks to ensure that the institutions that cost the most to supervise are paying their share. This approach relieves other entities within the System of bearing the cost of this additional oversight. FCA may adjust its assessments to reflect changes in its actual annual expenses and, if applicable, give institutions a credit against their next assessment or require them to pay additional assessments. Any credits are prorated on the basis of assessments paid by an institution. These credit adjustments are usually done at the end of the fiscal year. As required by law, FCA assesses Farmer Mac separately and differently from its primary market institutions. 
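Viewed as arithmetic, the two-part calculation described above can be sketched in a few lines of Python. This is a minimal illustration, not FCA’s actual schedule: the bracket thresholds and marginal rates below are hypothetical placeholders for the values prescribed in FCA regulation (Table 4), and the risk adjustment of assets is abstracted away.

```python
# Illustrative sketch of FCA's two-part assessment formula for banks,
# associations, and the Leasing Corporation. Bracket thresholds and
# marginal rates are PLACEHOLDERS, not the regulatory values in Table 4.

# Hypothetical declining marginal rates per asset bracket: (upper bound, rate).
BRACKETS = [
    (25_000_000, 0.0010),    # first $25 million of assets
    (100_000_000, 0.0008),
    (500_000_000, 0.0006),   # e.g., 60% of the first-bracket rate
    (float("inf"), 0.0004),
]

MINIMUM_ASSESSMENT = 20_000  # fixed minimum prescribed by regulation

# Supervisory surcharge by last examination rating (1 = best, 5 = worst);
# ratings of 1 or 2 pay nothing over the general assessment.
SURCHARGE = {1: 0.0, 2: 0.0, 3: 0.20, 4: 0.30, 5: 0.40}


def assessment(net_operating_budget, inst_assets, system_assets, rating):
    """One institution's annual assessment under the 30/70 split."""
    # Part 1: 30% of the net budget, apportioned by the institution's
    # share of (risk-adjusted) System assets.
    part1 = 0.30 * net_operating_budget * (inst_assets / system_assets)

    # Part 2: declining marginal rates applied to assets in each bracket.
    part2, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if inst_assets <= lower:
            break
        part2 += (min(inst_assets, upper) - lower) * rate
        lower = upper

    general = part1 + part2
    # Small institutions are scaled up to the $20,000 minimum, no surcharge.
    if general < MINIMUM_ASSESSMENT:
        return MINIMUM_ASSESSMENT
    # A rating of 3, 4, or 5 adds a 20 to 40 percent supervisory surcharge.
    return general * (1 + SURCHARGE[rating])
```

For example, under these placeholder rates, a $1 billion institution holding 1 percent of System assets would owe $555,000 against a $10 million net operating budget, and an examination rating of 3 would raise that by 20 percent.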
The law specifies that FCA’s assessment of Farmer Mac is intended to cover the costs of any regulatory activities and specifically notes a requirement to pay the cost of supervising and examining Farmer Mac. We could not identify any legislative history that addressed these provisions. FCA officials told us that they believed the difference between the statutory provisions for assessing banks, associations, and the Leasing Corporation and Farmer Mac is due to the difference in their assets—that is, unlike those institutions, Farmer Mac does not make loans. FCA developed the current assessment methodology for Farmer Mac in 1993. Farmer Mac’s assessment covers the estimated costs of regulation, supervision, and examination, but Farmer Mac is not assessed a charge for FCA’s reserve. The assessment includes FCA’s estimated direct expenses for these activities, plus an allocated amount for indirect or overhead expenses. In general, FCA uses the same estimated direct expenses and indirect expense calculations for Farmer Mac as for the “other System entities,” such as the Federal Farm Credit Banks Funding Corporation (Funding Corporation). Estimated direct expenses take into account the costs incurred in the most recent examination of Farmer Mac and any expected changes in these costs for the next fiscal year. We asked FCA officials if and how the assessment formula they use for Farmer Mac enables them to compensate for risks in Farmer Mac’s business activities. They explained that the amount assessed for direct expenses increases if additional examination time is needed. FCA officials also noted that, as their data show, direct costs can rise due to other factors. For example, from 1999 to 2001, FCA officials noted that they invested considerable resources in developing a risk-based capital rule for Farmer Mac. During this time, FCA incurred unique costs that increased Farmer Mac’s assessment for those years. 
A proportional amount of FCA’s indirect expenses—that is, those expenses that are not attributable to the performance of examinations—is allocated to Farmer Mac. This amount is calculated as a relationship between the budget for a certain FCA office and FCA’s overall expense budget for the fiscal year covered by the assessment. (The proportion for 2000 was 28.9 percent.) Multiplying the percentage by the estimated direct expenses attributable to Farmer Mac equals the amount of indirect expenses. The addition of the estimated direct expenses and indirect expenses equals the estimated amount to be assessed Farmer Mac for the fiscal year. Indirect expenses would include, for example, the cost of providing personnel services and processing travel vouchers for OSMO. At the end of each fiscal year, FCA may adjust its assessment to reflect any changes in actual expenses. Other entities in the Farm Credit System, such as the Funding Corporation, are assessed separately using a methodology similar to the one used for Farmer Mac. The assets of this group of institutions differ from those of the previously discussed entities that FCA regulates. These institutions are assessed for the estimated direct expenses involved in examinations, a portion of indirect expenses, and any amount necessary to maintain a reserve. FCA estimates direct expenses for each entity on the basis of anticipated examination time and travel costs for the next fiscal year. Allocations for indirect expenses are calculated as a percentage of FCA’s total budgeted direct expenses (excluding those for Farmer Mac) for the fiscal year of the assessment. As with its assessments of other entities in the System, FCA may adjust its assessments to reflect any changes in actual expenses at the end of the fiscal year. FCA and regulators of similar types of institutions use assessment formulas of varying complexity to assess the institutions they oversee. 
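The Farmer Mac calculation described above reduces to a simple sum of estimated direct expenses and a proportional overhead allocation. A minimal sketch, assuming the 28.9 percent proportion reported for 2000 (the function name and dollar figure are illustrative):

```python
def farmer_mac_assessment(estimated_direct_expenses, indirect_proportion=0.289):
    """Estimated assessment = direct expenses + allocated indirect expenses.

    indirect_proportion is the ratio of the relevant FCA office's budget to
    FCA's overall expense budget (28.9 percent for fiscal year 2000).
    """
    indirect = estimated_direct_expenses * indirect_proportion
    return estimated_direct_expenses + indirect
```

So, for instance, $500,000 in estimated direct examination expenses would yield an estimated assessment of about $644,500 at the 2000 proportion, subject to a year-end adjustment for actual expenses.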
In general, they use relatively complex formulas for primary market institutions and less complex formulas for secondary market entities. FCA’s method for assessing banks, associations, and the Leasing Corporation, which are all primary market institutions, is similar to those of most other federal financial regulators (NCUA, OCC, and OTS) that oversee primary market institutions. Most of the regulators use complex formulas that take into account a variety of factors, including the regulator’s budget, the institution’s asset size and examination rating, and economies of scale (see fig. 5). Like FCA’s, these assessments generally include a fixed component that is based on an institution’s asset holdings, plus a variable component derived by multiplying asset amounts in excess of certain thresholds by a series of declining marginal rates. The assessment amount may then be adjusted on the basis of various factors—for example, the institution’s financial condition. Again like FCA’s methodology, these formulas attempt to allocate regulatory costs in a way that reflects the agency’s actual cost of supervision. Institutions with a low examination rating pay an additional fee because they are likely to require more supervision than the top-rated institutions. NCUA and FHFB are the only regulators of primary market institutions that do not add a supervisory surcharge on the basis of an examination rating. However, NCUA does use a complex formula to determine an institution’s assessment amount, whereas FHFB uses a less complex formula. FHFB calculates assessments for the 12 FHLBanks on the basis of each bank’s total paid-in capital stock, relative to the total paid-in capital stock of all FHLBanks. FCA is the only primary market regulator that requires its institutions to pay a fixed minimum assessment amount (i.e., $20,000). Of the five other regulators we looked at, two—NCUA and OTS—reduce the assessments for qualifying small institutions. 
According to the report of the Assessment Regulations Negotiated Rulemaking Committee that developed the rule, the minimum assessment is required both to pay a share of FCA regulatory costs and as a necessary cost of doing business as a federally chartered System institution. The assessment methods of the two federal regulators that oversee secondary market entities are less complex than the methods applied to primary market institutions. For example, OFHEO’s method of assessing Fannie Mae and Freddie Mac, which is prescribed by law, is based on the ratio of each entity’s assets to their total combined assets. OFHEO does not regulate any other entities; thus, this simple formula readily meets the need to equitably apportion the agency’s operating costs. FCA administrative expenditures were lower in 2000 compared with 1996, due in part to reductions in staff because of System consolidation. Although administrative expenses are projected to increase for 2001 because of rising personnel and travel costs, they are expected to remain within the congressional spending ceiling. FCA is unique among federal financial institution regulators because it regulates both primary and secondary market entities. The methods FCA uses to assess the institutions it oversees are analogous to those used by virtually all of the regulators of similar institutions and are based on the types of assets the entities hold. FCA’s complex formula for assessing primary market institutions is comparable to the methods used by most regulators of other primary market institutions. These regulators oversee numerous entities of various sizes and complexities, and their complex assessment methods enable them to consider these attributes in assessing for the cost of examinations. The few secondary market entities, which include Farmer Mac, are all assessed using less complex methodologies.
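The asset-ratio apportionment described above for OFHEO amounts to a simple pro-rata split of the agency’s operating costs. The sketch below assumes hypothetical budget and asset figures; only the two entity names come from the text.

```python
# Minimal sketch of pro-rata apportionment: each entity's share of the
# regulator's operating budget equals its share of the entities'
# combined assets.

def pro_rata_assessments(budget, assets_by_entity):
    total_assets = sum(assets_by_entity.values())
    return {name: budget * assets / total_assets
            for name, assets in assets_by_entity.items()}

# Hypothetical $30 million budget split across illustrative asset figures:
shares = pro_rata_assessments(
    30_000_000,
    {"Fannie Mae": 900_000_000_000, "Freddie Mac": 600_000_000_000})
```

With these placeholder figures, the entity holding 60 percent of combined assets bears 60 percent of the budget, which is why the formula readily apportions costs equitably when only two entities are regulated.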
We received written comments on a draft of this report from the Chairman and Chief Executive Officer of FCA that are reprinted in appendix I. He agreed with the information presented in the draft report regarding FCA’s administrative spending between 1996 and 2000. FCA also provided technical comments that we incorporated where appropriate. The other federal financial regulators, except for OFHEO, provided technical comments on a draft excerpt of this report that we shared with them. We incorporated their technical comments into this report where appropriate. We are sending copies of this report to the Chairman of the Senate Committee on Agriculture, Nutrition, and Forestry; the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing and Urban Affairs, the House Committee on Financial Services, and the House Committee on Agriculture; and Michael M. Reyna, Chairman and Chief Executive Officer of the Farm Credit Administration. The report will be available on GAO’s Internet home page at http://www.gao.gov. If you have any questions about this report, please contact me or M. Katie Harris at (202) 512-8678. Joe E. Hunter was a major contributor to this report.
The Farm Credit Administration (FCA) regulates the Farm Credit System. Administrative expenses, which accounted for about 97 percent of FCA's total operating expenses of $34.5 million in fiscal year 2000, are funded primarily by assessments on the institutions that make up the System, including the Federal Agricultural Mortgage Corporation (Farmer Mac). This report (1) analyzes trends in administrative expenses for fiscal years 1996 through 2000 and (2) compares ways that FCA and other federal financial regulators calculate the assessments they need to fund their operations. GAO found that although FCA's administrative expenditures varied each year between 1996 and 2000, they remained below 1996 levels and stayed within congressionally imposed annual spending limits for each year during 1997 through 2000. Between 1996 and 2000, the agency experienced a decline in administrative spending of around $2 million, or 5.8 percent. Personnel costs were the largest single expense, consistently accounting for more than 80 percent of administrative spending; thus, a 15 percent staff reduction also provided the greatest overall savings. Unlike many government agencies whose operations are funded by taxpayers' money, the federal financial regulators are self-funded agencies that rely primarily on assessments from the entities they regulate. In calculating these assessments, FCA and the other federal financial regulators use separate methodologies for primary and secondary market entities.
The Chemical and Biological Defense Program was established in 1994 and develops defense capabilities to protect the warfighter from current and emerging chemical and biological threats. Specifically, its mission is “to enable the warfighter to deter, prevent, protect against, mitigate, respond to, and recover from CBRN threats and effects as part of a layered, integrated defense.” The CBDP Enterprise conducts research and develops defenses against chemical threats, such as cyanide and mustard gases, and biological threats, such as anthrax and Ebola, and tests and evaluates capabilities and products to protect military forces from them. The CBDP Enterprise comprises 26 organizations across DOD that determine warfighter requirements, provide science and technology expertise, conduct research and development and test and evaluation on capabilities needed to protect the warfighter, and provide oversight. Figure 1 shows the CBDP Enterprise organizations included in our review and their roles. The ability of the CBDP Enterprise to successfully implement its mission in a resource-constrained environment, according to the 2012 CBDP Business Plan, relies on the integrated management of responsibilities performed by these organizations. The following CBDP Enterprise organizations have key roles and responsibilities: The Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs, among other things, serves as the advisor to the Secretary of Defense for activities that combat current and emerging chemical and biological threats. The Deputy Assistant Secretary of Defense for Chemical and Biological Defense is responsible for Chemical and Biological Defense Program oversight activities, acquisition policy guidance, and interagency coordination. The Secretary of the Army is the Executive Agent for the Chemical and Biological Defense Program.
Within the Army, the Assistant Secretary of the Army for Acquisition, Logistics and Technology and the Office of the U.S. Army Deputy Chief of Staff, G-8 serve as cochairs of the Army Executive Agent Secretariat and are responsible for, among other duties, coordinating and integrating research, development, test, and evaluation, and acquisition requirements of the military departments for DOD chemical and biological warfare defense programs and reviewing all funding requirements for the CBDP Enterprise. The Deputy Under Secretary of the Army for Test and Evaluation provides oversight, policy, governance and guidance to ensure timely, adequate, and credible test and evaluation for the Army and the CBDP Enterprise. The Director, Army Test and Evaluation Office, serves as the Test and Evaluation Executive for the CBDP Enterprise. The Program Analysis and Integration Office (PAIO) is the analytical arm of the CBDP Enterprise and is responsible for monitoring the expenditures of research, development, test, and evaluation activities. It provides analysis, review, and integration functions for the CBDP Enterprise. The Joint Program Executive Office for Chemical and Biological Defense oversees the total life-cycle acquisition management for assigned chemical and biological programs, among others. The Office of the Joint Chiefs of Staff, Joint Requirements Office for Chemical, Biological, Radiological, and Nuclear Defense (hereinafter referred to as the Joint Requirements Office) serves as a focal point to the Chairman of the Joint Chiefs of Staff for all chemical and biological issues, among others, associated with combating weapons of mass destruction, and supports the development of recommendations to the Secretary of Defense regarding combatant commanders’ chemical and biological requirements for operational capabilities, among others. 
The Joint Science and Technology Office for Chemical and Biological Defense (hereinafter referred to as Joint Science and Technology Office) oversees science and technology efforts in coordination with the military services’ research and development laboratories, to include efforts with other agencies, laboratories, and organizations. The CBDP Enterprise’s four primary research and development and test and evaluation facilities, as seen in figure 2, include the U.S. Army Edgewood Chemical Biological Center (hereinafter referred to as Edgewood), Aberdeen Proving Ground, Maryland; the U.S. Army Medical Research Institute of Infectious Diseases on the National Interagency Biodefense Campus, Ft. Detrick, Maryland; the U.S. Army Medical Research Institute of Chemical Defense, Aberdeen Proving Ground, Maryland; and the West Desert Test Center (hereinafter referred to as West Desert), Dugway Proving Ground, Utah. These facilities conduct research and development and test and evaluation of chemical and biological defense capabilities and are owned and operated by the U.S. Army and support the mission of the Chemical and Biological Defense Program. Additional information about DOD’s chemical and biological defense primary research and development and test and evaluation facilities can be found in appendix III. Figure 2 shows the location of the CBDP Enterprise’s primary research and development and test and evaluation facilities. The CBDP Enterprise’s plans—which are used as guidance to meet its mission—articulate infrastructure goals and identify the ways (i.e., the functions, roles and responsibilities, and business practices) to achieve them. These plans include the following: The 2012 Chemical Biological Defense Program Strategic Plan is intended to map the direction and articulate the outcomes that the CBDP Enterprise aims to achieve. 
The plan responds to evolving threats and the fiscal environment by setting a vision to align resources to meet four strategic goals: (1) equip the force to protect and respond to CBRN threats and effects; (2) prevent surprise by anticipating threats and developing new capabilities for the warfighter to counter emerging threats; (3) maintain the infrastructure—both physical and intellectual—the department requires to meet and adapt to current and future needs for personnel, equipment, and facilities within funding constraints; and (4) lead CBDP Enterprise components in integrating and aligning activities. The 2012 Chemical Biological Defense Program Business Plan describes the ways in which the CBDP Enterprise intends to meet the four strategic goals identified in the 2012 CBDP Strategic Plan. The 2012 CBDP Business Plan assigns responsibility and provides the structures and processes to implement the 2012 CBDP Strategic Plan. PAIO’s 2014 CBDP Infrastructure Implementation Plan, endorsed by the Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs, articulates the process by which the CBDP Enterprise intends to review its physical infrastructure to support the identification of required infrastructure and determine whether any potentially duplicative or redundant infrastructure capabilities exist within the CBDP Enterprise. PAIO’s 2008 Non-Medical Physical Infrastructure Capabilities Assessment was an assessment conducted by PAIO of the capabilities of the CBDP Enterprise’s existing infrastructure to support critical mission areas. The assessment was requested by the Special Assistant, Chemical and Biological Defense and Chemical Demilitarization Programs. The study made four recommendations to the CBDP Enterprise: 1. Identify its required research and development and test and evaluation infrastructure capabilities to support its mission. 2.
Create a joint strategic vision for military construction investment across all elements of the CBDP Enterprise. 3. Establish a military construction program aligned with the joint strategy and processes integrating goals, objectives, and validation across the CBDP Enterprise. 4. Address the use of project validation, cost/benefit analysis, and investment business case issues for infrastructure decisions. ODASD (CBD) officials told us that, since the recommendations were made, they have expanded the recommendations to include all infrastructure investments, not just infrastructure funded by military construction appropriations. The CBDP Enterprise annual planning process is designed to support decision making by program leadership regarding investments in research and development. This process is intended to incorporate chemical and biological threat information and chemical and biological defense warfighter requirements into the formulation of CBDP Enterprise strategic programming guidance for research and development investment decisions. The CBDP Enterprise’s 2014 risk assessments are based on DOD’s 2001 Quadrennial Defense Review Report risk framework. The four dimensions of the risk framework are as follows: Force management—the ability to recruit, retain, train, and equip sufficient numbers of high-quality personnel and sustain the readiness of the force while accomplishing its many operational tasks. Operational—the ability to achieve military objectives in a near-term conflict or other contingency. Future challenges—the ability to invest in new capabilities and develop new operational concepts needed to dissuade or defeat mid- to long-term military challenges. Institutional—the ability to develop management practices and controls that use resources efficiently and promote the effective operation of the defense establishment.
Together, the results from the four dimensions of the risk framework are expected to allow DOD to consider tradeoffs among fundamental resource constraints. The CBDP Enterprise has taken some actions, such as the development of infrastructure goals, to address its infrastructure needs; however, after nearly 7 years, the CBDP Enterprise has not fully achieved its goal to address the 2008 PAIO recommendation that it identify required infrastructure capabilities to ensure alignment of its infrastructure with its mission of addressing threats. At that time, the CBDP Enterprise made no plan and did not make infrastructure a priority to address the recommendation. CBDP Enterprise officials acknowledge the importance, validity, and necessity of addressing the 2008 recommendation and recognized these points in their 2012 CBDP Business Plan. However, the CBDP Enterprise has made limited progress in achieving this infrastructure goal because CBDP Enterprise officials told us that they were focused on higher priorities and had no CBDP Enterprise-wide impetus to address the infrastructure recommendations. OASD (NCB) previously identified the need for an entity that has the responsibility and level of authority needed to ensure achievement of this infrastructure goal, but DOD has not designated such an entity with CBDP Enterprise-wide responsibility and authority to lead this effort, nor has it established timelines and milestones for doing so. The CBDP Enterprise has taken actions, but has not fully achieved its goal to address the 2008 PAIO recommendation to identify required infrastructure (intellectual and physical) capabilities to address current and emerging chemical and biological threats. According to ODASD (CBD) officials, the CBDP Enterprise recognizes the importance, validity, and necessity of addressing this and other PAIO recommendations from the 2008 study, which would transform the way the CBDP Enterprise manages its infrastructure.
However, at that time, CBDP Enterprise officials did not make a plan or set infrastructure as a priority to address the recommendation. In addition, CBDP Enterprise officials told us that they have not addressed this recommendation because they were focused on higher priorities. Since the 2008 PAIO recommendation, OASD (NCB) issued the 2012 CBDP Strategic Plan, which, for the first time, established maintaining infrastructure as a strategic goal. Additionally, OASD (NCB) issued the 2012 CBDP Business Plan, which proposed an assessment of CBDP’s required knowledge and skill capabilities of its personnel and physical infrastructure capabilities across the CBDP Enterprise to meet this strategic goal. In addition to these actions, the Deputy Assistant Secretary of Defense for Chemical and Biological Defense requested that the National Research Council of the National Academy of Sciences conduct a study to identify the science and technology capabilities needed for the CBDP Enterprise to meet its mission. However, it was not until 2014 and 2015 that the Joint Science and Technology Office and PAIO, respectively, initiated studies to address the 2012 CBDP Business Plan proposal and 2008 recommendation to identify its required infrastructure capabilities. Figure 3 depicts the CBDP Enterprise’s limited progress, as shown by the gap from 2008 to 2014, to complete its goal to identify its required infrastructure capabilities. In December 2014, the Joint Science and Technology Office initiated a study of the CBDP Enterprise’s existing intellectual infrastructure to (1) determine the knowledge and skill capabilities of its personnel and (2) identify the required capabilities of its personnel to implement its mission. According to Joint Science and Technology Office officials, they are using the 18 warfighter core capabilities—the framework for meeting the program’s mission—to assist in identifying the CBDP Enterprise’s required knowledge and skill capabilities for personnel. 
(See app. IV for additional information about the 18 core capabilities.) These officials told us that they are working with CBDP’s Senior Scientist Board and the leadership of the three primary CBDP research and development facilities to identify the required knowledge and skill capabilities for the CBDP Enterprise’s personnel. According to the official overseeing this study, the proposed methodology will help them identify expertise and leadership that currently exists within the primary research and development facilities. The methodology also will help them identify the required knowledge and skill capabilities of its personnel to (1) ensure that research and development products are making progress towards project goals and (2) address the 18 warfighter core capabilities. In addition, Joint Science and Technology Office officials stated that their study to identify required knowledge and skill capabilities of the CBDP Enterprise’s personnel will also help them determine any existing capabilities gaps. As of January 2015, the Joint Science and Technology Office’s infrastructure study produced a presentation on definitions for infrastructure-related issues and a proposed methodology to determine how required knowledge and skill capabilities of the CBDP Enterprise’s personnel will be maintained. However, the office does not have an end date for this study or a timeline and milestones to assess its progress. In addition, PAIO developed a physical infrastructure implementation plan in July 2014 to study the CBDP Enterprise’s existing physical infrastructure capabilities. The study includes a timeline and milestones for various actions, including that, from July 2015 through February 2016, PAIO establish an inventory of all the physical infrastructure capabilities within the CBDP Enterprise and conduct an analysis of these capabilities to determine their specific functions and the CBDP Enterprise’s level of reliance on these capabilities.
According to PAIO officials, this analysis will help the CBDP Enterprise achieve its goal by determining its required physical infrastructure. ODASD (CBD) officials acknowledged the need to identify required knowledge and skills capabilities of the CBDP Enterprise’s personnel and physical infrastructure capabilities to ensure alignment of the Army-owned infrastructure to address current and emerging chemical and biological threats. PAIO officials stated that the information gained from their study and from the Joint Science and Technology Office study will need to be combined to gain a comprehensive understanding of the status of CBDP’s infrastructure. Specifically, they stated that the studies will provide additional information to CBDP Enterprise leadership on the existing infrastructure capabilities to help determine required infrastructure and identify any potential gaps to address threats. Progress in fully achieving the CBDP Enterprise goal to identify required infrastructure capabilities, and to transform the way infrastructure is managed, has been limited because OASD (NCB) has not identified and designated an entity that has the responsibility and authority needed to lead the effort to ensure the achievement of this and other CBDP Enterprise goals (e.g., the other three 2008 PAIO recommendations, as identified in the Background section of this report, and the goal established in the 2012 CBDP Business Plan—an assessment of the CBDP Enterprise’s required infrastructure capabilities), and because no timelines or milestones have been established for their completion.
Key practices for federal agencies to address challenges in achieving successful transformation of their organizations, particularly in the implementation phase, call for (1) establishing a dedicated authority responsible for the transformation’s day-to-day management to ensure it receives the full-time attention needed to be sustained and successful and (2) establishing timelines and milestones for achieving goals. The CBDP Enterprise does not have a dedicated entity with the responsibility and authority needed to lead the effort to ensure the achievement of its infrastructure goals. The Strategic Portfolio Review assesses, among other things, how efficiently the CBDP Enterprise is maintaining its infrastructure. ODASD (CBD) officials confirmed that, initially, the Army’s PAIO was designated as the Infrastructure Manager for the CBDP Enterprise. However, according to PAIO and ODASD (CBD) officials, PAIO does not have the authority to manage the CBDP Enterprise’s infrastructure. A decision subsequently was made by ODASD (CBD) that PAIO would no longer serve in this capacity, but would continue in its role to provide infrastructure analysis and integration for the CBDP Enterprise. In July 2014, ODASD (CBD) officials told us the U.S. Army and individual installation leadership were designated as Infrastructure Managers over intellectual and physical infrastructure capabilities for the CBDP Enterprise’s primary research and development and test and evaluation facilities under their purview. However, individual installation leadership does not have the responsibility and authority to maintain CBDP Enterprise-wide visibility and oversight to ensure that CBDP Enterprise-wide infrastructure goals are achieved.
A dedicated authority, such as an entity responsible for the day-to-day management of the transformation, could lead the effort to help ensure the CBDP Enterprise receives the full-time attention needed to achieve and sustain its goals and to help ensure progress is made as intended. By identifying and designating an entity with the responsibility and authority to lead the effort to set priorities, make timely decisions, and move quickly to implement leadership decisions for ensuring the timely achievement of the CBDP Enterprise’s goals, such as identifying required infrastructure capabilities, the CBDP Enterprise would be better positioned to support resource decisions regarding the infrastructure capabilities needed to address threats. Additionally, no timelines and milestones were established to complete the recommendations identified in the 2008 PAIO study, the goals established in the 2012 CBDP Business Plan, or the 2014 Joint Science and Technology Office study to identify required knowledge and skill capabilities in its personnel, because no entity has the responsibility and authority needed to lead the effort to implement this and other CBDP Enterprise goals. Moreover, CBDP Enterprise officials told us that they were focused on higher priorities during this time, such as funding for medical countermeasures capabilities. As a result, the recommendation made nearly 7 years ago and subsequent goals to address the recommendation have not been implemented and there is no timeline for their completion. According to key practices for transforming organizations, it is essential to set and track timelines to build momentum and to demonstrate progress from the beginning.
Establishing timelines and milestones for achieving these goals (e.g., the 2008 PAIO recommendations and the goal established in the 2012 CBDP Business Plan), would better position the CBDP Enterprise to track its progress towards meeting its infrastructure goals, pinpoint performance shortfalls and gaps, and suggest midcourse corrections to ensure progress is being made to address current and emerging threats and meet its mission. Further, identifying and designating an entity and establishing timelines and milestones would better position the CBDP Enterprise to address any existing challenges in transforming the way the CBDP Enterprise manages its infrastructure and completing its goal to identify the infrastructure capabilities needed to meet its mission. The CBDP Enterprise has taken some actions to identify, address, and manage potential fragmentation, overlap, and duplication. Further, during the course of our review, in January 2015, PAIO began a study of CBDP Enterprise infrastructure to identify potential duplication. However, PAIO does not plan to identify, request, or consider information from existing infrastructure studies from other federal agencies. By identifying, requesting, and considering information from existing infrastructure studies from other federal agencies working in this area, PAIO will be better positioned to meet DOD’s goal to avoid duplication by having more information about existing infrastructure across the federal government for use by the CBDP Enterprise to support its work. Based on our analysis of information from each of the four primary research and development and test and evaluation facilities and ODASD (CBD) on infrastructure capabilities, the CBDP Enterprise’s primary research and development and test and evaluation facilities have taken some actions to identify, address, and manage fragmentation, overlap, and duplication. 
For example, the CBDP Enterprise has a research and development project-selection process in place, managed by the Joint Science and Technology Office, to help reduce the potential for fragmentation and overlap of CBDP Enterprise infrastructure and duplication of efforts within the research and development component. The Joint Science and Technology Office reviews and selects the projects that support the CBDP Enterprise mission at the CBDP Enterprise’s primary research and development facilities. By having one entity (the Joint Science and Technology Office) make decisions regarding the selection of research and development projects to meet its mission, the CBDP Enterprise is able to help reduce the potential for fragmentation and overlap of its infrastructure and duplication of efforts within the research and development component of the CBDP Enterprise. In addition, the U.S. Army Medical Research and Materiel Command is piloting a Competency Management Initiative, among other things, to identify any potential duplication and gaps across the knowledge and skills of command personnel. The initiative, which includes the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) and the U.S. Army Medical Research Institute of Chemical Defense (USAMRICD), examines intellectual capabilities and competencies needed to meet the mission based on chemical and biological threats. U.S. Army Medical Research and Materiel Command officials expect results from this initiative in 2015. Furthermore, the potential for duplication is reduced because the missions of the CBDP Enterprise’s four primary research and development and test and evaluation facilities are different. For example, USAMRICD focuses on medical chemical defense, USAMRIID focuses on medical biological defense, Edgewood focuses on nonmedical materiel solutions to chemical and biological threats, and West Desert conducts developmental and operational testing and evaluation. 
The difference in missions reduces the potential for fragmentation, overlap, and duplication within the CBDP Enterprise. In addition, in responding to our questionnaire, officials at CBDP’s four primary facilities told us they consider potential infrastructure fragmentation, overlap, and duplication when determining whether additional infrastructure capabilities are needed to support their work. For example, officials found the potential for duplication during the planning phase for a new facility, which would house animals for future research for USAMRIID on the National Interagency Biodefense Campus at Fort Detrick, Maryland. A set of studies on medical countermeasure test and evaluation facility requirements, conducted for the U.S. Army Assistant Chief of Staff for Facilities, Planning and Programming Division, determined, among other things, that there was sufficient capacity for holding animals in existing facilities that conduct research with animals. These studies resulted in the cancellation of USAMRIID’s plans to construct a new medical countermeasure test and evaluation facility, including a holding facility for animals (vivarium), with an overall estimated cost savings of about $600 million, according to USAMRIID officials. During the course of our review, PAIO began a study in January 2015 of the CBDP Enterprise’s infrastructure, among other things, to inventory CBDP Enterprise infrastructure to support identification of (1) required infrastructure capabilities and (2) any potential duplication and unnecessary redundancy across the CBDP Enterprise’s primary research and development and test and evaluation facilities’ physical infrastructure. This study by PAIO will be the first CBDP Enterprise-wide review of infrastructure since its 2008 review of nonmedical physical infrastructure investments.
PAIO developed an infrastructure implementation plan in July 2014 to guide its study, among other things, to determine whether there are any potentially duplicative or unnecessarily redundant infrastructure capabilities. PAIO plans to inventory CBDP Enterprise infrastructure from July 2015 to October 2015. In addition, PAIO plans to analyze the infrastructure information for potential duplication from October 2015 to February 2016. Its infrastructure implementation plan states that there can be value in some redundancy of infrastructure across the facilities and that the definition of duplication and unnecessary redundancy, which will be established during the study, will take this into account. For example, West Desert at Dugway Proving Ground and Aberdeen Test Center each has aircraft decontamination pads to support their testing and evaluation mission. If an aircraft became contaminated with a chemical or biological agent during a contingency or national emergency, either facility would have the infrastructure capability to decontaminate a civilian or military aircraft. According to West Desert officials, having the infrastructure at both facilities allows aircraft coming from the Pacific or Europe to be handled and decontaminated without the additional risk of continental travel and refueling. However, during the course of our review, we found potentially duplicative or redundant swatch testing infrastructure capabilities that may not add value to CBDP’s test and evaluation infrastructure capabilities. Specifically, West Desert and Edgewood both have the infrastructure to conduct testing of swatch material for chemical agents. In addition, the Quality Evaluation Facility at Pine Bluff Arsenal, Arkansas, a non-CBDP Enterprise DOD facility, also has swatch testing infrastructure capabilities.
For example, officials from the Joint Program Executive Office for Chemical and Biological Defense, one of the swatch testing customers for all three facilities, told us that its current workload would not completely fill the capacity of either of the CBDP facilities, which could indicate potential duplication if other DOD or private sector customers did not require services to ensure each facility is at full capacity. According to Edgewood and West Desert officials, having swatch testing infrastructure capabilities in both locations enables efficient transition of technology and continuity of data from early research and development at Edgewood to advanced development and operational testing by West Desert. Officials from PAIO stated that their study will review similar infrastructure examples, but within the CBDP Enterprise only, to determine what infrastructure, if any, is duplicative or redundant and what infrastructure, if any, is necessary redundancy. As part of the study methodology, PAIO plans to obtain input from the Joint Science and Technology Office, the Joint Program Executive Office for Chemical and Biological Defense, and the Deputy Under Secretary of the Army for Test and Evaluation and provide the results of its infrastructure inventory and any potential duplication found to the primary research and development and test and evaluation facilities. Once the results are known later in 2015, facility leadership is then expected to provide a rationale for sustaining any potentially duplicative or redundant infrastructure capabilities. Finally, in October 2015, the study’s methodology provides that PAIO will analyze any additional information from facility leadership to determine which infrastructure capabilities are potentially duplicative or redundant. 
According to PAIO and ODASD (CBD) officials, the study will provide information to CBDP Enterprise leadership—the Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs and the Executive Agent—to support their decisions on any potential infrastructure efficiencies and to support oversight of investment. PAIO plans to identify potential duplication within the CBDP Enterprise; however, PAIO does not plan to identify, request, or consider information from existing studies about the infrastructure capabilities of other federal agencies with research and development or test and evaluation infrastructure to study chemical and biological threats. Additional information about other federal agencies’ infrastructure capabilities may enhance PAIO’s review of CBDP Enterprise infrastructure and potential duplication by providing more information on what infrastructure other federal agencies in this field have to support their work. For example, the Department of Health and Human Services’ Centers for Disease Control and Prevention; the National Institutes of Health’s National Institute of Allergy and Infectious Diseases, Integrated Research Facility; and the Department of Homeland Security’s National Biodefense Analysis and Countermeasures Center all have infrastructure and study chemical or biological threats. Information about these agencies’ existing infrastructure inventory, such as their capability to conduct specialized research on biological agents with a known potential for aerosol transmission or that may cause serious and potentially lethal infections, and whether that infrastructure is available for use to help avoid duplication within the CBDP Enterprise, would help bolster PAIO’s study. In addition, our prior work on fragmentation, overlap, and duplication has found that multiple agencies are involved in federal efforts to combat chemical or biological threats.
We also found that it may be appropriate for multiple agencies or programs to be involved in the same area of work due to the nature or magnitude of the federal effort; however, multiple programs and capabilities may also create inefficiencies, such as the examples found in our prior reports. For example, in 1999, prior to the anthrax attacks in the United States, we found ineffective coordination among DOD and other federal agencies with chemical and biological programs that could result in potential gaps or overlap in research and development programs. Further, we found in September 2009 that there was no federal entity responsible for oversight of the expansion of high-containment laboratories—those designed for handling dangerous pathogens and emerging infectious diseases—across the federal government. We also found in June 2010 that the mission responsibilities and resources needed to develop a biosurveillance capability—the ability to provide early detection and situational awareness of potentially catastrophic biological events—were dispersed across a number of federal agencies, creating potential inefficiencies and overlap and duplication of effort. Finally, in May 2014, we found that the Department of Health and Human Services coordinates and leads federal efforts to determine CBRN medical countermeasure priorities and the development and acquisition of CBRN medical countermeasures for the civilian sector, primarily through the Public Health Emergency Medical Countermeasures Enterprise—an interagency body that includes other federal agencies with related responsibilities. We made a number of recommendations to address these issues and, as of January 2015, about one-third have been partially or fully implemented. (See app. V for additional information about the findings, recommendations, and agency actions taken and see the Related GAO Products section at the end of this report for other reports on high-containment laboratories and biodefense.)
PAIO officials told us that they identified and requested some information from other federal agencies to support the development of PAIO’s infrastructure implementation plan. However, according to PAIO and ODASD (CBD) officials, PAIO does not have the authority and resources to require other federal agencies to provide information about their infrastructure capabilities. DOD Directives 5134.08 and 3200.11 outline policy goals, among other things, for avoiding duplication, such as using existing DOD and other federal agency facilities and conducting certain oversight activities aimed at avoiding unnecessary duplication within the CBDP Enterprise. According to CBDP Enterprise officials, these types of deliberate data sharing arrangements can be enhanced by interagency agreements that are directed and supported at more senior levels within each department. Identifying, requesting, and considering information from existing infrastructure studies from other federal agencies about their chemical and biological infrastructure capabilities would not necessarily require new authority. PAIO would be better positioned to support the CBDP Enterprise’s effort to meet DOD’s goal to avoid duplication by determining what infrastructure is used by other federal agencies and whether that infrastructure could be available for use by the CBDP Enterprise to support its work in this area. Until PAIO determines what infrastructure capabilities exist outside of the CBDP Enterprise, there is potential for unnecessary duplication and inefficient and ineffective use of government resources. The CBDP Enterprise used data on chemical and biological threats from the intelligence community and plans to use threat data and the results from risk assessments first conducted in 2014 by the Joint Requirements Office and ODASD (CBD) to support its future portfolio planning process for research and development.
However, the CBDP Enterprise has not updated its guidance and planning process to include specific responsibilities and timeframes for risk assessments. ODASD (CBD) tasked the Joint Requirements Office to conduct an operational risk assessment of warfighter chemical and biological defense requirements to support the CBDP Enterprise’s future years’ portfolio planning process, according to ODASD (CBD) officials. The assessment was based on threat information from the Defense Intelligence Agency’s Chemical, Biological, Radiological, and Nuclear Warfare Capstone Threat Assessment, a survey, and DOD guidance to determine the level of risk DOD is willing to accept in protecting its forces against chemical and biological threats under various operational conditions. CBDP Enterprise officials stated that they plan to use results from the piloted risk assessment during Phase I of their annual portfolio planning cycle, as stated in the 2012 CBDP Business Plan. Phase I includes a review of threats and risk analyses to support the development of strategic investment guidance and focus areas by the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs. For example, the guidance may include specific chemical or biological threats or defense capabilities that the program leadership wants the CBDP Enterprise to address, which then guides the types of scientific and technology proposals the research and development facilities will submit to support CBDP Enterprise goals. This investment program guidance is then to be used by the CBDP Enterprise organizations to focus the development of capabilities to counter threats. When they conducted the pilot risk assessments, the Joint Requirements Office and ODASD (CBD) used a modified version of DOD’s 2001 Quadrennial Defense Review Report risk framework—force management, future challenges, operational risk, and institutional risks—and guidance from the 2012 CBDP Strategic Plan. 
For its assessment of current and future operational risk, the Joint Requirements Office defined operational risk as the ability of the current force to execute strategy successfully within acceptable human, materiel, financial, and strategic costs. To conduct the operational risk assessment, the Joint Requirements Office developed an operationally driven methodology that consisted of six interrelated elements. The Joint Requirements Office used information from five of the elements—a joint assessment, survey, analysis, intelligence, and subject-matter expertise—to identify the topics of the tabletop exercise. Information from the sixth element—other exercises and operational evaluations, specific threats, potential gaps, potential risks, or the construct of potential threats on the battlefield—was used to develop scenarios for the tabletop exercise. According to Joint Requirements Office officials, the purpose of the tabletop exercise was to gain an understanding of the chemical and biological operational defense capabilities against the most demanding and dangerous threats. The tabletop exercise was conducted through a series of action-reaction-counteraction sequences for each scenario. Officials facilitated discussions on military defense and key observations on defensive capabilities among CBDP Enterprise members, operational planners, and other subject-matter experts during the tabletop exercise using the framework of the CBDP Enterprise’s 18 warfighter core capabilities categorized into four areas—Sense, Shape, Shield, and Sustain. (See app. IV for additional information about the core capabilities.) The Joint Requirements Office provided ODASD (CBD) with information about lessons learned from the tabletop exercise and other analyses, and identified other operational scenarios to support future operational risk assessments.
According to ODASD (CBD) officials, the operational risk assessment provided recommendations and new information on the use of defense capabilities in an operational setting to CBDP Enterprise leadership to support future planning about the strategic direction of the CBDP Enterprise in addressing chemical and biological threats. Also, in 2014, ODASD (CBD) conducted its own assessment of force management and institutional risk to the CBDP Enterprise. A separate risk assessment of future challenges—the fourth risk area in the 2001 Quadrennial Defense Review Report’s framework—was not conducted. According to ODASD (CBD) officials, future challenges were incorporated into the operational and institutional risk assessments by including planned future capabilities against future threats as well as the development of those future capabilities, respectively. To assess force management risk, the office assessed “equipping the force.” Specifically, officials assessed 23 systems used by the military forces that were employed in the Joint Requirements Office’s operational risk assessment. The focus of the force management risk assessment was to identify current or planned capabilities that did not meet the force planning construct levels. According to the results of the assessment, there were no unacceptably high risks identified in equipping the force that needed to be addressed in fiscal year 2016–2020 program guidance. The assessment indicated that the programs associated with the 23 systems appear not to pose an unacceptable risk. To assess the second area of risk—institutional risk—officials collected data on the CBDP Enterprise’s infrastructure and processes. The intent of the pilot infrastructure risk assessment was to identify unacceptably high-risk areas or concerns that would need additional guidance and be addressed during the fiscal year 2016–2020 planning cycle.
ODASD (CBD) officials did not find any critical shortfalls in research and development or test and evaluation infrastructure or identify unacceptable risk. However, the assessment found some challenges in the process of moving capabilities from development to production. In addition, the results confirmed the difficulty of identifying shortfall risks in the CBDP Enterprise infrastructure because the primary research and development facilities are funded by proposal rather than by facility, thus requiring future risk assessments to look beyond the infrastructure that exists to determine whether unacceptable risk exists. ODASD (CBD) officials stated that they expect the results of the risk assessments to support the CBDP Enterprise’s future investment for research and development of chemical and biological defense capabilities. The CBDP Enterprise’s guidance and planning process does not include who will conduct and participate in risk assessments and when those assessments will be conducted. Federal standards for internal control state that, over time, management should continually assess and evaluate its internal control to assure activities being used are effective and updated when necessary. In addition, decision makers should identify risks associated with achieving program objectives, analyze them to determine their potential effect, and decide how to manage the risk and identify what actions should be taken. The standards also call for written procedures, to better ensure leadership directives are implemented. However, which organizations within the CBDP Enterprise are responsible for conducting and participating in risk assessments and when the assessments will be conducted to support the portfolio planning process for research and development investment is not outlined in the CBDP Enterprise’s guidance on roles and responsibilities or included in its planning process. 
Specifically, according to DOD Directive 5160.05E, the Joint Requirements Office is “responsible for collaborating with appropriate Joint Staff elements” on, among other things, chemical and biological risk assessment. However, the guidance does not explicitly identify which organizations within the CBDP Enterprise are responsible for conducting and participating in risk assessments. The 2012 CBDP Business Plan identifies the Joint Requirements Office as the primary organization responsible for planning chemical and biological risk assessments for the CBDP Enterprise. Further, the plan includes steps in its planning process to review threats and risk analyses, but does not specify when risk assessments will be conducted. Without written procedures on who will conduct or participate in risk assessments and on the use of DOD’s risk framework, there is no guarantee that risk assessments will be conducted or when they will be conducted. ODASD (CBD) and Joint Requirements Office officials stated that they plan to conduct additional risk assessments in the future, as reported to Congress, because of increasing chemical and biological threats and the challenges of an austere fiscal environment. However, the use of risk assessments by the CBDP Enterprise has not been fully institutionalized because the CBDP Enterprise has not updated its guidance on roles and responsibilities and its planning process; this was the first year that risk assessments were conducted. According to ODASD (CBD) officials, updating the roles and responsibilities guidance and related planning process would be beneficial, but they have not done so because the CBDP Enterprise is evaluating the results and lessons learned from the pilot. As of March 2015, ODASD (CBD) and Joint Requirements Office officials had not formally committed to updating such guidance or established a time frame for doing so to fully institutionalize the use of risk assessments.
Without updated guidance, the CBDP Enterprise will continue to rely on the Deputy Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs to request risk assessments, rather than having the assessments occur at established times during the investment planning process. Written guidance, as called for by federal standards for internal control, would better ensure that leadership directives are implemented as intended. Written guidance that identifies which CBDP Enterprise entities are responsible for conducting and participating in risk assessments and when such assessments are to be conducted would help ensure that risk assessments are conducted as intended. In this way, new information from the results of the tabletop exercise from the risk assessment about how defense capabilities, such as 1 of the 18 warfighter core capabilities, are used in an operational setting would better position the CBDP Enterprise to prioritize future research and development investments. Going forward, addressing internal control standards by updating its guidance and the planning process to fully institutionalize the use of risk assessments would support planning, help ensure that the CBDP Enterprise leadership directives are implemented, and end dependence upon any particular agency official to request risk assessments to support future investment planning. The CBDP Enterprise has taken a number of actions in recent years to address chemical and biological defense research and development and test and evaluation infrastructure, but initially did not develop a plan to address the 2008 PAIO recommendation or make infrastructure a priority. While the CBDP Enterprise should continue to address its priorities, it remains important that it also ensures that its infrastructure is aligned to meet its mission given ever-changing threats. 
Additional actions would help the CBDP Enterprise to more effectively and efficiently identify, align, and manage DOD’s chemical and biological defense infrastructure. By identifying and designating an entity with the responsibility and authority to lead the effort for ensuring the timely achievement of the CBDP Enterprise’s infrastructure goals to identify required infrastructure capabilities and by establishing timelines and milestones to implement the 2008 PAIO recommendations and the goal established in the 2012 CBDP Business Plan, the CBDP Enterprise would be better positioned to align its infrastructure to meet its mission to address threats. Thus, the CBDP Enterprise would be able to determine whether its infrastructure is properly aligned to meet its mission to address current and emerging chemical and biological threats. Implementing the 2008 PAIO recommendation that the CBDP Enterprise identify its required infrastructure capabilities is an important first step in identifying potential infrastructure duplication that may exist across the CBDP Enterprise. By identifying, requesting, and considering information from existing infrastructure studies of other federal agencies about their chemical and biological infrastructure capabilities, PAIO may be better positioned to enhance its study by providing additional information, for example, about infrastructure capability and the availability of facilities, to help the CBDP Enterprise avoid potential infrastructure duplication and gain potential efficiencies by using those agencies’ existing infrastructure. Finally, the CBDP Enterprise can capitalize on its progress made in 2014, when the Joint Requirements Office and ODASD (CBD) conducted risk assessments, by updating the roles and responsibilities guidance in DOD Directive 5160.05E and the CBDP Enterprise’s planning process to identify which organizations are responsible for conducting and participating in risk assessments and when they would occur. 
By updating guidance and the planning process, the CBDP Enterprise can fully institutionalize the use of risk assessments and not depend on an individual official to request risk assessments. Fully institutionalizing the use of risk assessments would support CBDP Enterprise planning and may provide new information about chemical and biological defense capabilities to further prioritize the CBDP Enterprise’s future research and development investments. We are making five recommendations to improve the identification, alignment, and management of DOD’s chemical and biological defense infrastructure. To help ensure that the CBDP Enterprise’s infrastructure is properly aligned to address current and emerging chemical and biological threats, we recommend that the Secretary of Defense direct the appropriate DOD officials to take the following two actions: identify and designate an entity within the CBDP Enterprise with the responsibility and authority to lead the effort to ensure achievement of the infrastructure goals (e.g., the four 2008 PAIO recommendations, including the recommendation that the CBDP Enterprise identify its required infrastructure capabilities, and the goal established in the 2012 CBDP Business Plan), and establish timelines and milestones for achieving identified chemical and biological infrastructure goals, including implementation of the 2008 PAIO recommendation that the CBDP Enterprise identify its required infrastructure capabilities. To enhance PAIO’s ongoing analysis of potential infrastructure duplication in the CBDP Enterprise and gain potential efficiencies, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to identify, request, and consider any information from existing infrastructure studies from other federal agencies with chemical and biological research and development and test and evaluation infrastructure. 
To fully institutionalize the use of risk assessments to support future investment decisions, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to take the following two actions: update the roles and responsibilities guidance in DOD Directive 5160.05E to identify which organizations are responsible for conducting and participating in CBDP Enterprise risk assessments, and update the CBDP Enterprise’s portfolio planning process, to include when risk assessments will be conducted. In commenting on a draft of this report, DOD concurred with all five of our recommendations and discussed actions it is taking and plans to take to implement them. DOD concurred with our first recommendation to identify and designate an entity within the CBDP Enterprise with the responsibility and authority to lead the effort to ensure achievement of the infrastructure goals (e.g., the four 2008 PAIO recommendations, including the recommendation that the CBDP Enterprise identify its required infrastructure capabilities, and the goal established in the 2012 CBDP Business Plan). The department concurs that an entity needs to lead the effort to ensure achievement of the infrastructure goals. Further, OASD (NCB) officials believe that these responsibilities and authorities are currently in place under existing laws and regulations. The 2012 Chemical and Biological Defense Program (CBDP) Strategic Plan identified one of the four strategic goals of CBDP as “to maintain infrastructure to meet and adapt current and future needs for personnel, equipment, and facilities within funding constraints.” To achieve this goal, OASD (NCB) and the U.S. 
Army, as the Executive Agent for Chemical and Biological Defense, share responsibility to ensure achievement of CBDP’s strategic infrastructure goals in close collaboration and coordination with the infrastructure managers (i.e., the individual installation commanders and directors of the facilities). According to OASD (NCB) officials, the department is in the process of revising DOD Directive 5160.05E and will ensure that the directive appropriately captures the roles and responsibilities related to CBDP infrastructure capabilities. We believe these actions, if fully implemented, would address our recommendation. DOD also concurred with our second recommendation to establish timelines and milestones for achieving identified chemical and biological infrastructure goals, including implementation of the 2008 PAIO recommendation that the CBDP Enterprise identify its required infrastructure capabilities. DOD officials agree that the most effective means of ensuring CBDP infrastructure goals are achieved is to set realistic timelines and milestones. According to OASD (NCB) officials, the CBDP Enterprise is undertaking a thoughtful effort to identify the infrastructure capabilities necessary to successfully complete its mission. The CBDP Enterprise solicited support from the National Research Council of the National Academies of Science to identify what science and technology core capabilities need to be in place within DOD laboratories to support CBRN research, development, test, and evaluation. The CBDP Enterprise also is in the midst of internal reviews of both current infrastructure capabilities and those that are needed to fulfill mission requirements. 
The combined results of these studies will enable the CBDP Enterprise to align its core capabilities with the necessary supporting infrastructure and to develop implementation and sustainment plans with timelines and milestones for required CBDP infrastructure capabilities; the studies will also consider GAO’s recommendation on this issue. We believe that if these studies are completed and implementation and sustainment plans are developed with established timelines and milestones, then these actions would address our recommendation. DOD concurred with our third recommendation to identify, request, and consider any information from existing infrastructure studies from other federal agencies with chemical and biological research and development and test and evaluation infrastructure. OASD (NCB) officials said the department agrees that information from existing federal chemical and biological infrastructure studies should be considered as inputs to the CBDP Enterprise infrastructure analysis efforts. They added that DOD maintains strong partnerships with the Departments of Homeland Security and Health and Human Services, which will facilitate DOD’s accomplishment of this recommendation. We agree. DOD concurred with our fourth recommendation to update the roles and responsibilities guidance in DOD Directive 5160.05E to identify which organizations are responsible for conducting and participating in CBDP Enterprise risk assessments. According to the OASD (NCB) officials, the department is in the process of revising DOD Directive 5160.05E, and will include the risk assessment process in the roles and responsibilities section. If fully implemented, this action would address our recommendation. Finally, DOD concurred with our fifth recommendation to update the CBDP Enterprise’s portfolio planning process, to include when risk assessments will be conducted.
OASD (NCB) officials noted that the risk assessment process was initially piloted in 2014 to determine its utility for informing CBDP Enterprise portfolio planning and guidance. They said that, moving forward, the CBDP Enterprise plans to conduct risk assessments annually to support portfolio planning and guidance. We believe this action, if fully implemented, would address our recommendation. The full text of DOD’s comments is reprinted in appendix VI. DOD also provided us with technical comments, which we incorporated, as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs; the Deputy Assistant Secretary of Defense for Chemical and Biological Defense; the Chairman of the Joint Chiefs of Staff; the Secretary of the Army; and the Director, Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9971 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. The Chemical and Biological Defense Program (CBDP) Enterprise comprises 26 organizations from across the Department of Defense (DOD) that determine warfighter requirements, provide science and technology expertise, conduct research and development and test and evaluation on capabilities needed to protect the warfighter, conduct program integration, and provide oversight. 
These key organizations include the following:

- Secretary of the Army
  - Deputy Under Secretary of the Army
  - Assistant Secretary of the Army for Acquisition, Logistics and Technology
    - Joint Program Executive Office for Chemical and Biological Defense
  - Deputy Under Secretary of the Army for Test and Evaluation
- U.S. Army Chief of Staff
  - Vice Chief of Staff of the Army
    - U.S. Army Test and Evaluation Command
      - West Desert Test Center
    - Office of the U.S. Army Deputy Chief of Staff, G-8
      - Program Analysis and Integration Office
    - U.S. Army Materiel Command
      - U.S. Army Research, Development, and Engineering Command
        - Edgewood Chemical Biological Center
    - U.S. Army Medical Command
      - U.S. Army Medical Research and Materiel Command
        - U.S. Army Medical Research Institute of Chemical Defense
        - U.S. Army Medical Research Institute of Infectious Diseases
- Chairman, Joint Chiefs of Staff
  - Director, Force Structure, Resources, and Assessment Directorate
    - Joint Requirements Office for Chemical, Biological, Radiological, and Nuclear Defense
- Office of the Under Secretary of Defense for Acquisition, Technology and Logistics
  - Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs
    - Office of the Deputy Assistant Secretary of Defense for Chemical and Biological Defense
- Defense Threat Reduction Agency
  - Joint Science and Technology Office for Chemical and Biological Defense

In addition, according to officials from the Office of the Deputy Assistant Secretary of Defense for Chemical and Biological Defense, the Department of the Navy, the Department of the Air Force, the National Guard Bureau, and combatant commands also have key roles in the Chemical and Biological Defense Program.
To determine the extent to which the Chemical and Biological Defense Program (CBDP) Enterprise has achieved its goal to identify required infrastructure capabilities to address current and emerging chemical and biological threats, we reviewed the Program Analysis and Integration Office’s (PAIO) 2008 study, Chemical and Biological Defense Program’s Non-Medical Physical Infrastructure Capabilities Assessment, which assessed the physical infrastructure capabilities of the CBDP Enterprise to support the CBDP mission. The study was requested by the Special Assistant, Chemical and Biological Defense and Chemical Demilitarization Programs, and it resulted in four recommendations that the CBDP Enterprise take to address its infrastructure. Specifically, we analyzed PAIO’s 2008 recommendation that the CBDP Enterprise identify its required infrastructure capabilities, part of its core capabilities, and compared it with the actions taken by the CBDP Enterprise since then through January 2015. We reviewed the recommendations with officials from the Office of the Deputy Assistant Secretary of Defense for Chemical and Biological Defense (ODASD (CBD)) and determined that the office recognized the 2008 recommendations to be valid and confirmed that the CBDP Enterprise recognizes the importance and necessity of addressing them. The CBDP Enterprise is using the recommendations as criteria in its efforts to address its research and development and test and evaluation intellectual and physical infrastructure. We conducted site visits to the CBDP Enterprise’s four primary research and development and test and evaluation facilities: Edgewood Chemical Biological Center (Edgewood) at Aberdeen Proving Ground, Maryland; U.S. Army Medical Research Institute of Chemical Defense (USAMRICD) at Aberdeen Proving Ground; U.S.
Army Medical Research Institute of Infectious Diseases (USAMRIID) on the National Interagency Biodefense Campus at Fort Detrick, Maryland; and West Desert Test Center (West Desert) at Dugway Proving Ground, Utah. We included the four primary facilities in our review because they conduct the majority of the research and development and test and evaluation activities for the program. By including all of the primary facilities, we gained information from across the CBDP Enterprise; however, this information is not generalizable to all facilities that may be used by the program to implement its mission. We developed and administered a questionnaire to these facilities, based on the 2012 Chemical and Biological Defense Program (CBDP) Strategic Plan and our objectives, to collect information about the knowledge and skill capabilities of their personnel and the physical infrastructure capabilities of each of the facilities, including any changes and challenges to the CBDP Enterprise's infrastructure and any actions they have taken to identify required infrastructure capabilities. (See app. III for additional information on these facilities.) We pretested our questionnaire with officials from ODASD (CBD) and the following CBDP Enterprise organizations: Edgewood, PAIO, the Joint Science and Technology Office, and the Office of the Deputy Under Secretary of the Army for Test and Evaluation. The pretest was intended to solicit feedback on whether our questionnaire (1) would provide answers to the engagement's objectives, (2) was written in a way that would be familiar to leadership officials of the primary research and development and test and evaluation facilities receiving it, and (3) should include additional questions to gain information about the CBDP Enterprise's infrastructure. We incorporated the feedback, as appropriate, into our final questionnaire sent to the primary research and development and test and evaluation facilities.
We interviewed leadership officials of these facilities about their written responses to our questionnaire. During our site visits to the four primary research and development and test and evaluation facilities, we toured the facilities and new buildings under construction to gain an understanding of how the infrastructure supports their missions. We also obtained information from officials from other CBDP Enterprise organizations that have responsibilities to the program, such as ODASD (CBD), the Joint Science and Technology Office, and PAIO, on their actions to identify required infrastructure capabilities and the CBDP Enterprise's progress. We reviewed their plans and presentations on identifying required infrastructure capabilities and interviewed them to discuss those plans. Finally, we compared the actions the CBDP Enterprise has taken to implement its goal of identifying required infrastructure capabilities needed to address current and emerging chemical and biological threats with key practices on the implementation of organizational transformation. These key practices include establishing a dedicated authority responsible for the day-to-day management of an organization's change initiatives, with the authority and resources needed to set priorities, make timely decisions, and move quickly to implement top leadership's decisions, and establishing a timeline and milestones to successfully implement organizational change. We used these criteria from our prior work to analyze whether the CBDP Enterprise followed key implementation steps to successfully transform the way it addresses its infrastructure goals.
To determine the extent to which the Department of Defense's (DOD) CBDP Enterprise has identified, addressed, and managed potential fragmentation, overlap, and duplication in its chemical and biological defense infrastructure, we reviewed CBDP guidance and policies on the program and related testing facility guidance; a 2011 study on infrastructure needs to support medical countermeasures; and a 2014 PAIO infrastructure implementation plan to support the CBDP Enterprise's efforts to avoid duplication. We reviewed the information to determine how the CBDP Enterprise identifies, addresses, and manages potential fragmentation, overlap, and duplication. We reviewed DOD Directive 5134.08 on the responsibilities of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs and DOD Directive 3200.11 on the responsibilities of the Major Range and Test Facility Bases. The directives outline policy goals, such as using existing DOD and other federal agencies' facilities, and certain oversight activities aimed at avoiding unnecessary duplication. We did not conduct an independent assessment of potential fragmentation, overlap, and duplication within the CBDP Enterprise. We developed and administered a questionnaire to CBDP's four primary research and development and test and evaluation facilities discussed above—Edgewood, USAMRICD, USAMRIID, and West Desert—based on our annual report to Congress on fragmentation, overlap, and duplication, to identify any additional policies on duplication and to understand their processes or actions to identify, address, and manage fragmentation, overlap, and duplication. Based on their responses to the questionnaire, we compared their processes and actions to DOD guidance to determine the extent to which the CBDP Enterprise reported that it avoided duplication and identified, addressed, and managed potential infrastructure duplication. In addition, we analyzed information about the facilities' missions and infrastructure.
We interviewed research and development facility officials about their infrastructure studies and the steps that they had taken to identify, address, or manage fragmentation, overlap, and duplication. We analyzed the studies, conducted for the U.S. Army Assistant Chief of Staff for Facilities, Planning and Programming Division, that identified potential infrastructure duplication and that were used to make infrastructure decisions about USAMRIID's new facility. In addition, we reviewed the Competency Management Initiative program developed by the U.S. Army Medical Research and Materiel Command to identify knowledge and skill capabilities and potential duplication, among other factors, within the command, to include USAMRIID and USAMRICD. (See GAO, 2014 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits, GAO-14-343SP (Washington, D.C.: Apr. 8, 2014).) We reviewed the plan and studies of the Joint Science and Technology Office and the Army's PAIO to identify required knowledge and skill capabilities and physical infrastructure capabilities, including identifying potential duplication. We analyzed information about the missions and infrastructure of each CBDP primary research and development and test and evaluation facility to understand their roles within the CBDP Enterprise. Based on the information from our questionnaire, we collected information from West Desert and Edgewood on their swatch testing infrastructure capabilities, infrastructure utilization, competitors, and customers. In addition, we interviewed research and development facility officials about the steps they have taken to identify, address, or manage fragmentation, overlap, and duplication.
We did not collect information about the research and development and test and evaluation projects conducted at the facilities; therefore, we were unable to determine whether similar infrastructure capabilities at the facilities were overlapping or duplicative or were used for different purposes. To determine the extent to which the CBDP Enterprise has used threat data, and plans to use threat data and the results of risk assessments, to support future investment planning in research and development for chemical and biological threats, we received a threat briefing from the Defense Intelligence Agency and the U.S. Army's National Ground Intelligence Center, similar to the annual threat data received by the CBDP Enterprise, to understand the type of threat data on chemical and biological threats. We analyzed DOD Directive 5160.5E to determine which offices are responsible for conducting and participating in the CBDP Enterprise's risk assessments. We reviewed the standards for internal control in the federal government on the use of risk assessments and written procedures and compared them with actions taken by the Joint Requirements Office and ODASD (CBD) to ensure the guidance and process are being followed. We interviewed officials from the Joint Requirements Office and ODASD (CBD) about who is responsible for conducting risk assessments and about how they used the risk assessment framework, which was introduced in the 2001 Quadrennial Defense Review Report, to conduct their risk assessments. We also reviewed the program's annual portfolio planning process described in its 2012 CBDP Business Plan to understand the role of risk assessment in the CBDP Enterprise's planning process. We compared internal control standards on written procedures to those used by the CBDP Enterprise to conduct its risk assessments.
We obtained information on the operational, force management, and institutional risk assessments conducted by the Joint Requirements Office and ODASD (CBD) to understand the process used to conduct the CBDP Enterprise's risk assessments. We interviewed officials from ODASD (CBD), which develops CBDP Enterprise-wide guidance to ensure strategic goals are achieved, to determine how threat data and the results of risk assessments are used—or will be used in the future—to support investment planning in research and development. We obtained relevant documentation and interviewed officials from the following organizations:

- Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs
  - Office of the Deputy Assistant Secretary of Defense for Chemical and Biological Defense (ODASD (CBD))
- Office of the Assistant Secretary of Defense for Health Affairs
- Joint Chiefs of Staff
  - Force Structure, Resources, and Assessment Directorate (J-8)
    - Joint Requirements Office for Chemical, Biological, Radiological, and Nuclear Defense
- Defense Threat Reduction Agency
  - Joint Science and Technology Office for Chemical and Biological Defense
- Defense Intelligence Agency
- U.S. Army
  - Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology
    - Joint Program Executive Office for Chemical and Biological Defense
  - Office of the U.S. Army Deputy Chief of Staff, G-8
    - Program Analysis and Integration Office (PAIO)
  - Office of the Deputy Under Secretary of the Army for Test and Evaluation
  - U.S. Army Medical Research and Materiel Command
    - U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID), National Interagency Biodefense Campus, Fort Detrick, Maryland
    - U.S. Army Medical Research Institute of Chemical Defense (USAMRICD), Aberdeen Proving Ground, Maryland
  - U.S. Army Materiel Command
    - U.S. Army Research, Development and Engineering Command
      - Edgewood Chemical Biological Center (Edgewood), Aberdeen Proving Ground, Maryland
  - U.S. Army Test and Evaluation Command
    - West Desert Test Center (West Desert), Dugway Proving Ground, Utah
  - U.S. Army Intelligence and Security Command
    - National Ground Intelligence Center
- National Interagency Confederation for Biological Research, National Interagency Biodefense Campus, Fort Detrick, Maryland

We conducted this performance audit from January 2014 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Chemical and Biological Defense Program (CBDP) Enterprise's research and development and test and evaluation infrastructure is a key component in defending the nation against chemical and biological threats. For example, prior to deploying the MV Cape Ray to the Mediterranean Sea to demilitarize chemical weapons from Syria, the U.S. Army Medical Research Institute of Chemical Defense (USAMRICD) provided training to its medical staff, inspected the ship, and evaluated the medical preparedness of the mission. In July 2014, according to Edgewood officials, the United States began using equipment and personnel expertise from the U.S. Army's Edgewood Chemical Biological Center (Edgewood) to neutralize chemical weapons materials from Syria. In another example, according to a U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) official, USAMRIID is supporting the development of multiple products against Ebola, including the experimental therapeutic drug ZMapp, which was provided to American health care workers infected with the Ebola virus during the outbreak in West Africa in 2014.
The CBDP Enterprise’s primary research and development and test and evaluation facilities have different missions, but serve the same military population and engage in similar activities to protect the warfighter from chemical and biological threats. While these facilities support the CBDP Enterprise in carrying out its mission, they are owned and operated by the U.S. Army. Edgewood’s mission is to be the nation’s provider of innovative solutions to countering weapons of mass destruction. Edgewood is located on Aberdeen Proving Ground, Maryland. Edgewood aligns with the CBDP Enterprise by “enabling the warfighter to deter, prevent, protect against, mitigate, respond to, and recover from chemical, biological, radiological, and nuclear threats and effects as part of a layered, integrated defense.” To do this, Edgewood’s core areas of work include chemistry and biological sciences; science and technology for emerging threats; chemical, biological, radiological, nuclear, and high-yield explosives analysis and testing; chemical and biological agent handling and surety; and chemical and biological munitions and field operations. Edgewood also conducts training of civilians and military personnel to respond to chemical and biological threats, cosponsoring some training with USAMRICD. In fiscal year 2013, 40.8 percent of Edgewood’s funding came from the CBDP Enterprise, and the remainder came from the Army (15.5 percent), non-CBDP Department of Defense (DOD) organizations (35.7 percent), federal agencies (4.1 percent), and nonfederal agencies (3.9 percent). As of October 2014, Edgewood had a staff of 1,421 whose work is focused on nonmedical materiel solutions to chemical and biological threats. Since 2008, Edgewood has completed projects intended to more safely perform the research and development required to address current and emerging chemical and biological threats. 
Changes at the facility's Advanced Chemical Laboratory include the addition of 10,000 square feet of state-of-the-art laboratories for safely handling emerging agents, including materials with no known medical countermeasures. According to Edgewood officials, planned changes at its Advanced Threat Defense Facility are expected to facilitate the expansion of emerging-threat bench-scale experiments to large-scale evaluations, enabling enhanced research capabilities, and to include unique infrastructure capabilities to address the challenges of emerging chemical threats from vapors, solids, liquids, and aerosols. According to Edgewood officials, the biggest challenge for the future is sustaining core intellectual and physical infrastructure in a time of budget austerity. Second, these officials stated that the lack of a funding mechanism for sustainment of the facility is a challenge. The Program Analysis and Integration Office (PAIO) determined that the cost of sustainment is about $26.4 million for fiscal year 2015. There is a plan to fund sustainment of the chemical and biological infrastructure to support the CBDP mission in the Fiscal Year 2015–2019 Program Objective Memorandum; however, as of January 2015, there was no agreement within the CBDP Enterprise to support the primary research and development facilities in this way. Third, officials told us that Edgewood is maintaining 28 abandoned buildings on its campus. Figure 4 shows an example of an abandoned facility at Edgewood. Building 3222, built in 1944 and now over 70 years old, was a medical research laboratory with about 33,000 square feet. Figure 5 shows another example of an abandoned facility at Edgewood. Building 3300, a chemistry laboratory used to develop and evaluate decontamination technology to mitigate chemical and biological threats, has about 44,350 square feet and was built in 1966.
According to Edgewood officials, it will cost over $74 million to demolish all 28 buildings, which is equivalent to about 1 year's facilities sustainment and support costs for the CBDP Enterprise's three primary research and development facilities combined. In addition, according to Edgewood officials, maintaining one of the most expensive of these buildings until it is demolished is estimated to cost about $600,000 a year. Officials said that the ability to maintain and expand their intellectual infrastructure is also strained in the current fiscal environment. Currently, Edgewood plans to maintain these facilities until funding becomes available to demolish the buildings. USAMRICD's mission is to discover and develop medical products and knowledge solutions against chemical and biochemical threats by means of research, education and training, and consultation. USAMRICD is located on Aberdeen Proving Ground, Maryland. Its core areas of work include analytics, which includes diagnostics, forensics, and the Absorption, Distribution, Metabolism, Excretion, Toxicology Center of Excellence to support drug development; agent mitigation, which includes personnel decontamination and bioscavenger enzymes to neutralize chemical warfare agents; toxicant countermeasures, which includes countermeasures against vesicants, metabolic poisons, and pulmonary toxicants; nerve agent countermeasures; and toxin countermeasures. USAMRICD develops educational tools and conducts training courses for military and civilian personnel, with emphasis on medical care of chemical casualties. USAMRICD's campus consists of 15 buildings and about 173,000 square feet of laboratories and support areas. In fiscal year 2013, about 61 percent of USAMRICD's funding came from the CBDP Enterprise, with about 15 percent coming from non-CBDP DOD organizations and about 24 percent coming from non-DOD federal organizations.
As of July 2014, USAMRICD had a staff of 362 personnel supporting its work to develop medical chemical defenses for the warfighter. According to USAMRICD officials, there have been no major upgrades or additions to the current infrastructure since 2008 due to the construction of a new building. USAMRICD officials said they expect to begin moving into the facility in 2015. According to USAMRICD officials, the laboratory and research support areas of the facility will consist of about 250,000 square feet across four buildings when the new facility is complete. The entire new facility is about 526,000 square feet and is on track to be designated a Leadership in Energy and Environmental Design facility. (A Leadership in Energy and Environmental Design program promotes "green" building design, green construction practices, and evaluation of the whole building's lifetime environmental performance.) Figure 6 shows USAMRICD's new headquarters and laboratory facility. USAMRIID's mission is to provide leading-edge medical capabilities to deter and defend against current and emerging biological threats. USAMRIID is located on the National Interagency Biodefense Campus at Fort Detrick, Maryland. After the terrorist attacks of September 11, 2001, additional funding allowed USAMRIID to increase its workforce to enhance its existing mission to address biological threats, to include biological threat characterization, enhanced studies of disease, and the development of medical countermeasures. Its core areas of work include preparing for uncertainty; research, development, test, and evaluation of medical countermeasures; rapidly identifying biological agents; training and educating the force; and providing expertise in medical biological defense. For example, USAMRIID conducts field training for operational forces in areas such as threat identification and diagnostic methods. Figure 7 shows an example of a USAMRIID field training exercise.
USAMRIID’s campus consists of 20 buildings and 582,369 square feet of laboratory and support space, with 134,469 square feet of Biosafety Level-2 (BSL-2), BSL-3, and BSL-4 laboratory space. According to USAMRIID officials, USAMRIID is the only DOD facility with BSL-4 containment laboratories. In addition, USAMRIID officials stated that about 80 percent of USAMRIID’s work is medical countermeasures research and development. Figure 8 shows USAMRIID staff in a BSL-4 containment laboratory. In 2012, the Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs (OASD ) assigned USAMRIID the responsibility of performing BSL-3 and BSL-4 developmental testing and evaluation of medical countermeasures. As a result, USAMRIID made adjustments to the facility’s laboratory infrastructure and retained key subject-matter experts required to perform studies under the Good Laboratory Practices system promulgated by the Food and Drug Administration. In fiscal year 2013, about 50 percent of USAMRIID’s funding came from the CBDP Enterprise, with about 38 percent coming from non-CBDP DOD organizations, and the remaining 12 percent from non-DOD federal agencies and non-federal agencies. As of July 2014, USAMRIID maintained a staff of 841 personnel to support its work in biological defense research. USAMRIID is constructing a new headquarters and laboratory building, and officials said they expect to begin moving into the building in 2017. According to USAMRIID officials, the new facility will provide additional laboratory space and a new laboratory design to improve workflow and productivity, particularly when performing animal studies. The new facility will include several new capabilities, which may enhance understanding of the pathophysiology of animals and the effectiveness of medical countermeasures to address biological threats. 
In response to our questionnaire, USAMRIID officials told us that they are concerned about a potential intellectual infrastructure gap in supporting medical countermeasures test and evaluation, a new responsibility as of 2013. The new mission will require USAMRIID personnel to meet additional standards for conducting research and testing. In addition, USAMRIID officials stated that sustaining their new facility will be a challenge. USAMRIID officials said that it would be helpful if the CBDP Enterprise provided stable sustainment funding in a way similar to the funding received by the test and evaluation facilities. PAIO estimated that the cost of sustainment and other support activities at USAMRIID is about $32.7 million in fiscal year 2015. Currently, the research and development facilities receive funds to sustain their facilities through individual research and development projects awarded by ODASD (CBD) through the Joint Science and Technology Office. West Desert Test Center (West Desert) at Dugway Proving Ground, Utah, has a mission to safely test warfighters' equipment to high standards within cost and schedule. West Desert enables the delivery of reliable defense products to the warfighter through rigorous developmental and operational testing, from the test tube to the battlefield. Its core areas of work include chemical and biological laboratory, chamber, and field testing; dissemination and explosives; dispersion modeling; meteorology; data science; and test engineering and integration. Dugway Proving Ground is one of DOD's major range and test facility bases. In fiscal year 2013, about 77 percent of West Desert's work was conducted for the CBDP Enterprise, with the rest coming from other DOD organizations (15 percent), non-DOD federal government agencies (2 percent), and industry, academia, and international organizations (6 percent).
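The funding-share figures reported for the four primary facilities can be cross-checked with simple arithmetic. The following sketch is illustrative only and is not part of GAO's methodology; it uses the "about" percentages quoted in the facility descriptions above, with shorthand category labels rather than official designations, and confirms that each facility's reported fiscal year 2013 shares sum to roughly 100 percent:

```python
# Fiscal year 2013 funding shares (percent) as reported in the text for the
# CBDP Enterprise's four primary facilities. Category labels are shorthand.
funding_shares = {
    "Edgewood": {"CBDP": 40.8, "Army": 15.5, "non-CBDP DOD": 35.7,
                 "other federal": 4.1, "nonfederal": 3.9},
    "USAMRICD": {"CBDP": 61.0, "non-CBDP DOD": 15.0,
                 "non-DOD federal": 24.0},
    "USAMRIID": {"CBDP": 50.0, "non-CBDP DOD": 38.0,
                 "non-DOD federal and nonfederal": 12.0},
    "West Desert": {"CBDP": 77.0, "other DOD": 15.0, "non-DOD federal": 2.0,
                    "industry, academia, international": 6.0},
}

for facility, shares in funding_shares.items():
    total = sum(shares.values())
    # "About" figures in the report are rounded, so allow a small tolerance.
    assert abs(total - 100.0) <= 1.0, f"{facility}: shares sum to {total}"
    print(f"{facility}: {total:.1f} percent")
```

Each facility's shares reconcile to 100 percent within rounding, which suggests the reported breakdowns are internally consistent rather than partial.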
In response to section 232 of the Bob Stump National Defense Authorization Act for Fiscal Year 2003, West Desert charges DOD customers for the costs that are directly related to testing. Therefore, West Desert receives annual funding through the Army and OASD (NCB) for facility sustainment. As of July 2015, West Desert had a staff of 518 personnel on a facility of about 1,252 square miles, including mountain terrain, mixed desert terrain, and salt flats. According to West Desert officials, infrastructure capabilities added since 2008 include upgrades and improvements to their test grid and dynamic test chamber. Additionally, West Desert has two major ongoing efforts to align infrastructure with emerging chemical and biological threats. The first, the Whole System Live Agent Testing Chamber, allows full-system testing of biological detection equipment in a BSL-3 environment with controlled humidity and wind speed—a capability that does not exist elsewhere. The second capability, the Modular Chemical Chamber Test Capabilities, tests warfighter capabilities against emerging chemical threats. This testing capability will include the installation of Secondary Containment Modules that can be rolled into and out of a large multipurpose chemical-warfare-agent-testing chamber in West Desert's Bushnell Materiel Testing Facility. The use of modular chambers allows for reconfiguration of the facility for upcoming tests while other testing is being conducted within the Bushnell Materiel Testing Facility. According to West Desert officials, this modular concept is expected to reduce test costs and timelines while increasing test throughput and adding flexibility in meeting customer test requirements.
Regarding future plans to ensure that its infrastructure is aligned to address emerging threats, West Desert officials stated that upcoming test requirements for conventional agents are in place and that priorities for future capabilities will focus on the ability to rigorously test military systems against threats from nontraditional chemical agents and toxic industrial chemicals and materials. In addition, West Desert is constructing an addition to its Life Sciences Test Facility. This annex, which will support testing of field and chamber samples and analysis of test data, among other uses, will include about 41,200 square feet, with about 16,200 square feet of BSL-2 and BSL-3 laboratories, including an aerosol chamber. According to West Desert officials, this facility will address a current shortfall in BSL-3 laboratory and chamber testing capacity. Figure 9 shows West Desert's new Life Sciences Test Facility annex. West Desert officials identified potential gaps in West Desert's physical infrastructure and knowledge and skill capabilities. West Desert plans to establish a nontraditional (chemical) agent staging facility to support the modular test chambers being installed in the Bushnell Materiel Testing Facility. According to West Desert officials, as of January 2015, the project had not been approved for funding through the Military Construction–Defense budget account. In addition, officials have identified gaps in subject-matter expertise in molecular biology, virology, chemical engineering, analytical chemistry, aerosol-dissemination technology, information technology, catalysis, and automation technology. According to West Desert officials, government compensation restrictions will likely preclude the hiring of full-time personnel in the areas of information technology and chemical engineering. The Joint Requirements Office developed a list of capabilities needed by military forces to defend against chemical and biological threats in an operational environment.
As shown in figure 10, the 18 core capabilities are categorized into four areas: Sense, Shape, Shield, and Sustain. These four areas are described as follows:

- The "Sense" area is the capability to continually provide information about the chemical, biological, radiological, and nuclear (CBRN) situation at a time and place by detecting, identifying, and quantifying CBRN hazards in air or water and on land, personnel, equipment, or facilities. This capability includes detecting, identifying, and quantifying those CBRN hazards in all physical states (solid, liquid, and gas).
- The "Shape" area provides the ability to characterize the CBRN hazard to the force commander and to develop a clear understanding of the current and predicted CBRN situation, and to collect, query, and assimilate information from sensors, intelligence, and medical personnel in near-real time to inform personnel, among other actions and responsibilities.
- The "Shield" area capabilities provide protection to the force from chemical and biological threats by preventing or reducing individual and collective exposures, applying prophylaxis to prevent or mitigate negative physiological effects, and protecting critical equipment.
- The "Sustain" area capabilities allow forces to conduct decontamination and medical actions that enable the quick restoration of combat power, maintain or recover essential functions that are free from the effects of CBRN hazards, and facilitate the return to preincident operational capability as soon as possible.

Since 1999, we have found potential fragmentation, overlap, and duplication of the federal government's chemical and biological research and development laboratory facilities, but we have also found improved coordination among federal agencies developing biological countermeasures.
In 1999 and 2000, prior to the anthrax attacks in the United States, we found ineffective coordination among the Department of Defense (DOD) and other federal agencies with chemical and biological programs that could result in potential gaps or overlap in research and development programs. In August 1999, we found that the formal and informal program coordination mechanisms that existed between four military and civilian nonmedical chemical and biological programs may not ensure that potential overlap, gaps, and opportunities for collaboration would be addressed. Specifically, we found that coordinating mechanisms between DOD’s Chemical and Biological Defense Program (CBDP), DOD’s Defense Advanced Research Projects Agency’s Biological Warfare Program, the Department of Energy’s Chemical and Biological Nonproliferation Program, and the Counterterror Technical Support Program lacked information on prioritized user needs, lacked validated chemical and biological defense equipment requirements, and lacked information on how these programs relate their research and development projects to needs. We concluded that information on user needs and defined requirements may allow coordination mechanisms to compare the specific goals and objectives of research and development projects to better assess whether overlaps, gaps, and opportunities for collaboration exist. We did not make recommendations in this report. In July 2014, we testified before the House Committee on Energy and Commerce Subcommittee on Oversight and Investigations on recent incidents at government high-containment laboratories and the need for strategic planning and oversight of high-containment laboratories. In September 2009, we found that there was no federal entity responsible for strategic planning and oversight of high-containment laboratories— those designed for handling dangerous pathogens and emerging infectious diseases—across the federal government. 
We concluded in September 2009 that without an entity responsible for oversight and visibility across the high-containment laboratories and a strategy for requirements for the laboratories, there was little assurance of having facilities with the right capacity to meet the nation’s needs. We made several recommendations to address these issues, including identifying a single entity charged with periodic government-wide strategic evaluation of high-containment laboratories, developing a mechanism for sharing lessons learned from reported laboratory accidents, and implementing a personnel reliability program for high-containment laboratories, among other recommendations. In our February 2013 report on high-containment laboratories, we made two recommendations—first, that periodic assessment of national biodefense research and development needs be conducted and, second, that the Executive Office of the President, Office of Science and Technology Policy, examine the need to establish national standards for high-containment laboratories. The Executive Office of the President, Office of Science and Technology Policy, concurred with our two recommendations. Regarding biosurveillance, in June 2010, we found that the federal government could benefit from a focal point that provides leadership to the interagency community developing this capability. Biosurveillance is the ability to provide early detection and situational awareness of potentially catastrophic biological events. Specifically, we found that the mission responsibilities and resources needed to develop a biosurveillance capability were dispersed across a number of federal agencies, creating the potential for overlap and duplication of effort. In addition, we found that there was no broad, integrated national strategy that encompassed all stakeholders with biodefense responsibilities to guide the prioritization and allocation of investment across the entire biodefense enterprise, among other responsibilities. 
We made two recommendations to the Homeland Security Council within the Executive Office of the President to (1) identify a focal point, which was implemented when an Interagency Policy Group was convened to complete a National Biosurveillance Strategy in 2012, and (2) develop a national biosurveillance strategy, which remains open until a mechanism to identify resource and investment needs, including investment priorities, is included in an implementation plan. See GAO, Public Health Preparedness: Developing and Acquiring Medical Countermeasures Against Chemical, Biological, Radiological, and Nuclear Agents, GAO-11-567T (Washington, D.C.: Apr. 13, 2011). In May 2014, we found that DOD coordinated medical countermeasure development through a decision-making body responsible for providing recommendations to the Secretary of Health and Human Services on coordination of medical countermeasures development against chemical and biological threats, among other responsibilities. Similarly, during our current review, we found coordination of efforts among federal agencies located on the National Interagency Biodefense Campus. The following textbox provides our observations on the program's efforts at the National Interagency Biodefense Campus to collaborate with other federal agencies to reduce potential infrastructure fragmentation, overlap, and duplication.

GAO Observations on the National Interagency Biodefense Campus

The National Interagency Biodefense Campus at Fort Detrick, Maryland, was established in 2004. An official with the U.S. Army Medical Research and Materiel Command testified before the House Select Committee on Homeland Security in 2004 that the campus would share common infrastructure and supporting requirements, such as roadways, libraries, and regulatory and quality assurance responsibility. In addition, the official stated that the campus would minimize duplication of effort, technology, and infrastructure.
During the course of our review, we found some examples of actions taken by the CBDP Enterprise's primary research and development facility at the National Interagency Biodefense Campus to reduce the potential for duplication of physical and intellectual infrastructure. A set of studies on medical countermeasure test and evaluation facility requirements, conducted for the U.S. Army Assistant Chief of Staff for Facilities, Planning and Programming Division, determined, among other things, that there was sufficient capacity for holding animals in existing facilities that conduct research with animals. During planning for a new medical countermeasure test and evaluation facility, a decision was made that the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) on the National Interagency Biodefense Campus would cancel its own plans to construct this building, including an animal holding facility (vivarium). According to USAMRIID officials, the cancellation had an overall estimated cost savings of about $600 million. USAMRIID and the National Institute of Allergy and Infectious Diseases Integrated Research Facility plan to share Biosafety Level-3 (BSL-3) and BSL-4 imaging laboratory capabilities. USAMRIID officials said that this reduces the need for each facility to have its own BSL-3 and BSL-4 imaging laboratory. The National Interagency Confederation for Biological Research, a governance structure for the National Interagency Biodefense Campus, encourages intellectual collaboration in efforts related to research of biological pathogens across agency boundaries, such as collaborative award programs and annual scientific forums. We have conducted a number of reviews since 1999 on the efforts of federal agencies to reduce potential fragmentation, overlap, and duplication through coordination of their efforts to manage chemical and biological programs.
We found improved coordination that may reduce potential fragmentation, overlap, and duplication of research and development of medical countermeasures. In addition to the contact named above, GAO staff who made significant contributions to this report include Mark A. Pross, Assistant Director; Richard Burkard; Russ Burnett; Jennifer Cheung; Rajiv D'Cruz; Karen Doran; Edward George; Mary Catherine Hult; Mae Jones; Amie Lesser; Elizabeth Morris; Steven Putansu; Sushil Sharma; Sarah Veale; and Michael Willems.

High-Containment Laboratories: Recent Incidents of Biosafety Lapses. GAO-14-785T. Washington, D.C.: July 16, 2014.
Biological Defense: DOD Has Strengthened Coordination on Medical Countermeasures but Can Improve Its Process for Threat Prioritization. GAO-14-442. Washington, D.C.: May 15, 2014.
High-Containment Laboratories: Assessment of the Nation's Need Is Missing. GAO-13-466R. Washington, D.C.: February 25, 2013.
Public Health Preparedness: Developing and Acquiring Medical Countermeasures Against Chemical, Biological, Radiological, and Nuclear Agents. GAO-11-567T. Washington, D.C.: April 13, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
Biosurveillance: Efforts to Develop a National Biosurveillance Capability Need a National Strategy and a Designated Leader. GAO-10-645. Washington, D.C.: June 30, 2010.
High-Containment Laboratories: National Strategy for Oversight Is Needed. GAO-09-1045T. Washington, D.C.: September 22, 2009.
High-Containment Laboratories: National Strategy for Oversight Is Needed. GAO-09-1036T. Washington, D.C.: September 22, 2009.
High-Containment Laboratories: National Strategy for Oversight Is Needed. GAO-09-574. Washington, D.C.: September 21, 2009.
Chemical and Biological Defense: Observations on DOD's Risk Assessment of Defense Capabilities. GAO-03-137T. Washington, D.C.: October 1, 2002.
Chemical and Biological Defense: Coordination of Nonmedical Chemical and Biological R&D Programs. GAO/NSIAD-99-160. Washington, D.C.: August 16, 1999.
The United States faces current and emerging chemical and biological threats, and defenses against these threats enable DOD to protect the force, preclude strategic gains by adversaries, and reduce risk to U.S. interests. GAO was asked to review DOD efforts to manage its chemical and biological defense infrastructure capabilities. This report examines the extent to which the CBDP Enterprise has: (1) achieved its goal to identify required infrastructure capabilities to address current and emerging chemical and biological threats; (2) identified, addressed, and managed potential fragmentation, overlap, and duplication in its chemical and biological defense infrastructure; and (3) used and plans to use threat data and the results of risk assessments to support its investment planning for chemical and biological defense. GAO analyzed CBDP infrastructure policies, plans, and studies from organizations across the CBDP Enterprise from fiscal years 2008 through 2014. A key component of the 26 Department of Defense (DOD) organizations that constitute the Chemical and Biological Defense Program (CBDP) Enterprise is the chemical and biological defense research and development and test and evaluation infrastructure. After nearly 7 years, the CBDP Enterprise has not fully achieved its goal to identify required infrastructure capabilities. The Joint Chemical, Biological, Radiological, and Nuclear Defense Program Analysis and Integration Office (PAIO), CBDP's analytical arm, recommended in 2008 that the CBDP Enterprise identify required infrastructure capabilities, such as laboratories to research chemical and biological agents, to ensure alignment of the infrastructure to its mission. CBDP Enterprise officials recognize the importance, validity, and necessity of addressing the 2008 recommendation. 
The CBDP Enterprise has made limited progress in achieving this infrastructure goal because CBDP Enterprise officials told GAO that they were focused on higher priorities and had no CBDP Enterprise-wide impetus to address the infrastructure recommendations. The Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs previously identified the need for an entity that has the responsibility and authority needed to ensure achievement of this goal, but DOD has not designated such an entity. By identifying and designating an entity with the responsibility and authority to lead infrastructure transformation, the CBDP Enterprise would be better positioned to achieve this goal. The CBDP Enterprise has taken some actions at its laboratories to identify duplication in its chemical and biological defense infrastructure. DOD directives outline goals, such as to avoid duplication by using existing DOD and other federal agencies' facilities. As part of an ongoing study to identify required infrastructure, in July 2015 PAIO plans to inventory and analyze CBDP Enterprise infrastructure for potential duplication. However, study officials stated that they do not plan to identify, request, or consider information about infrastructure capabilities from existing studies of other federal agencies, such as the Department of Homeland Security, because their office does not have the authority or resources to require such information. By considering existing information, which would not necessarily require new authority, PAIO will have more information about existing infrastructure inventory across the federal government, such as its capability and potential availability for use. The CBDP Enterprise used threat data and plans to use threat data and the results from risk assessments piloted in 2014 to support its future portfolio planning process to prioritize research and development investment. 
However, the CBDP Enterprise has not updated its guidance and planning process to fully institutionalize the use of risk assessments. Federal standards for internal control state that agencies should have written procedures to better ensure leadership directives are implemented. According to CBDP Enterprise officials, while updating the guidance would be beneficial, they had not committed to updating such guidance or established a time frame for doing so. By updating its guidance to fully institutionalize the use of risk assessments, the CBDP Enterprise would be better positioned to prioritize future research and development investments. GAO recommends, among other things, that DOD (1) designate an entity to lead the effort to identify required infrastructure; (2) identify, request, and consider any information from chemical and biological infrastructure studies of other federal agencies to avoid potential duplication; and (3) update the CBDP Enterprise's guidance and planning process to fully institutionalize the use of risk assessments. DOD concurred with all five of GAO's recommendations and discussed actions it plans to take.
Overview of the disaster recovery process. According to the Department of Homeland Security's National Response Framework, once immediate lifesaving activities are complete after a major disaster, the focus shifts to assisting individuals, households, critical infrastructure, and businesses in meeting basic needs and returning to self-sufficiency. Even as the immediate imperatives for response to an incident are being addressed, the need to begin recovery operations emerges. The emphasis on response gradually gives way to recovery operations. During the recovery phase, actions are taken to help individuals, communities, and the nation return to normal. The National Response Framework characterizes disaster recovery as having two phases: short-term recovery and long-term recovery. Short-term recovery is immediate and an extension of the response phase in which basic services and functions are restored. It includes actions such as providing essential public health and safety services, restoring interrupted utility and other essential services, reestablishing transportation routes, and providing food and shelter for those displaced by the incident. Although called short-term, some of these activities may last for weeks. Long-term recovery may involve some of the same actions as short-term recovery but may continue for a number of months or years, depending on the severity and extent of the damage sustained. It involves restoring both the individual and the community, including the complete redevelopment of damaged areas. Some examples of long-term recovery include providing permanent disaster-resistant housing units to replace those destroyed, initiating a low-interest façade loan program for the portion of the downtown area that sustained damage from the disaster, and initiating a buyout of flood-prone properties and designating them community open space.
As the President has previously noted, state and local leaders have the primary role in planning for recovery efforts. Under the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act), the federal government is authorized to provide assistance to those jurisdictions in carrying out their responsibilities to alleviate suffering and damage that result from disasters. In general, the federal role under the Stafford Act is to assist state and local governments, which have the primary role in recovery efforts. In major disasters where the event overwhelms the capacity of state and local governments, the federal government can offer more assistance to supplement the efforts and available resources of states, local governments, and disaster relief organizations in alleviating the damage, loss, hardship, or suffering caused by the disaster. After a major disaster, the federal government may provide unemployment assistance; food coupons to low-income households; and repair, restoration, and replacement of certain damaged facilities, among other things. For example, the city of New Orleans estimated this April that the federal government will provide over $15 billion for the rebuilding of the city through numerous disaster assistance programs, including FEMA's Public Assistance Grant Program and Community Disaster Loan program, and the Department of Housing and Urban Development's Community Development Block Grants program. Nevertheless, state and local governments have the main responsibility of applying for, receiving, and implementing federal assistance. Further, they make decisions about what priorities and projects the community will undertake for recovery. Impact of Hurricanes Gustav and Ike. Hurricanes Gustav and Ike made landfall along the Gulf Coast this month, resulting in federal major disaster declarations for 95 counties in Texas, Louisiana, and Alabama (see fig. 1).
Gustav made landfall near Cocodrie, Louisiana, as a category 2 hurricane on September 1, 2008. Ike made landfall as a category 2 hurricane near Galveston, Texas, on September 13, 2008. These hurricanes have caused widespread damage to affected Gulf Coast states. For example, the state of Louisiana has confirmed 10 Gustav-related deaths. Recent press accounts have attributed the deaths of about 50 people in the United States to Hurricane Ike. Further, Hurricanes Gustav and Ike have significantly disrupted utility service as well as oil and natural gas production in the Gulf Coast. Specifically, Gustav caused power outages for over 1.1 million Louisiana and Mississippi customers, while over 2.2 million customers in Texas lost power after Ike made landfall. The hurricanes have also affected oil and natural gas production in the Gulf Coast. Most of the refineries in Gustav's path were affected, resulting in a 100 percent reduction in crude oil production. Almost all refineries in Ike's path shut down, reducing crude oil production in the area by 99.9 percent. Over half of the 39 major natural gas processing plants in the affected areas have ceased operations as a result of Hurricanes Gustav and Ike, reducing the total operating capacity of the region by 65 percent. Given the recent landfall of these hurricanes, comprehensive damage assessments from government agencies were not available at the time of this report's issuance. Impact of the 2008 Midwest Floods. Heavy rainfall across much of the northern half of the Great Plains during early June 2008 resulted in river flooding. This flooding became increasingly severe as heavy rain continued into the second week of June and rising rivers threatened dams and levees and submerged large areas of farmland along with many cities and towns. As a result, the President issued federal major disaster declarations for counties in seven states: Illinois, Indiana, Iowa, Missouri, Minnesota, Nebraska, and Wisconsin (see fig. 2).
The flooding resulted in widespread damage for some communities in these states. For example, the rivers in Cedar Rapids, Iowa, crested over 30 feet, flooding 10 square miles of the city and displacing over 18,000 people and several city facilities, including the city hall, police department, and fire station. The flooding also affected agricultural production in these states. For example, the state of Indiana estimates the floods will result in a crop shortfall of $800 million in the coming year and $200 million in damaged farmlands. To identify insights from past disasters, we interviewed officials involved in disaster recovery in the United States and Japan. Domestically, we met with officials from state and local governments affected by the selected disasters, as well as representatives of nongovernmental organizations involved in long-term recovery. In Japan, we met with officials from the government of Japan, Hyogo Prefecture, and the city of Kobe. In addition, we also interviewed over 40 experts—both domestic and international—on the subject of disaster recovery. We visited the key communities affected by five of the six disasters in our study to meet officials involved in the recovery effort and examine current conditions. While we did not visit communities affected by the Red River flood, we were able to gather the necessary information through interviews by telephone with key officials involved in the recovery as well as recovery experts knowledgeable about the disaster. Further, we obtained and reviewed legislation, ordinances, policies, and program documents that described steps taken to facilitate long-term recovery following each of our selected disasters. The scope of our work did not include independent evaluation or verification regarding the extent to which the communities' recovery efforts were successful. We also drew on previous work we have conducted on recovery efforts in the aftermath of the 2005 Gulf Coast hurricanes.
We have issued findings and recommendations on several aspects of the Gulf Coast recovery, including protecting federal disaster programs from fraud, waste, and abuse; providing tax incentives to assist recovery; and determining the role of the nonprofit sector in providing assistance to that region. See figure 3 for the locations of the six disasters that we selected for this review. We reviewed lessons from past disasters and collected information about the impact of Hurricanes Ike and Gustav and the 2008 Midwest floods from June 2007 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. After a major disaster, a recovery plan can provide state and local governments with a valuable tool to document and communicate recovery goals, decisions, and priorities. Such plans offer communities a roadmap as they begin the process of short- and long-term recovery. The process taken to develop these plans also allows state and local governments to involve the community in identifying recovery goals and priorities. After past disasters, the federal government has both funded and provided technical assistance on how to create such plans. In our review of recovery plans that state and local governments created after major disasters, we identified three key characteristics. Specifically, these plans (1) identified clear goals for recovery, (2) included detailed information to facilitate implementation, and (3) were established in a timely manner. A recovery plan containing clear goals can provide direction and specific objectives for communities to focus on and strive for. 
Clear recovery goals can also help state and local governments prioritize projects, allocate resources, and establish a basis for subsequent evaluations of the recovery progress. After the 1995 Kobe earthquake in Japan, the areas most hard-hit by the disaster—Hyogo prefecture and the city of Kobe—identified specific recovery goals to include in their plans. Among these were the goals of rebuilding all damaged housing units in 3 years, removing all temporary housing within 5 years, and completing physical recovery in 10 years. According to later evaluations of Kobe's recovery conducted by the city and recovery experts, these goals were critical for helping to coordinate the wide range of participants involved in the recovery. In addition, they helped to inform the national government's subsequent decisions for funding recovery projects in these areas. These goals also allowed the government to communicate its recovery progress with the public. Each month, information on progress made towards achieving these goals was provided to the public on-line and to the media at press conferences. This communication helped to inform the public about the government's recovery progress on a periodic basis. Further, these goals provided a basis for assessing the recovery progress a few years after the earthquake. Both Hyogo and Kobe convened panels of international and domestic experts on disaster recovery as well as community members to assess the progress made on these targets and other recovery issues. These evaluations enabled policymakers to measure the region's progress towards recovery, identify needed changes to existing policies, and learn lessons for future disasters. Similar efforts to inform the public about the government's recovery progress are being taken in the wake of the 2005 Gulf Coast hurricanes. In February 2008, FEMA and the Federal Coordinator of Gulf Coast Recovery launched the Transparency Initiative.
This web-based information sharing effort provides detailed information about selected buildings and types of projects in the Gulf Coast receiving Public Assistance grants. For example, the web site provides information on whether specific New Orleans schools are open or closed and how much federal funding is available for each school site. To do this, FEMA and Federal Coordinator staff pulled together information from state and local governments as well as data on all Public Assistance grants for permanent infrastructure throughout the Gulf Coast. According to the Office of the Federal Coordinator, the initiative has been useful in providing information on federal funds available and the status of infrastructure projects in a clear and understandable way to the general public and a wide range of stakeholders. With the uncertainty that can exist after major disasters, the inclusion of detailed implementation information in recovery plans can help communities realize their recovery goals. Implementable recovery plans specify objectives and tasks, clarify roles and responsibilities, and identify potential funding sources. Approximately 3 months after the 1997 Red River flood, the city of Grand Forks approved a recovery plan with these characteristics that helped the city take action towards achieving recovery. First, the Grand Forks plan identified 5 broad recovery goals covering areas such as housing and community redevelopment, business redevelopment, and infrastructure rehabilitation. The plan detailed a number of supporting objectives and tasks to be implemented in order to achieve the stated goals. For example, one of the 5 goals included in the plan was to clean up, repair, and rehabilitate the city's infrastructure and restore public services to pre-flood conditions. The plan outlined 5 objectives to accomplish that goal, including repairing and rehabilitating the city's water distribution and treatment facilities.
Some of the tasks specified in the plan to achieve that objective are repairing pumping stations, fixing water meters, and completing a 2-mile limit drainage master plan. Additionally, the plan identified a target completion date for each task so that the city could better manage the mix of short- and long-term activities necessary to recover. Second, the Grand Forks recovery plan clearly identified roles and responsibilities associated with the specific tasks, which helped with achieving broader recovery goals. To do this, the plan identified which personnel—drawn from city, state, and federal agencies—would be needed to carry out each task. For example, the plan called for collaboration of staff from the city's urban development and engineering/building inspection departments, FEMA, and the Army Corps of Engineers to create an inventory of substantially damaged buildings in the downtown area. By clarifying the roles and responsibilities of those who would be involved in accomplishing specific tasks, the plan provided detailed information to facilitate implementation. Third, the Grand Forks plan also identified funding sources for each recovery task. For example, to fund the task of cleaning up and repairing street lights (which would help achieve the objective of cleaning, repairing, and rehabilitating the city's streets), the plan referenced sources from FEMA's Public Assistance Grant Program, the state of North Dakota, and the city's general revenue fund. The plan contained a detailed financing matrix, organized by the broader recovery goals identified in the body of the plan, which identified various funding sources for each task (see fig. 4). The matrix also included a target completion date for each task. A city evaluation of the recovery plan found that the process of specifying goals and identifying funding sources allowed the city to conceive and formulate projects in collaboration with the city council and representatives from state and local governments.
This helped Grand Forks meet its recovery needs as well as adhere to federal and state disaster assistance funding laws and regulations. The recovery plans created by the Hyogo and Kobe governments after the 1995 earthquake also helped to facilitate the funding of recovery projects. They served as the basis of discussions with the national government regarding recovery funding by clearly communicating local goals and needs. Towards this end, Hyogo and Kobe submitted their recovery plans to a centralized recovery organization that included officials from several national agencies, including the Ministry of Finance and the Ministry of Construction. Ministry staff worked with local officials to reconcile the needs identified in their recovery plans with national funding constraints and priorities. As a result of this process, local officials were able to adjust their recovery plans to reflect national budget and funding realities. Some state and local governments quickly completed recovery plans just a few months after a major disaster. These plans helped to facilitate the ensuing recovery process by providing a clear framework early on. The regional governments affected by the Kobe earthquake promptly created recovery plans to help ensure that they could take advantage of central government funding as soon as possible. After the earthquake, there was a relatively short amount of time to submit proposals for the national budget in order to be considered for the coming year. Facing this deadline, local officials devised a two-phase strategy: first, quickly develop a plan that identified broad recovery goals and provided a basis for budget requests in time for the national budget deadline. After that initial planning phase, the governments collaborated with residents to develop detailed plans for specific communities.
In the first phase, Kobe focused on creating a general plan to identify broad recovery goals, such as building quality housing, restoring transportation infrastructure, and building a safer city. This first plan was issued 2 months after the earthquake and contained 1,000 projects with a budget of $90 billion. It was designed to establish the framework for recovery actions and to provide the basis for obtaining central government funds. In the second phase, the city involved residents and local organizations, including businesses and community groups, to develop a more detailed plan for the recovery of specific neighborhoods. Work on this second plan began 6 months after the earthquake. The two-phase planning process enabled Kobe and Hyogo to meet their tight national budget submission deadline while allowing additional time for communities to develop specific recovery strategies. Given the lead role that state and local governments play in disaster recovery, their ability to act effectively directly affects how well communities recover after a major disaster. There are different types of capacity that can be enhanced to facilitate the recovery process. One such capacity is the ability of state and local governments to make use of various kinds of disaster assistance. The federal government—along with other recovery stakeholders, such as nongovernmental organizations—plays a key supporting role by providing financial assistance through a range of programs to help affected jurisdictions recover after a major disaster. However, state and local governments may need certain capacities to effectively make use of this federal assistance, including having financial resources and technical know-how. More specifically, state and local governments are often required to match a portion of the federal disaster assistance they receive.
Further, affected jurisdictions may also need additional technical assistance on how to correctly and effectively process applications and maintain required paperwork. Following Hurricanes Ike and Gustav and the Midwest floods earlier this year, building up these capacities may improve affected jurisdictions’ ability to navigate federal disaster programs. After a major disaster, state and local governments may not have adequate financial capacity to perform many short- and long-term recovery activities, such as continuing government operations and paying for rebuilding projects. The widespread destruction caused by major disasters can impose significant unbudgeted expenses while at the same time decimate the local tax base. Further, federal disaster programs often require state and local governments to match a portion of the assistance they receive, which may pose an additional financial burden. In the past, affected jurisdictions have used loans from a variety of sources including federal and state governments to enhance their local financial capacity. For example, the Stafford Act authorizes FEMA to administer the Community Disaster Loan program which can be used by local governments to provide essential postdisaster services. Additionally, affected localities have used special taxes to build local financial capacity after major disasters. Providing a loan to local governments is one way to build financial capacity after a disaster. Soon after the 1997 Red River flood, the state-owned Bank of North Dakota provided a line of credit totaling over $44 million to the city of Grand Forks. The city used this loan to meet FEMA matching requirements, provide cash flow for the city government to meet operating expenses, and fund recovery projects that commenced before the arrival of federal assistance. The city of New Orleans also sought state loans to help build financial capacity in the aftermath of the 2005 Gulf Coast hurricanes. 
The city is working with Louisiana to develop a construction fund to facilitate recovery projects. The fund would enable New Orleans to have greater access to money to fund projects upfront and reduce the level of debt that the city would otherwise incur.

Another way to augment local financial capacity is to raise revenue through temporary taxes that local governments can target according to their recovery needs. After the 1989 Loma Prieta earthquake, voters in Santa Cruz County took steps to provide additional financial capacity to affected localities. About 1 year after the disaster, the county implemented a tax increment, called “Measure E,” that increased the county sales tax by ½ cent for 6 years. The proceeds were targeted to damaged areas within the county based on an allocation approved by voters. Measure E generated approximately $12 million for the city of Santa Cruz, $15 million for the city of Watsonville, and $17 million for unincorporated areas of Santa Cruz County. According to officials from Watsonville and Santa Cruz, Measure E provided a critical source of extra funding for affected Santa Cruz County localities. For example, officials from Watsonville (whose annual general fund budget was about $17 million prior to the earthquake) used proceeds from Measure E to meet matching requirements for FEMA’s Public Assistance Grant Program. These officials also used Measure E to offset economic losses from the earthquake, as well as to provide financing for various recovery projects, such as creating programs to repair damaged homes and hiring consultants that helped the community plan for long-term recovery. While raising local sales taxes may not be a feasible option for all communities, Santa Cruz officials recognized the willingness of county voters to support this strategy.
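The arithmetic behind a temporary sales tax increment like Measure E is straightforward to sketch. In the illustration below, the annual taxable sales figure is a hypothetical assumption; the report gives only the resulting proceeds (roughly $44 million across the three jurisdictions over the 6 years), and the allocation shares are back-calculated from the reported $12 million/$15 million/$17 million split for illustration only.

```python
def increment_revenue(annual_taxable_sales, rate=0.005, years=6):
    """Total proceeds of a temporary sales tax increment.

    Measure E raised the county sales tax by 1/2 cent on the dollar
    (a 0.5 percent rate) for 6 years; proceeds scale with taxable sales.
    """
    return annual_taxable_sales * rate * years

# Hypothetical: about $1.5 billion in annual countywide taxable sales
# would yield proceeds on the order of the $44 million the report
# says Measure E actually produced.
total = increment_revenue(1_500_000_000)

# Proceeds were then targeted by a voter-approved allocation; these
# shares mirror the reported $12M/$15M/$17M split.
shares = {"Santa Cruz": 12, "Watsonville": 15, "unincorporated": 17}
allocation = {k: total * v / sum(shares.values()) for k, v in shares.items()}
```

The sketch makes the policy trade-off visible: a small, time-limited rate increase can produce recovery funding comparable to a city's entire annual general fund budget, provided the local sales base survives the disaster.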
Similarly, state and local governments in the Gulf Coast and Midwest states can look to develop strategies for increasing financial capacity in ways that are both practical and appropriate for their communities.

State and local governments face the challenge of implementing the wide range of federal programs that provide assistance for recovery from major disasters. Some of these federal programs require a certain amount of technical know-how to navigate. For example, FEMA’s Public Assistance Grant Program has complicated paperwork requirements and multistage application processes that can place considerable demands on applicants. After the 2005 Gulf Coast hurricanes, FEMA and Mississippi state officials used federal funding to obtain an on-line accounting system that tracked and facilitated the sharing of operational documents, thereby reducing the burden on applicants of meeting Public Assistance Grant Program requirements. According to state and local officials, the state contracted with an accounting firm that worked hand-in-hand with applicants to regularly scan and transmit documentation on architectural and engineering estimates, contractor receipts, and related materials through this Web-based system. As a result, FEMA and the state had immediate access to key documents that helped them make project approval decisions. Further, local officials reported that this information-sharing tool, along with contractor staff from the accounting firm, helped relieve the documentation and resulting human capital burdens that state and local applicants to the Public Assistance Grant Program faced during project development.

Business recovery is a key element of a community’s recovery after a major disaster. Small businesses are especially vulnerable to these events because they often lack resources to sustain physical losses and have little ability to adjust to market changes. Widespread failure of individual businesses may hinder a community’s recovery.
Federal, state, and local governments have developed strategies to facilitate business recovery, including several targeted at small businesses. These strategies helped businesses adapt to postdisaster market conditions, helped reduce business relocation, and allowed businesses to borrow funds at lower interest rates than would have been otherwise available. Major disasters can change communities in ways that require businesses to adapt. For example, following Hurricane Andrew, large numbers of people left south Miami-Dade County. The closing of Homestead Air Force Base, which was permanently evacuated just hours before the hurricane struck, reduced the population of the area significantly. Moreover, the base closure removed families and individuals with reliable incomes and spending power. Following the departure of Air Force personnel and dependents, winter residents and retired people also left in great numbers, never to return. Today, the city of Homestead is an entirely different place as community demographics have changed dramatically. Businesses that did not adapt to this new reality did not survive. The extent to which business owners can recognize change and adapt to the postdisaster market for goods and services can help those firms attain long-term viability after a disaster. Recognizing this after the Northridge earthquake, Los Angeles officials assisted neighborhood businesses in adapting to short- and long-term changes, using a combination of federal, state, and local funds. The Northridge earthquake caused uneven damage throughout the Los Angeles area, leaving some neighborhoods largely intact while creating pockets of damaged, abandoned buildings. Businesses in these areas suffered physical damage and the loss of customers when area residents abandoned their homes. 
The Valley Economic Development Center (VEDC), a local non-profit, established an outreach and counseling program to provide direct technical assistance to affected businesses throughout the San Fernando Valley after the Northridge earthquake. With funding from the city of Los Angeles, the state of California, and the Small Business Administration, VEDC provided guidance on obtaining federal and local governmental financial assistance, as well as strategies for adjusting to changes in the business environment. Toward this end, VEDC staff went door-to-door in affected business districts, served as a clearinghouse for information on earthquake recovery, sponsored workshops, reached out to business owners, and collected detailed information about businesses. VEDC also hosted conferences that taught business owners how to strategically market goods and services given the changed demographics. Speakers at these conferences provided information about the economic and social impact of the earthquake. VEDC estimates that over 6,000 businesses were served by these efforts and that these services helped save almost 8,000 jobs in the San Fernando Valley. Continuing programs provided counseling and assistance with applying for financial assistance to hundreds of businesses for more than 5 years after the earthquake.

The potential value of this type of technical assistance is illustrated by an example of a Northridge business that did not receive it. A well-established fish market outside of the San Fernando Valley reopened after the earthquake with the intention of resuming its formerly successful business of selling the same inventory that it sold before the disaster. However, as a result of the earthquake, the area’s customer base had changed significantly and the new population did not purchase the market’s merchandise.
Despite spending his life savings to restore the business, the owner suffered considerable losses and eventually was forced to close the fish market after the lease expired.

Since major disasters can bring significant change to business environments, communities may look for ways to help retain some existing businesses because widespread relocation can hinder recovery. In an effort to minimize relocations after the Red River flood, the city of Grand Forks created incentives to encourage businesses to remain in the community, using funds from the Department of Housing and Urban Development’s Community Development Block Grant program and the Department of Commerce’s Economic Development Administration. Grand Forks developed a program that provided $1.75 million in loans to assist businesses that suffered physical damage in the flood. This program offered 15-year loans with no interest or payments required for the first 5 years of the loan. In addition, businesses that continued to operate within the city at the end of 3 years had 40 percent of the loan’s principal forgiven. A Grand Forks official said that over 70 percent of the businesses that received the loan stayed in the community for at least 3 years. This official also estimated that over 40 percent of the businesses would have closed without the loan program.

The city of Santa Cruz also took steps to minimize the relocation of businesses from its downtown shopping district, which also helped to maintain a customer base for the community. Within weeks of the Loma Prieta earthquake, the city worked together with community groups to construct seven large aluminum and fabric pavilions where local businesses that suffered physical damage temporarily relocated. These pavilions, located in parking areas 1 block behind the main commercial area, were leased to businesses displaced by the earthquake.
Over 40 retail stores, including bookstores, cafes, and hardware stores, operated out of the pavilions for up to 3 years while storefronts were rebuilt (see fig. 5). City officials stated that these pavilions helped to mitigate the impact of the earthquake on small businesses by enabling them to continue operations and thereby maintain their customer base. In contrast, officials near Santa Cruz in the city of Watsonville did not create such temporary locations after the Loma Prieta earthquake, and as a result, businesses moved out of the downtown area to a newly completed shopping center on the outskirts of the city. With the relocation of these businesses, some consumers stopped shopping in the remaining stores in the downtown area. A senior Watsonville official told us that these business relocations continue to hamper the recovery of the downtown district almost two decades after the earthquake.

The federal government has used tax incentives to stimulate business recovery after major disasters. These incentives provide businesses with financial resources for recovery that may otherwise not be available. Certain tax incentives are open-ended, meaning that any individual or business that meets specified federal requirements may claim them. States allocate other tax incentives to selected businesses, projects, or local governments and ensure that allocations do not exceed limits set for each state. For those tax incentives where the states have primary allocation responsibility, an opportunity exists for states to allocate the incentives in a manner consistent with their communities’ recovery goals. Midwest and other states may find value in considering the experiences of communities recovering from past disasters when developing their own approach to allocating these incentives. The Congress created tax incentives after the 2005 hurricanes through the Gulf Opportunity Zone Act of 2005 (GO Zone Act) in part to promote business recovery.
Following those hurricanes, affected state governments were responsible for allocating four tax incentives, including a $14.9 billion tax-exempt private activity bond authority to assist business recovery. These bonds allowed businesses to borrow funds at lower interest rates than would have otherwise been available because investors purchasing the bonds are not required to pay taxes on the interest they earn on the bonds. The Gulf Coast states exercising this authority are using the tax-exempt private activity bonds for a wide range of purposes to support different businesses, including manufacturing facilities, utilities, medical offices, mortgage companies, hotels, and retail facilities. Under the GO Zone Act, authorized states have established processes and selected which projects were to receive these bond allocations up to each state’s allocation authority limit. These states generally used a first-come, first-served basis for allocating the rights to issue tax-exempt private activity bonds under the GO Zone Act and did not consistently target the bond authority to assist recovery in the most damaged areas at the beginning of the program. Officials in Louisiana and Mississippi involved in allocating this authority acknowledged that the first-come, first-served approach made it difficult for applicants in some of the most damaged areas to make use of the bond provision immediately following the 2005 hurricanes. Counties and parishes in the most damaged coastal areas of Louisiana and Mississippi faced challenges dealing with the immediate aftermath of the hurricanes and could not focus on applying for this authority. Louisiana recently set aside a portion of its remaining allocation authority for the most damaged parishes.

This July, legislation modeled after the GO Zone Act was introduced in Congress that, among other tax incentives, would provide private activity bond allocation authority to certain Midwest states to help the victims of this year’s floods.
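The interest rate advantage these bonds confer can be quantified: because bondholders owe no federal income tax on the interest, an investor in marginal tax bracket t earns the same after-tax return from a tax-exempt yield of r × (1 − t) as from a taxable yield of r, so issuers can borrow at the lower rate. A minimal sketch; the rates and tax bracket below are illustrative assumptions, not figures from the GO Zone program:

```python
def tax_exempt_equivalent(taxable_rate, marginal_tax_rate):
    """Tax-exempt yield at which an investor in the given bracket earns
    the same after-tax return as on a taxable bond at taxable_rate."""
    return taxable_rate * (1.0 - marginal_tax_rate)

# Hypothetical: if comparable taxable debt costs 7 percent and the
# marginal bond buyer is in a 35 percent bracket, the issuer can place a
# tax-exempt bond near 4.55 percent.
exempt_rate = tax_exempt_equivalent(0.07, 0.35)

# The issuer's saving on borrowing costs, in rate terms.
savings = 0.07 - exempt_rate
```

Under these assumed figures the subsidy is worth roughly 2.45 percentage points per year on every dollar borrowed, which is why the allocation of a capped bond authority is consequential for which projects get financed.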
Under the proposed legislation, similar to the GO Zone Act, affected states would also have the authority to allocate additional low-income housing tax credits for rental housing and issue tax credit bonds for temporary debt relief, among other provisions. The Gulf Coast states’ first-come, first-served allocation process meant, according to some officials we interviewed, that some projects that would have been viable without tax-exempt private activity bond financing received tax-exempt private activity bond allocations. Such allocations may not have fully supported the long-term recovery goals of that region. This may be particularly relevant to Midwest states given that the proposed legislation contains provisions related to tax-exempt private activity bonds similar to those authorized by the GO Zone Act of 2005.

The influx of federal financial assistance available to victims after a major disaster provides increased opportunities for fraud, waste, and abuse. Disaster victims are at risk, as are the public funds supporting government disaster programs. Specifically, many disaster victims hire contractors to repair or rebuild their homes using financial assistance from the government. These residents are potential targets for fraud by unscrupulous contractors. In addition, government programs are also vulnerable: the need to quickly provide assistance to disaster victims puts assistance programs at risk from fraudulent applicants trying to obtain benefits that they are not entitled to receive. We identified two actions that state and local governments can take after major disasters to combat fraud, waste, and abuse.

Communities are often faced with the problem of contractor fraud after major disasters as large numbers of residents look to hire private firms to repair or rebuild their homes and businesses.
For example, after Hurricane Andrew in 1992, over 7,000 homeowners filed formal complaints of contractor fraud with Miami-Dade County’s Construction Fraud Task Force from August 1993 through March 1995. An official from the Miami-Dade Office of the State Attorney reported that it had successfully prosecuted more than 300 felony cases and over 290 misdemeanor cases, resulting in the restitution of more than $2.6 million to homeowners by October 1996. Other complaints that were not criminal in nature resulted in substantial administrative fines and additional restitution. More recently, FEMA and Midwest states anticipate that fraud will also be a concern after this year’s floods and have issued warnings to residents about the need to be vigilant for potentially fraudulent contractors. To help address this issue, FEMA has issued tips and guidelines to the public about hiring contractors.

To help protect its residents from contractor fraud after the Red River flood, the city of Grand Forks established a required credentialing program for contractors. This included a “one-stop shop” that served as a mandatory clearinghouse for any contractor who wanted to do business with recovering residents. The clearinghouse was staffed by representatives from a range of city and state offices, including the North Dakota Secretary of State, the North Dakota Attorney General, the North Dakota Workers Compensation Bureau, the North Dakota Bureau of Criminal Investigations, and the Grand Forks Department of Administration and Licensing. These staff carried out a variety of functions, including checking that contractors had appropriate licenses, insurance, and no criminal records, in addition to collecting application fees and filing bonding information. After passing these checks and completing all the required applications, contractors were issued photo identification cards, which they were required to carry at all times while working within the city limits.
To inform its citizens about this program, Grand Forks officials conducted press briefings urging residents to check for these photo identifications and to hire only credentialed contractors. In about 2 months, the city issued approximately 500 new contractor licenses and 2,000 contractor identification cards through the one-stop shop. During that same period, officials arrested more than 20 individuals who had outstanding warrants. City and state officials credited this approach with playing a key role in limiting contractor fraud in Grand Forks during the recovery from the Red River flood.

In the wake of this year’s flooding, the city of Cedar Rapids, Iowa, has created a similar contractor credentialing program modeled after Grand Forks’ one-stop shop, in an effort to minimize instances of contractor fraud. Cedar Rapids’ program requires contractors to visit a local mall where representatives from the police department and the community development and code enforcement divisions are assembled. There, city officials check contractors’ licenses and insurance policies and conduct criminal background checks. Similar to Grand Forks’ program, contractors who pass these checks are issued photo identification cards. Those who do not obtain identification before working in the area can incur a fine of $100 or face up to 30 days of jail time. As of August 2008, over 900 local and out-of-town contracting companies and 6,200 individual contractors have been credentialed through this program. Twelve people have been arrested as a result of outstanding warrants that were identified through criminal background checks.

Our prior work on FEMA’s Individuals and Households Program payments and the Department of Homeland Security’s purchase card program shows that fraud, waste, and abuse related to disaster assistance in the wake of the 2005 Gulf Coast hurricanes are significant.
We have previously estimated improper and potentially fraudulent payments related to the Individuals and Households Program application process to be approximately $1 billion of the first $6 billion provided. In addition, FEMA provided nearly $20 million in duplicate payments to individuals who registered and received assistance twice by using the same Social Security number and address. Similarly, the Hurricane Katrina Fraud Task Force—composed of the Department of Justice’s Criminal Division and Offices of the United States Attorneys; several other federal agencies, including the Federal Bureau of Investigation, Secret Service, and Securities and Exchange Commission; and various representatives of state and local law enforcement—has collaborated to prosecute instances of fraud related to the hurricane. According to the Office of the Federal Coordinator of Gulf Coast Recovery, the efforts of the task force have resulted in indictments in over 890 fraud cases to date.

Because of the role state governments play in distributing and allocating this federal assistance, these known vulnerabilities call for states to establish effective controls to minimize opportunities for individuals to defraud the government. With the need to provide assistance quickly and expedite purchases, programs without effective fraud prevention controls can end up losing millions or potentially billions of dollars to fraud, waste, and abuse. We have previously testified on the need for fraud prevention controls, fraud detection, monitoring adherence to controls throughout the entire program life, collection of improper payments, and aggressive prosecution of individuals committing fraud. These controls are crucial whether dealing with programs to provide housing and other needs assistance or other recovery efforts.
By creating such a fraud protection framework—especially by adopting fraud prevention controls—government programs should not have to choose between the speedy delivery of disaster recovery assistance and effective fraud protection.

While receiving millions of dollars in federal assistance, state and local governments bear the main responsibility for helping communities cope with the destruction left in the wake of major disasters. Now that the wind and storm surge from Hurricanes Ike and Gustav have passed and the Midwest flood waters have subsided, state and local governments face a myriad of decisions regarding the short- and long-term recovery of their communities. We have seen that actions taken shortly after a major disaster and during the early stages of the recovery process can have a significant impact on the success of a community’s long-term recovery. Accordingly, this is a critical time for communities affected by these major disasters. Insights drawn from state and local governments that have experienced previous major disasters may provide a valuable opportunity for officials to anticipate challenges and adopt appropriate strategies and approaches early on in the recovery process.

There is no one right way for state and local governments to manage recovery from a major disaster, nor is there a recipe of techniques that fits all situations. While many of the practices we describe in this report were tailored to the specific needs and conditions of a particular disaster, taken together, they can provide state and local officials with a set of tools and approaches to consider as they move forward in the process of recovering from major disasters.

We provided a draft of this report to the Federal Coordinator of Gulf Coast Recovery in the Department of Homeland Security.
In addition, we provided drafts of the relevant sections of this report to officials involved in the particular practices we describe, as well as experts in disaster recovery. They generally agreed with the contents of this report. We have incorporated their technical comments as appropriate. We are sending copies of this report to other interested congressional committees, the Secretary of Homeland Security, the FEMA Administrator, and state and local officials affected by Hurricanes Ike and Gustav as well as the Midwest floods. We will make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Gulf Coast Rebuilding: Observations on Federal Financial Implications. GAO-07-1079T. Washington, D.C.: August 2, 2007.
Preliminary Information on Rebuilding Efforts in the Gulf Coast. GAO-07-809R. Washington, D.C.: June 29, 2007.
Gulf Coast Rebuilding: Preliminary Observations on Progress to Date and Challenges for the Future. GAO-07-574T. Washington, D.C.: April 12, 2007.
Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006.
Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006.
Hurricane Katrina: Providing Oversight of the Nation’s Preparedness, Response, and Recovery Activities. GAO-05-1053T. Washington, D.C.: September 28, 2005.
Department of Agriculture, Farm Service Agency: 2005 Section 32 Hurricane Disaster Programs; 2006 Livestock Assistance Grant Program. GAO-07-715R. Washington, D.C.: April 16, 2007.
Department of Agriculture, Commodity Credit Corporation: 2006 Emergency Agricultural Disaster Assistance Programs. GAO-07-511R. Washington, D.C.: April 16, 2007.
Small Business Contracting: Observations from Reviews of Contracting and Advocacy Activities of Federal Agencies. GAO-07-1255T. Washington, D.C.: September 26, 2007.
Hurricane Katrina: Agency Contracting Data Should Be More Complete Regarding Subcontracting Opportunities for Small Business. GAO-07-698T. Washington, D.C.: April 12, 2007.
Hurricane Katrina: Agency Contracting Data Should Be More Complete Regarding Subcontracting Opportunities for Small Businesses. GAO-07-205. Washington, D.C.: March 1, 2007.
Hurricane Katrina: Improving Federal Contracting Practices in Disaster Recovery Operations. GAO-06-714T. Washington, D.C.: May 4, 2006.
Hurricane Katrina: Army Corps of Engineers Contract for Mississippi Classrooms. GAO-06-454. Washington, D.C.: May 1, 2006.
Hurricane Katrina: Planning for and Management of Federal Disaster Recovery Contracts. GAO-06-622T. Washington, D.C.: April 10, 2006.
Hurricanes Katrina and Rita: Preliminary Observations on Contracting for Response and Recovery Efforts. GAO-06-246T. Washington, D.C.: November 8, 2005.
Hurricanes Katrina and Rita: Contracting for Response and Recovery Efforts. GAO-06-235T. Washington, D.C.: November 2, 2005.
Hurricane Katrina: Ineffective FEMA Oversight of Housing Maintenance Contracts in Mississippi Resulted in Millions of Dollars of Waste and Potential Fraud. GAO-08-106. Washington, D.C.: November 16, 2007.
Hurricanes Katrina and Rita Disaster Relief: Continued Findings of Fraud, Waste, and Abuse. GAO-07-300. Washington, D.C.: March 15, 2007.
Hurricanes Katrina and Rita Disaster Relief: Prevention Is the Key to Minimizing Fraud, Waste, and Abuse in Recovery Efforts. GAO-07-418T. Washington, D.C.: January 29, 2007.
Response to a post hearing question related to GAO’s December 6, 2006 testimony on continued findings of fraud, waste, and abuse associated with Hurricanes Katrina and Rita relief efforts. GAO-07-363R. Washington, D.C.: January 12, 2007.
Hurricanes Katrina and Rita Disaster Relief: Continued Findings of Fraud, Waste, and Abuse. GAO-07-252T. Washington, D.C.: December 6, 2006.
Purchase Cards: Control Weaknesses Leave DHS Highly Vulnerable to Fraudulent, Improper, and Abusive Activity. GAO-06-1117. Washington, D.C.: September 28, 2006.
Hurricanes Katrina and Rita: Unprecedented Challenges Exposed the Individuals and Households Program to Fraud and Abuse; Actions Needed to Reduce Such Problems in Future. GAO-06-1013. Washington, D.C.: September 27, 2006.
Disaster Relief: Governmentwide Framework Needed to Collect and Consolidate Information to Report on Billions in Federal Funding for the 2005 Gulf Coast Hurricanes. GAO-06-834. Washington, D.C.: September 5, 2006.
Individual Disaster Assistance Programs: Framework for Fraud Prevention, Detection, and Prosecution. GAO-06-954T. Washington, D.C.: July 12, 2006.
Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA’s Control Weaknesses Exposed the Government to Significant Fraud and Abuse. GAO-06-655. Washington, D.C.: June 16, 2006.
Hurricanes Katrina and Rita Disaster Relief: Improper and Potentially Fraudulent Individual Assistance Payments Estimated to Be Between $600 Million and $1.4 Billion. GAO-06-844T. Washington, D.C.: June 14, 2006.
Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA’s Control Weaknesses Exposed the Government to Significant Fraud and Abuse. GAO-06-403T. Washington, D.C.: February 13, 2006.
Disaster Housing: Implementation of FEMA’s Alternative Housing Pilot Program Provides Lessons for Improving Future Competitions. GAO-07-1143R. Washington, D.C.: August 31, 2007.
Disaster Assistance: Better Planning Needed for Housing Victims of Catastrophic Disasters. GAO-07-88. Washington, D.C.: February 28, 2007.
Hurricane Katrina: Continuing Debris Removal and Disposal Issues. GAO-08-985R. Washington, D.C.: August 25, 2008.
Hurricane Katrina: Trends in the Operating Results of Five Hospitals in New Orleans before and after Hurricane Katrina. GAO-08-681R. Washington, D.C.: July 17, 2008.
Hurricane Katrina: EPA’s Current and Future Environmental Protection Efforts Could Be Enhanced by Addressing Issues and Challenges Faced on the Gulf Coast. GAO-07-651. Washington, D.C.: June 25, 2007.
Hurricane Katrina: Allocation and Use of $2 Billion for Medicaid and Other Health Care Needs. GAO-07-67. Washington, D.C.: February 28, 2007.
Hurricanes Katrina and Rita: Federal Actions Could Enhance Preparedness of Certain State-Administered Federal Support Programs. GAO-07-219. Washington, D.C.: February 7, 2007.
Hurricane Katrina: Status of Hospital Inpatient and Emergency Departments in the Greater New Orleans Area. GAO-06-1003. Washington, D.C.: September 29, 2006.
Child Welfare: Federal Action Needed to Ensure States Have Plans to Safeguard Children in the Child Welfare System Displaced by Disasters. GAO-06-944. Washington, D.C.: July 28, 2006.
Lessons Learned for Protecting and Educating Children after the Gulf Coast Hurricanes. GAO-06-680R. Washington, D.C.: May 11, 2006.
Hurricane Katrina: Status of the Health Care System in New Orleans and Difficult Decisions Related to Efforts to Rebuild It Approximately 6 Months After Hurricane Katrina. GAO-06-576R. Washington, D.C.: March 28, 2006.
Army Corps of Engineers: Known Performance Issues with New Orleans Drainage Canal Pumps Have Been Addressed, but Guidance on Future Contracts Is Needed. GAO-08-288. Washington, D.C.: December 31, 2007.
U.S. Army Corps of Engineers’ Procurement of Pumping Systems for the New Orleans Drainage Canals. GAO-07-908R. Washington, D.C.: May 23, 2007.
Hurricane Katrina: Strategic Planning Needed to Guide Future Enhancements Beyond Interim Levee Repairs. GAO-06-934. Washington, D.C.: September 6, 2006.
Small Business Administration: Response to the Gulf Coast Hurricanes Highlights Need for Enhanced Disaster Preparedness. GAO-07-484T. Washington, D.C.: February 14, 2007.
Small Business Administration: Additional Steps Needed to Enhance Agency Preparedness for Future Disasters. GAO-07-114. Washington, D.C.: February 14, 2007.
Small Business Administration: Actions Needed to Provide More Timely Disaster Assistance. GAO-06-860. Washington, D.C.: July 28, 2006.
Gulf Opportunity Zone: States Are Allocating Federal Tax Incentives to Finance Low-Income Housing and a Wide Range of Private Facilities. GAO-08-913. Washington, D.C.: July 16, 2008.
Tax Compliance: Some Hurricanes Katrina and Rita Disaster Assistance Recipients Have Unpaid Federal Taxes. GAO-08-101R. Washington, D.C.: November 16, 2007.
Disaster Assistance: Guidance Needed for FEMA’s ‘Fast Track’ Housing Assistance Process. RCED-98-1. Washington, D.C.: October 17, 1997.
Disaster Assistance: Improvements Needed in Determining Eligibility for Public Assistance. RCED-96-113. Washington, D.C.: May 23, 1996.
Emergency Relief: Status of the Replacement of the Cypress Viaduct. RCED-96-136. Washington, D.C.: May 6, 1996.
Disaster Assistance: Information on Expenditures and Proposals to Improve Effectiveness and Reduce Future Costs. T-RCED-95-140. Washington, D.C.: March 16, 1995.
GAO Work on Disaster Assistance. RCED-94-293R. Washington, D.C.: August 31, 1994.
Los Angeles Earthquake: Opinions of Officials on Federal Impediments to Rebuilding. RCED-94-193. Washington, D.C.: June 17, 1994.
Earthquake Recovery: Staffing and Other Improvements Made Following Loma Prieta Earthquake. RCED-92-141. Washington, D.C.: July 30, 1992.
Transportation Infrastructure: The Nation’s Highway Bridges Remain at Risk From Earthquakes. RCED-92-59. Washington, D.C.: January 23, 1992.
Loma Prieta Earthquake: Collapse of the Bay Bridge and the Cypress Viaduct. RCED-90-177. Washington, D.C.: June 19, 1990. Disaster Assistance: Program Changes Expedited Delivery of Individual and Family Grants. RCED-89-73. Washington, D.C.: April 4, 1989. National Flood Insurance Program: Financial Challenges Underscore Need for Improved Oversight of Mitigation Programs and Key Contracts. GAO-08-437. Washington, D.C.: June 16, 2008. Natural Catastrophe Insurance: Analysis of a Proposed Combined Federal Flood and Wind Insurance Program. GAO-08-504. Washington, D.C.: April 25, 2008. National Flood Insurance Program: Greater Transparency and Oversight of Wind and Flood Damage Determinations Are Needed. GAO-08-28. Washington, D.C.: December 28, 2007. Natural Disasters: Public Policy Options for Changing the Federal Role in Natural Catastrophe Insurance. GAO-08-7. Washington, D.C.: November 26, 2007. Federal Emergency Management Agency: Ongoing Challenges Facing the National Flood Insurance Program. GAO-08-118T. Washington, D.C.: October 2, 2007. National Flood Insurance Program: FEMA’s Management and Oversight of Payments for Insurance Company Services Should Be Improved. GAO-07-1078. Washington, D.C.: September 5, 2007. National Flood Insurance Program: Preliminary Views on FEMA’s Ability to Ensure Accurate Payments on Hurricane-Damaged Properties. GAO-07-991T. Washington, D.C.: June 12, 2007. National Flood Insurance Program: New Processes Aided Hurricane Katrina Claims Handling, but FEMA’s Oversight Should Be Improved. GAO-07-169. Washington, D.C.: December 15, 2006. Federal Emergency Management Agency: Challenges for the National Flood Insurance Program. GAO-06-335T. Washington, D.C.: January 25, 2006. Federal Emergency Management Agency: Challenges Facing the National Flood Insurance Program. GAO-06-174T. Washington, D.C.: October 18, 2005. Catastrophe Risk: U.S. and European Approaches to Insure Natural Catastrophe and Terrorism Risks. GAO-05-199. 
Washington, D.C.: February 28, 2005. Catastrophe Insurance Risks: The Role of Risk-Linked Securities. GAO-03-195T. Washington, D.C.: October 8, 2002. Catastrophe Insurance Risks: The Role of Risk-Linked Securities and Factors Affecting Their Use. GAO-02-941. Washington, D.C.: September 24, 2002. Federal Disaster Insurance. GGD-95-20R. Washington, D.C.: November 7, 1994. Federal Disaster Insurance: Goals Are Good, But Insurance Programs Would Expose the Federal Government to Large Potential Losses. T-GGD-94-153. Washington, D.C.: May 26, 1994. In addition to the individual named above, Peter Del Toro, Assistant Director; Patrick Breiding; Michael Brostek; Keya Chateauneuf; Thomas Gilbert; Shirley Hwang; Gregory Kutz; Donna Miller; John Mingus; MaryLynn Sergent; and Diana Zinkl made key contributions to this report.
This month, Hurricanes Ike and Gustav struck the Gulf Coast producing widespread damage and leading to federal major disaster declarations. Earlier this year, heavy flooding resulted in similar declarations in seven Midwest states. In response, federal agencies have provided millions of dollars in assistance to help with short- and long-term recovery. State and local governments bear the primary responsibility for recovery and have a great stake in its success. Experiences from past disasters may help them better prepare for the challenges of managing and implementing the complexities of disaster recovery. GAO was asked to identify insights from past disasters and share them with state and local officials undertaking recovery activities. GAO reviewed six past disasters-- the Loma Prieta earthquake in northern California (1989), Hurricane Andrew in south Florida (1992), the Northridge earthquake in Los Angeles, California (1994), the Kobe earthquake in Japan (1995), the Grand Forks/Red River flood in North Dakota and Minnesota (1997), and Hurricanes Katrina and Rita in the Gulf Coast (2005). GAO interviewed officials involved in the recovery from these disasters and experts on disaster recovery. GAO also reviewed relevant legislation, policies, and its previous work. While the federal government provides significant financial assistance after major disasters, state and local governments play the lead role in disaster recovery. As affected jurisdictions recover from the recent hurricanes and floods, experiences from past disasters can provide insights into potential good practices. Drawing on experiences from six major disasters that occurred from 1989 to 2005, GAO identified the following selected insights: (1) Create a clear, implementable, and timely recovery plan. Effective recovery plans provide a road map for recovery. 
For example, within 6 months of the 1995 earthquake in Japan, the city of Kobe created a recovery plan that identified detailed goals that facilitated coordination among recovery stakeholders. The plan also helped Kobe prioritize and fund recovery projects, in addition to establishing a basis for subsequent governmental evaluations of the recovery's progress. (2) Build state and local capacity for recovery. State and local governments need certain capacities to effectively make use of federal assistance, including having sufficient financial resources and technical know-how. State and local governments are often required to match a portion of the federal disaster assistance they receive. Loans provided one way for localities to enhance their financial capacity. For example, after the Red River flood, the state-owned Bank of North Dakota extended the city of Grand Forks a $44 million loan, which the city used to match funding from federal disaster programs and begin recovery projects. (3) Implement strategies for business recovery. Business recovery is a key element of a community's recovery. Small businesses can be especially vulnerable to major disasters because they often lack resources to sustain financial losses. Federal, state, and local governments developed strategies to help businesses remain in the community, adapt to changed market conditions, and borrow funds at lower interest rates. For example, after the Loma Prieta earthquake, the city of Santa Cruz erected large pavilions near the main shopping street. These structures enabled more than 40 local businesses to operate while their storefronts were repaired. As a result, shoppers continued to frequent the downtown area, thereby maintaining a customer base for impacted businesses. (4) Adopt a comprehensive approach toward combating fraud, waste, and abuse. The influx of financial assistance after a major disaster provides increased opportunities for fraud, waste, and abuse. 
Looking for ways to combat such activities before, during, and after a disaster can help states and localities protect residents from contractor fraud as well as safeguard the financial assistance they allocate to victims. For example, to reduce contractor fraud after the Red River flood, the city of Grand Forks established a credentialing program that issued photo identification to contractors who passed licensing and criminal checks.
You are an expert at summarizing long articles. Proceed to summarize the following text: Eradication of infectious diseases involves reducing worldwide incidence to zero, thereby obviating the need for further control measures. Elimination of infectious diseases involves reducing morbidity to a level at which they are no longer considered major public health problems. Elimination still requires a basic level of control and surveillance. Global disease eradication and elimination campaigns are initiated, primarily by WHO, to concentrate and mobilize resources from both affected and donor countries. WHO provides recommendations for disease eradication and elimination to its governing body, the World Health Assembly, based on two general criteria—scientific feasibility and the level of political support by endemic and donor countries. Formal campaigns were initiated for dracunculiasis and leprosy in 1991, and for polio and lymphatic filariasis in 1988 and 1997, respectively. Regional or subregional campaigns are also underway against measles, onchocerciasis, and Chagas’ disease. Disease eradication and elimination efforts are normally implemented by national governments of the affected countries. Developing countries typically receive assistance from bilateral and multilateral donors, nongovernmental organizations, and the private sector. Developing costs and time frames for these efforts is difficult due to challenges in gathering and verifying data from countries with minimal health infrastructure. Unpredictable and unstable country conditions, such as civil unrest, further complicate efforts to project how much these efforts will cost and how much time is needed. The appendix at the end of this statement provides a breakdown of costs and time frames for eradicating or eliminating each disease. WHO’s cost and time frame estimates, with the exception of measles, addressed all five of the relevant factors. 
However, the completeness of the data underlying the estimates varies by disease. Generally, estimates for those diseases with long-standing eradication or elimination campaigns are more complete, as the underlying data are based on actual experience in endemic countries. For the other diseases, WHO is still gathering data and refining its assumptions. Estimates for diseases with target dates of 5 years or longer are more speculative due to incomplete data and the difficulty in predicting sustained commitment and stable country conditions. We will briefly discuss the estimates for each disease and the barriers to be overcome. WHO’s cost estimate of $40 million for eradicating dracunculiasis included data on each of the five key factors and appears to be sound. Community-based programs to control this disease have been underway since 1980. Continuing civil unrest in some endemic areas of Africa precluded meeting the original 1995 target date for eradication. WHO now expects that all countries except Nigeria and Sudan will be free of dracunculiasis by 2005 at the latest, assuming safe access and appropriate funding. WHO’s cost estimate includes certification costs that will continue until 2011. The Centers for Disease Control and Prevention (CDC) and officials from the Carter Center believe that some country-level eradication goals may be met even sooner than WHO estimated. The main barrier to eradication is ongoing civil strife in the endemic region. Experts also point to the need for continued national and donor support. WHO’s cost estimate of $1.6 billion for eradicating polio is generally sound. It includes well-developed data on all five factors based on experience in controlling the disease. Many countries began polio vaccinations in the 1970s and 1980s. Most experts agreed that global interruption of the wild polio virus will occur by 2002 or shortly thereafter. 
Global certification is to take place about 3 years after the last case is reported—probably around 2005. However, some experts have raised concerns about the ability of less developed countries to maintain the required level of polio vaccinations and surveillance until eradication is achieved. In addition, WHO is concerned about the ability of some countries to detect and report acute flaccid paralysis, a key component of polio surveillance. According to WHO, unless sufficient resources are mobilized to improve detection capability, eradication cannot be certified. WHO’s cost estimate of $225 million for eliminating leprosy includes well-developed data on all key elements and appears to be sound. The current elimination strategy is based on the multidrug therapy begun in 1981. Endemic countries have made great progress toward eliminating leprosy since that time, but some challenges remain. WHO noted that it is possible that some countries with concentrated pockets of leprosy might need to continue campaigns beyond the target year 2000 to reach the global leprosy elimination target of less than 1 case per 10,000 people. In addition, ongoing civil strife in endemic areas and difficult country conditions may preclude meeting all targets. Also, since leprosy patients are often ostracized and hidden, case identification is difficult. However, experts generally agreed that WHO’s cost and time frame estimates appeared reasonable. WHO’s estimate of $4.9 billion for global measles eradication by 2010 is speculative. While vaccine costs are well known, we found several areas in which the current estimates may be low or based on incomplete data. Essentially, data are incomplete regarding the number of children to be vaccinated, administrative costs, the number of mass campaigns that may be needed, and the costs of surveillance in less developed countries. Finally, the estimate does not include certification costs. 
WHO officials noted that they used information from their previous experience with polio eradication in developing the measles estimate. The vaccine administration and surveillance costs for polio are adjusted upward to account for difficulties in administering an injectable rather than an oral vaccine. Many international health experts believe that measles is the next candidate for a formal global eradication effort, pointing to some successes in controlling measles in the Americas as well as support from developing countries where measles is a major cause of mortality among children. However, experts also point out that there are some challenges to eradicating measles by 2010. Measles is highly contagious, requiring higher routine vaccination rates than smallpox and polio. In addition, outbreaks can occur even in areas of high routine vaccination coverage. Furthermore, costly mass campaigns are necessary to catch those still susceptible after routine vaccination because the vaccine is not 100-percent effective. Finally, some industrialized countries do not perceive measles to be a major public health problem and have not initiated measles elimination efforts. More than half of the estimated cost of measles eradication is expected to be incurred by developed countries. WHO and CDC estimate the cost of eradicating measles in less developed countries at up to $1.8 billion. WHO’s estimate of $143 million for eliminating onchocerciasis is speculative. It incorporates data on all key cost elements, but data on the size of the target population are incomplete. The control programs for West Africa and Latin America have been ongoing for a period of time and are likely to reach their elimination targets within or near the costs and target dates estimated by WHO. However, WHO is still mapping disease prevalence for the 19 African countries in the most recent control program. 
WHO’s earlier estimates may have underestimated the population eligible for treatment upon which the cost and time frame estimates were based. For example, the latest estimate for those to be treated in this area is 42 million, compared to the original estimate of 35 million. Also, WHO does not yet have a reliable estimate on the number to be treated in the Democratic Republic of the Congo (formerly Zaire). Although WHO included data on all cost factors, the $391 million estimated for eliminating Chagas’ disease is speculative for two reasons—not all countries have submitted estimates, and countries that are targeted for elimination of Chagas’ disease by 2010 only submitted estimates through 2005. The first regional program began in the southern portion of South America in 1991. Data from this region are more complete, and the program appears to be on track for completion by 2005. However, the efforts in the Central American and Andean countries only began last year and are targeted for completion by 2010. Costs and time frames in these countries are less certain because three countries have not submitted cost estimates or prevalence and incidence data, and all countries submitted cost data only through 2005. The $228 million estimated for eliminating lymphatic filariasis is very speculative. While all cost factors were addressed in the estimates, the data are very preliminary. WHO has limited historical data on costs because formal campaigns have only recently begun in some of the 73 countries in which lymphatic filariasis is known to be endemic. Also, WHO has not yet completed its assessments to establish the number of people to be treated in endemic countries and to determine whether there are other endemic countries. The United States currently spends about $391 million a year on these diseases. 
This includes about $300 million on polio and measles prevention programs and leprosy treatment in the United States and about another $91 million abroad for all the diseases under discussion except leprosy. Most of this amount would be saved if eradication and elimination goals were met and efforts to combat the diseases were ceased or reduced. The overall savings to the United States if polio were eradicated are estimated to be at least $304 million a year. This includes about $230 million in public and private expenditures—including administration—for controlling polio within U.S. borders and about $74 million for the global eradication efforts. CDC estimates that an additional $20 million will be spent in the United States each year due to a 1996 CDC recommendation to administer two doses of the more expensive injectable vaccine before the two doses of oral vaccine. The overall savings to the United States as a result of eradicating measles are estimated at about $61.7 million a year, including about $50 million for domestic vaccine costs and about $11.7 million for global measles efforts. The $50 million only includes the cost of the vaccine and not administration expenses. Immunization against measles is included in the vaccine for mumps and rubella, and the United States would continue administering the mumps and rubella vaccine even if measles were eradicated. Additional savings would be realized from preventing periodic measles epidemics in the United States. CDC estimates that the last measles epidemic of 1989-1991 cost $150 million. The United States spends about $25 million a year for the other five diseases. The U.S. Department of Health and Human Services spends about $20 million a year to treat a small number of leprosy patients in the United States. The U.S. Agency for International Development (USAID) funds the dracunculiasis effort at $500,000 a year and the onchocerciasis control programs at $3.5 million a year. 
CDC spends more than $1 million for overseas efforts against dracunculiasis, Chagas’ disease, and lymphatic filariasis. The United States does not currently track domestic costs related to Chagas’ disease. However, U.S. blood banks may begin screening donated blood for the disease due to a significant number of infected Latin American immigrants in certain areas. An American Red Cross official estimated that this would cost about $25 million a year. International public health experts identified several diseases that pose health threats to the United States and that are technically possible to eradicate with existing vaccines: rubella, mumps, hepatitis B, and Hib. CDC suggested rubella and mumps could be considered as part of the measles eradication effort, since vaccinations against all three are often administered in one trivalent shot. CDC estimated that the United States spends about $255.5 million a year in administering this vaccination. Rubella is considered a significant health burden in the form of birth defects and is being discussed as an eradication initiative for the Americas. However, health experts generally believe that the costs to eradicate mumps would be difficult to justify because the global burden is considered low. The primary challenges in eradicating rubella and mumps are diagnostic difficulties and the additional costs that would be incurred. Hepatitis B is considered a possible candidate because the vaccine is effective and relatively inexpensive, and a good diagnostic tool is available. Hepatitis B is viewed as a major public health threat, causing almost 1.2 million deaths per year, usually from liver cancer or chronic liver disease. CDC estimates that the U.S. public and private sectors spend from $308 million to $383 million a year for hepatitis B vaccines alone. The major barrier to an eradication initiative is that some people are chronic carriers and would have to die before the disease could be considered eradicated. 
Hib is a bacterial infection that is the most common cause of childhood meningitis. About 400,000 to 700,000 children in developing countries die each year from the disease. U.S. public and private sectors spend about $162 million a year on Hib vaccines. According to CDC, Hib has potential for eradication, but more needs to be known about the vaccine before this disease could be an eradication candidate. WHO told us that rubella, hepatitis B, and Hib could be eventual candidates for eradication due to their associated public health burdens and the success in controlling these diseases in some parts of the world. However, they noted that, due to the high costs associated with eradication efforts, it is important to limit the number of ongoing efforts, and they do not support adding campaigns at this time. As the first and only disease to be eradicated through human intervention, smallpox is used as evidence that disease eradication is technically feasible. According to some experts, the smallpox effort yielded lessons that have since been applied to other efforts, such as the role of surveillance and the ability to garner resources for massive campaigns. It also showed that eradication can be cost-effective. Using 1967 estimated smallpox costs as a baseline measure and adjusting for annual birth rates, we estimated the cumulative present value global savings in 1997 dollars for the period 1978-1997 at $168 billion. For the United States, cumulative savings from smallpox eradication are estimated at almost $17 billion. The United States spent about $610 million in 1997 dollars for domestic smallpox control in 1968 and about $130 million in 1997 dollars during 1968-1977 on the overseas eradication effort. We estimated the annual real rate of return for the United States at about 46 percent a year since smallpox was eradicated. Smallpox had characteristics that experts consider desirable for eradication. 
The disease was easily diagnosed, and all infection resulted in visible symptoms. The vaccine was effective in only one dose, stable in heat, and inexpensive. Polio and measles share many of the desirable eradication characteristics of smallpox. Both diseases are caused by viral agents, are found only in humans, and have effective interventions available that provide long-lasting immunity. However, certain differences exist that may limit the usefulness of smallpox as a model for other eradication efforts. Smallpox was less infectious than either polio or measles and required less immunization coverage. Polio and measles require mass campaigns in addition to routine coverage to interrupt virus transmission. Polio and measles are also difficult to diagnose without laboratory confirmation. The vast majority of polio infections show no symptoms, and the typical paralytic manifestations of polio can be due to other causes. Dracunculiasis differs from smallpox in that it is a parasitic disease and is not vaccine preventable. However, like smallpox, it is vulnerable to eradication because the interventions are inexpensive and effective, and the infection is easily diagnosed. Mr. Chairman, this concludes our prepared remarks. We would be happy to answer any questions you or the Committee members may have. 
Pursuant to a congressional request, GAO discussed the World Health Organization's (WHO) estimates for eradicating or eliminating seven infectious diseases--dracunculiasis, polio, leprosy, measles, onchocerciasis, Chagas' disease, and lymphatic filariasis--worldwide, focusing on: (1) the soundness of WHO's cost and timeframe estimates; (2) U.S. spending related to these diseases in fiscal year 1997 and any potential cost savings to the United States as a result of eradication or elimination; (3) other diseases that international health experts believe pose a risk to Americans and could be eventual candidates for eradication; and (4) U.S. costs and savings from smallpox eradication and whether experts view smallpox eradication as a model for other diseases. GAO noted that: (1) WHO and other experts it contacted generally agreed on five factors necessary to estimate the cost of eradicating or eliminating a disease: (a) product costs; (b) information on disease incidence, prevalence, and the size of the target populations; (c) administrative and delivery costs; (d) disease monitoring and surveillance costs; and (e) primarily for eradication, the costs of certifying that countries are free of the disease; (2) GAO focused its assessment on the accuracy and completeness of the underlying data for these five factors; (3) WHO's estimates and GAO's analysis did not include an assessment of opportunity or indirect costs that may be incurred as a result of eradication campaigns; (4) the soundness of WHO's cost and timeframe estimates varied by disease; (5) generally, the estimates were most sound for those diseases closest to meeting eradication or elimination goals, including dracunculiasis, polio, and leprosy; (6) estimates for these three diseases were based on firm data about target populations and intervention costs from ongoing initiatives; (7) for the other diseases, WHO's estimates are more speculative because underlying data are incomplete or unavailable; (8) WHO officials 
acknowledged this fact and said that estimates are continuously revised as better data become available; (9) the United States spent about $391 million in 1997 to combat these diseases; (10) the United States spent $300 million on polio and measles prevention and on leprosy treatment in this country; (11) about another $91 million went for overseas programs, primarily the polio eradication campaign; (12) savings to the United States from eradicating or eliminating these diseases would result primarily from not having to vaccinate U.S. children against polio and measles; (13) experts GAO contacted identified four other diseases that pose health threats to the United States and could be possible candidates for eradication; (14) WHO told GAO that, while it may be technically possible to eradicate these diseases with existing vaccines, the international community cannot support too many eradication initiatives at one time; (15) the United States has saved almost $17 billion as a result of the eradication of smallpox in 1977; (16) the savings were due to the cessation of vaccinations and related costs of surveillance and treatment; (17) experts generally agreed that the primary lesson from smallpox is that a disease can actually be eradicated; and (18) however, smallpox had unique characteristics that made it particularly vulnerable to eradication and therefore has limitations as a model for current efforts.
You are an expert at summarizing long articles. Proceed to summarize the following text: Because large numbers of Americans lack knowledge about basic personal economics and financial planning, U.S. policymakers and others have focused on financial literacy, i.e., the ability to make informed judgments and to take effective actions regarding the current and future use and management of money. While informed consumers can choose appropriate financial investments, products, and services, those who exercise poor money management and financial decision making can lower their family's standard of living and interfere with their crucial long-term goals. One vehicle for promoting the financial literacy of Americans is the congressionally created Financial Literacy and Education Commission. Created in 2003, the Commission is charged with (1) developing a national strategy to promote financial literacy and education for all Americans; (2) coordinating financial education efforts among federal agencies and among the federal, state, and local governments; nonprofit organizations; and private enterprises; and (3) identifying areas of overlap and duplication among federal financial literacy activities. Since at least the 1980s, the military services have offered PFM programs to help servicemembers address their financial conditions. Among other things, the PFM programs provide financial literacy training to servicemembers, particularly to junior enlisted personnel during their first months in the military. The group-provided financial literacy training is supplemented with other types of financial management assistance, often on a one-on-one basis. For example, servicemembers might obtain one-on-one counseling from staff in their unit or legal assistance attorneys at the installation. 
In May 2003, the Office of the Under Secretary of Defense for Personnel and Readiness, DOD’s policy office for the PFM programs, established its Financial Readiness Campaign, with objectives that include increasing personal readiness by, among other things, (1) increasing financial awareness and abilities and (2) increasing savings and reducing dependence on credit. The campaign attempted to accomplish these objectives largely by providing on-installation PFM program providers with access to national-level programs, products, and support. To minimize financial burdens on servicemembers, DOD has requested and Congress has increased cash compensation for active duty military personnel. For example, the average increases in military basic pay exceeded the average increases in private-sector wages for each of the 5 years prior to when we issued our April 2005 report. Also, in April 2003, Congress increased the family separation allowance from $100 per month to $250 per month and hostile fire/imminent danger pay from $150 per month to $225 per month for eligible deployed servicemembers. The family separation allowance is designed to provide compensation to servicemembers with dependents for the added expenses (e.g., extra childcare costs, automobile maintenance, or home repairs) incurred because of involuntary separations such as deployments in support of contingency operations like Operation Iraqi Freedom. Hostile fire/imminent danger pay provides special pay for “duty subject to hostile fire or imminent danger” and is designed to compensate servicemembers for physical danger. Iraq, Afghanistan, Kuwait, Saudi Arabia, and many other nearby countries have been declared imminent danger zones. In addition to these special pays, some or all income that active duty servicemembers earn in a combat zone is tax free. Data from DOD suggest that the financial conditions for deployed and nondeployed servicemembers and their families were similar. 
However, deployed servicemembers faced problems with the administration of an allowance as well as an inability to communicate with creditors. Additionally, some financial products marketed to servicemembers may negatively affect their financial condition. In a 2003 DOD-wide survey, servicemembers who were deployed for at least 30 days reported similar levels of financial health or problems as those who had not deployed. For example, an analysis of the responses for only junior enlisted personnel showed that 3 percent of the deployed group and 2 percent of the nondeployed group indicated that they were in “over their heads” financially; and 13 percent of the deployed group and 15 percent of the nondeployed group responded that they found it “tough to make ends meet but keeping your head above water” financially. Figure 1 shows estimates of financial conditions for all servicemembers based on their responses to this survey. These responses are consistent with the findings that we obtained in a survey of all PFM program managers and during our 13 site visits. In the survey of PFM program managers, about 21 percent indicated that they believed servicemembers are better off financially after a deployment; about 54 percent indicated that the servicemembers are about the same financially after a deployment; and about 25 percent believed the servicemembers are worse off financially after a deployment. Also, 90 percent of the 232 recently deployed servicemembers surveyed in our focus groups said that their financial situations either improved or remained about the same after a deployment. The 2003 DOD survey also asked servicemembers whether they had experienced three types of negative financial events: pressure by creditors, falling behind in paying bills, and bouncing two or more checks. Again, the findings for deployed and nondeployed servicemembers were similar. 
For example, 19 percent of the deployed group and 17 percent of the nondeployed group said they were pressured by creditors; 21 percent of the deployed group and 17 percent of the nondeployed group said they fell behind in paying bills; and 16 percent of the deployed group and 13 percent of the nondeployed group said they had bounced two or more checks. The special pays and allowances that some servicemembers receive when deployed, particularly to dangerous locations, may be one reason for the similar findings for the deployed and nondeployed groups. Deployment-related special pays and allowances can increase servicemembers’ total cash compensation by hundreds of dollars per month. Moreover, some or all income that servicemembers earn while serving in a combat zone is tax free. Deployed servicemembers experienced problems receiving their family separation allowance promptly and communicating with creditors and families. Regarding family separation allowance, DOD pay data for January 2005 showed that almost 6,000 of 71,000 deployed servicemembers who have dependents did not receive their family separation allowance in a timely manner. The family separation allowance of $250 per month is designed to compensate servicemembers for extra expenses (e.g., childcare costs) that result when they are involuntarily separated from their families. Delays in obtaining this allowance could cause undue hardship for some families faced with such extra expenses. We previously reported similar findings for the administration of family separation allowance to Army Reserve soldiers and recommended that the Secretary of the Army, in conjunction with the DOD Comptroller, clarify and simplify procedures and forms for implementing the family separation allowance entitlement policy. The services had different, sometimes confusing, procedures that servicemembers had to follow to obtain their family separation allowance.
DOD officials suggested other factors to explain why some eligible servicemembers had not received their family separation allowance on a monthly basis. These factors included the possibility that servicemembers were not aware of the benefit, that they had not filed the required eligibility form, or that errors or delays occurred when their unit entered data into the pay system. In response to our recommendation that DOD take steps to correct the delayed payment of this allowance, DOD notified finance offices that they should emphasize the prompt processing of such transactions so that payment for the entitlement would begin within 30 days of deployment. Servicemembers may also experience financial difficulties as a result of communication constraints while deployed. For example, individuals in the focus groups for our April 2005 report suggested that deployed junior enlisted personnel sometimes had less access to the Internet than did senior deployed personnel, making it difficult for the former to keep up with their bills. In addition, some Army servicemembers told us that they (1) could not call stateside toll-free numbers because the numbers were inaccessible from overseas or (2) incurred substantial costs—sometimes $1 per minute—to call stateside creditors. Furthermore, in our March 2004 testimony, we documented some of the problems associated with mail delivery to deployed troops. Failure to avoid or promptly correct financial problems can result in negative consequences for servicemembers. These include increased debt for servicemembers, bad credit histories, and poor performance of their duties when distracted by financial problems. In our April 2005 report, we recommended, and DOD partially concurred, that DOD identify and implement steps to allow deployed servicemembers better communications with creditors. In their comments, DOD cited operational requirements as a reason that communications with creditors may not be appropriate.
In addition, DOD noted that servicemembers should have extended absence plans for their personal finances to ensure that their obligations are covered. Some financial products may also negatively affect servicemembers’ financial conditions. For example, although servicemembers already receive substantial, low-cost government-sponsored life insurance, we found that a small group of companies sold products that combine life insurance with a savings fund. These products promised high returns but included provisions that reduced the likelihood that military purchasers would benefit. These products usually provided a small amount of additional death benefits and had much higher premiums than those for the government insurance. These products also had provisions to use accumulated savings to pay the insurance premiums if the servicemembers stopped making payments. Moreover, servicemembers were being marketed a securities product, known as a mutual fund contractual plan, which features higher up-front sales charges than other mutual fund products and has largely disappeared from the civilian marketplace. For both types of products, the servicemembers who stopped making regular payments in the early years paid higher sales charges and likely received lower returns than if they had invested in other products. Our November 2005 report made recommendations that included asking Congress to consider banning contractual plans and directing regulators to work cooperatively with DOD to develop appropriateness or suitability standards for financial products sold to servicemembers. We also recommended that regulators ensure that products being sold to servicemembers meet existing insurance requirements and that DOD and financial regulators take steps to improve information sharing between them. In response to the concerns over the products being marketed to servicemembers, securities and insurance regulators have begun cooperating with DOD to expand financial literacy.
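A rough calculation illustrates why stopping payments early hurts contractual-plan investors. The 50 percent first-year sales charge sketched below reflects the charge historically permitted for such periodic payment plans; the other load figures are hypothetical, and none are the terms of any actual product.

```python
# Illustrative comparison of sales charges under a mutual fund contractual
# plan versus an ordinary level-load fund, for an investor who stops
# contributing early. Load percentages are hypothetical assumptions chosen
# to illustrate the front-loading effect described in the report.

def invested_after_loads(monthly_payment, months_paid, first_year_load, later_load):
    """Dollars actually invested after sales charges are deducted."""
    invested = 0.0
    for month in range(months_paid):
        load = first_year_load if month < 12 else later_load
        invested += monthly_payment * (1 - load)
    return invested

payment = 100  # monthly contribution (hypothetical)
months = 18    # investor stops paying after 18 months

# Contractual plan: half of the first year's payments go to sales charges.
contractual = invested_after_loads(payment, months, first_year_load=0.50, later_load=0.04)
# Conventional fund: a level 5% load on every payment.
level_load = invested_after_loads(payment, months, first_year_load=0.05, later_load=0.05)

print(f"Contractual plan, invested: ${contractual:,.2f}")
print(f"Level-load fund, invested:  ${level_load:,.2f}")
```

Under these assumptions, the early-terminating investor in the contractual plan has substantially less money actually invested than the same investor in a level-load fund, which is the mechanism behind the lower returns the report describes.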
DOD has taken a number of steps to assist servicemembers with their financial concerns, including providing military-sponsored PFM training, establishing a Financial Readiness Campaign, providing command financial specialists, and using Armed Forces Disciplinary Control Boards. Servicemembers can also access resources available outside of DOD (see fig. 2). However, servicemembers and DOD are not fully utilizing some of this assistance. In addition, DOD does not have an oversight framework to assess the effectiveness of the steps taken to assist servicemembers. All four military services require PFM training for servicemembers, and the timing and location of the training varies by service. The Army begins this training at initial military training, or basic training, where soldiers receive 2 hours of PFM training. The training continues at Advanced Individual Training schools, where soldiers receive an additional 2 hours of training, and at the soldiers’ first duty station, where they are to receive an additional 8 hours of PFM training. In contrast, Navy personnel receive 16 hours of PFM training during Advanced Individual Training, while the Marine Corps and the Air Force begin training servicemembers on financial issues at their first duty stations. Events, such as deployments or permanent changes of station, can trigger additional financial management training for servicemembers. The length of this additional training and the topics covered can vary by installation and command. Unit leadership also may refer a servicemember for financial management training or counseling if the command is aware of an individual’s financial problems (e.g., abusing check-cashing privileges). Despite these policies, some servicemembers have not received the required training, but the extent to which the training is not received is unknown because servicewide totals are not always collected.
The Army, which is the only service that collected installation-level PFM data, estimated that about 82 percent of its junior enlisted soldiers completed PFM training in fiscal year 2003. Some senior Army officers at visited installations acknowledged the need to provide PFM training to junior enlisted servicemembers, but also noted that deployment schedules limited the time available to prepare soldiers for their warfighting mission (e.g., firing a weapon). While some services reported taking steps to improve their monitoring of PFM training completion—an important output—these steps still do not address the larger issue of training outcomes, such as whether PFM training helps servicemembers manage their finances better. DOD’s Financial Readiness Campaign, which was launched in May 2003, supplements PFM programs offered by the individual services through Web-based sources developed with assistance from external organizations. The Under Secretary of Defense for Personnel and Readiness stated that the department initiated the campaign to improve the financial management resources available to servicemembers and their families and to stimulate a culture that values financial health and savings. The campaign allows installation-level providers of PFM programs to access national programs and services developed by federal agencies and nonprofit organizations. The primary tool of the Financial Readiness Campaign has been a Web site designed to assist PFM program managers in developing installation-level campaigns to meet the financial management needs of their local military community. This Web site is linked to the campaign’s 27 partner organizations (e.g., federal agencies, Consumer Federation of America, and service relief/aid societies) that have pledged to support DOD in implementing the Financial Readiness Campaign. DOD’s May 2004 assessment of the campaign noted, however, that installation-level PFM staffs had made minimal use of the campaign’s Web site.
DOD campaign officials stated that it was early in the implementation of campaign efforts and that they had been brainstorming ideas to repackage information given to PFM program managers, as well as servicemembers and their families. At the installation level, the military services provide command financial specialists, who are usually senior enlisted personnel trained by PFM program managers, to assist servicemembers with financial issues. These noncommissioned officers may perform the education and counseling role of the command financial specialist as a collateral or full-time duty. The Navy, Marine Corps, and Army use command financial specialists to provide unit assistance to servicemembers in financial difficulties. The Air Force does not use command financial specialists within the unit, but has the squadron First Sergeant provide first-level counseling. Individual servicemembers who require counseling beyond the capability of the command financial specialists or First Sergeants in the Air Force can see the installation’s PFM program manager or PFM staff. The PFM program manager is a professional staff member designated and trained to organize and execute financial planning and counseling programs for the military community. PFM program managers and staff offer individual financial counseling as well as group classes on financial issues. DOD provides free legal assistance on contracts and other financial documents at installations, but servicemembers do not make full use of this assistance. For example, legal assistance attorneys may review purchase contracts for large items such as homes and cars. In addition, the legal assistance attorneys offer classes on varying financial issues including powers of attorney, wills, and divorces.
However, legal assistance attorneys at the 13 installations we visited for our April 2005 report stated that servicemembers rarely seek their assistance before entering into financial contracts for goods or services such as purchasing cars or lifetime film developing. Instead, according to the attorneys, servicemembers are more likely to seek their assistance after encountering problems. For example, used car dealers offered low interest rates for financing a vehicle, but the contract stated that the interest rate could be converted to a higher rate later if the lender did not approve the loan. Servicemembers were later called to sign a new contract with a higher rate. By that time, some servicemembers found it difficult to terminate the transaction because their trade-in vehicles had been sold. Legal assistance attorneys, as well as other personnel in our interviews and focus groups, noted reasons why servicemembers might not take greater advantage of the free legal assistance before entering into business agreements. They stated that junior enlisted servicemembers who want their purchases or loans immediately may not take the time to visit the attorney’s office for such a review. Additionally, the legal assistance attorneys noted that some servicemembers feared information about their financial problems would get back to the command and limit their career progression. Each service has a relief or aid society designed to provide financial assistance to servicemembers. The Army Emergency Relief Society, Navy-Marine Corps Relief Society, and the Air Force Aid Society are all private, nonprofit organizations. These societies provide counseling and education as well as financial relief through grants or no-interest loans to eligible servicemembers experiencing emergencies. Emergencies include funds needed to attend the funeral of a family member, repair a primary vehicle, or buy food.
For example, in 2003, the Navy-Marine Corps Relief Society provided $26.6 million in interest-free loans and $4.8 million in grants to servicemembers for emergencies. Some servicemembers in our focus groups stated that they would not use grants or no-interest loans from a service society because they take too long, are intrusive because the financial institution or relief/aid society requires in-depth financial information in the loan or grant application, or could be career limiting if the command found out the servicemembers were having financial problems. The Army Emergency Relief Society attempted to address the time and intrusiveness concerns with its test program, Commander’s Referral, for active duty soldiers lacking funds to meet monthly obligations of $500 or less. After the commander approves the loans, the servicemembers can expect to receive funds quickly. However, noncommissioned officers in our individual interviews and focus groups said the program still did not address servicemembers’ fears that revealing financial problems to the command could jeopardize their careers. Servicemembers may choose to use non-DOD resources if they do not want the command to be aware of their financial conditions or they need financial products or support not offered through DOD, the services, or the installation. In such cases, servicemembers may use other financial resources outside of DOD, which are available to the general public. These can include banks or credit unions for competitive rates on home or automobile loans, commercial Web sites for interest rate quotes on other consumer loans, consumer counseling for debt restructuring, and financial planners for advice on issues such as retirement planning. DOD has used Armed Forces Disciplinary Control Boards to help curb predatory lending practices and minimize their effects. 
These boards and the recommendations that they make to an installation commander to place businesses off-limits to servicemembers can be effective tools for avoiding or correcting unfair practices. However, data gathered during some of our site visits to the various installations revealed few times when the boards were used to address predatory lending practices. For example, the board at Fort Drum, New York, had not met in about 4 years, and the board’s director was unaware of two lawsuits filed by the New York Attorney General that involved Fort Drum servicemembers. The Attorney General settled a lawsuit in 2004 on behalf of 177 plaintiffs—most of whom were Fort Drum servicemembers—involving a furniture store that had improperly garnished wages pursuant to unlawful agreements it had required customers to sign at the time of purchase. The Attorney General filed a lawsuit in 2004 involving catalog sales stores. He characterized the stores as payday-lending firms that charged excessive interest rates on loans disguised as payments toward catalog purchases. Some servicemembers and family members at Fort Drum fell prey to this practice. The Attorney General stated that he found it particularly troubling that two of the catalog stores were located near the Fort Drum gate. In contrast to the Fort Drum situations, businesses near two other installations we visited changed their lending practices after boards recommended that commanders place or threaten to place the businesses on off-limits lists. Despite such successes, boards might not be used as a tool for dealing with predatory lenders for a variety of reasons. For example, as a result of high deployments, commanders may minimize some administrative duties, such as convening the boards, to use their personnel for other purposes. In addition, the boards may have little basis to recommend placing or threatening to place businesses on the list if the lenders operate within state laws.
Furthermore, significant effort may be required to put businesses on off-limits lists. While recognizing these limitations, in our April 2005 report we nonetheless recommended that all Armed Forces Disciplinary Control Boards be required to meet twice a year. In responding to our recommendation, DOD indicated that it intended to establish a requirement for the boards to meet even more frequently—four times a year—and direct that businesses on the off-limits list for one service be off-limits for all services. Although DOD has made resources available to assist servicemembers, it lacks the results-oriented, departmentwide data needed to assess the effectiveness of its PFM programs and provide necessary oversight. The November 2004 DOD instruction that provides guidance to the services on servicemembers’ financial management does not address program evaluation or the reports that services should supply to DOD for its oversight role. In our 2003 report, we noted that an earlier draft of the instruction emphasized evaluating the programs and cited metrics such as the number of servicemembers with wages garnished. DOD officials said that these metrics were eliminated because the services did not want the additional reporting requirements. The only DOD-wide evaluative data available for assessing the PFM programs and servicemembers’ financial conditions were obtained from a general-purpose annual survey that focuses on the financial conditions of servicemembers as well as a range of other unrelated issues. The data were limited because (1) DOD policy officials for the PFM programs can only include a few financial-related items to this general purpose survey, (2) a response rate of 35 percent on a March 2003 active duty survey leads to questions about the generalizability of the findings, and (3) DOD has no means for confirming the self-reported information for survey items that ask about objective events such as filing for bankruptcy. 
Without an oversight framework requiring common DOD-wide evaluation and reporting relationships between DOD and the services, DOD and Congress do not have the visibility or oversight they need to assess the effectiveness of DOD’s financial management training and assistance to servicemembers. In response to a recommendation in our April 2005 report for DOD to develop a DOD-wide oversight framework and formalize its oversight role for the PFM programs, the department indicated that it is pursuing management information that includes personal finances to support its implementation of the President’s Management Agenda and to comply with the Government Performance and Results Act. In summary, as mentioned earlier in my testimony, Congress and DOD have taken steps to decrease the likelihood that deployed and nondeployed servicemembers will experience financial problems. The prior increases in compensation, efforts to increase the financial literacy of servicemembers, and fuller utilization of the tools that DOD has provided for addressing the use of predatory lenders should positively affect the financial conditions of military personnel. While additional efforts are warranted to implement our recommendations on issues such as improving DOD’s oversight framework for assessing its PFM programs, some of these efforts to address the personal financial conditions of servicemembers and correct past programmatic shortcomings are well underway. Sustaining this momentum will be key to minimizing the adverse effects that personal financial management problems can have on the servicemember, unit, and service. Madam Chairwoman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions you may have. For further information regarding this testimony, please contact me at 202-512-6304 or [email protected]. Individuals making key contributions to this testimony include Jack E. Edwards, Assistant Director; Renee S. Brown; Marion A.
Gatling; Cody Goebel; Barry Kirby; Marie A. Mak; Terry Richardson; and John Van Schaik.

Financial Product Sales: Actions Needed to Protect Military Members. GAO-06-245T. Washington, D.C.: November 17, 2005.
Financial Product Sales: Actions Needed to Better Protect Military Members. GAO-06-23. Washington, D.C.: November 2, 2005.
Military Personnel: DOD Needs Better Controls over Supplemental Life Insurance Solicitation Policies Involving Servicemembers. GAO-05-696. Washington, D.C.: June 29, 2005.
Military Personnel: DOD’s Comments on GAO’s Report on More DOD Actions Needed to Address Servicemembers’ Personal Financial Management Issues. GAO-05-638R. Washington, D.C.: May 11, 2005.
Military Personnel: More DOD Actions Needed to Address Servicemembers’ Personal Financial Management Issues. GAO-05-348. Washington, D.C.: April 26, 2005.
Military Personnel: DOD Tools for Curbing the Use and Effects of Predatory Lending Not Fully Utilized. GAO-05-349. Washington, D.C.: April 26, 2005.
Credit Reporting Literacy: Consumers Understood the Basics but Could Benefit from Targeted Educational Efforts. GAO-05-223. Washington, D.C.: March 16, 2005.
DOD Systems Modernization: Management of Integrated Military Human Capital Program Needs Additional Improvements. GAO-05-189. Washington, D.C.: February 11, 2005.
Highlights of a GAO Forum: The Federal Government’s Role in Improving Financial Literacy. GAO-05-93SP. Washington, D.C.: November 15, 2004.
Military Personnel: DOD Needs More Data Before It Can Determine if Costly Changes to the Reserve Retirement System Are Warranted. GAO-04-1005. Washington, D.C.: September 15, 2004.
Military Pay: Army Reserve Soldiers Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-911. Washington, D.C.: August 20, 2004.
Military Pay: Army Reserve Soldiers Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-990T. Washington, D.C.: July 20, 2004.
Military Personnel: Survivor Benefits for Servicemembers and Federal, State, and City Government Employees. GAO-04-814. Washington, D.C.: July 15, 2004.
Military Personnel: DOD Has Not Implemented the High Deployment Allowance That Could Compensate Servicemembers Deployed Frequently for Short Periods. GAO-04-805. Washington, D.C.: June 25, 2004.
Military Personnel: Active Duty Compensation and Its Tax Treatment. GAO-04-721R. Washington, D.C.: May 7, 2004.
Military Personnel: Observations Related to Reserve Compensation, Selective Reenlistment Bonuses, and Mail Delivery to Deployed Troops. GAO-04-582T. Washington, D.C.: March 24, 2004.
Military Personnel: Bankruptcy Filings among Active Duty Service Members. GAO-04-465R. Washington, D.C.: February 27, 2004.
Military Pay: Army National Guard Personnel Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-413T. Washington, D.C.: January 28, 2004.
Military Personnel: DOD Needs More Effective Controls to Better Assess the Progress of the Selective Reenlistment Bonus Program. GAO-04-86. Washington, D.C.: November 13, 2003.
Military Pay: Army National Guard Personnel Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-89. Washington, D.C.: November 13, 2003.
Military Personnel: DFAS Has Not Met All Information Technology Requirements for Its New Pay System. GAO-04-149R. Washington, D.C.: October 20, 2003.
Military Personnel: DOD Needs More Data to Address Financial and Health Care Issues Affecting Reservists. GAO-03-1004. Washington, D.C.: September 10, 2003.
Military Personnel: DOD Needs to Assess Certain Factors in Determining Whether Hazardous Duty Pay Is Warranted for Duty in the Polar Regions. GAO-03-554. Washington, D.C.: April 29, 2003.
Military Personnel: Management and Oversight of Selective Reenlistment Bonus Program Needs Improvement. GAO-03-149. Washington, D.C.: November 25, 2002.
Military Personnel: Active Duty Benefits Reflect Changing Demographics, but Opportunities Exist to Improve. GAO-02-935. Washington, D.C.: September 18, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The finances of servicemembers and their families have been an ongoing concern of Congress and the Department of Defense (DOD), especially in light of more frequent deployments to support conflicts in Iraq and Afghanistan. Adverse effects that may result when servicemembers experience financial problems include loss of security clearances, criminal or nonjudicial sanctions, adverse personnel actions, or adverse impacts on unit readiness. To decrease the likelihood that servicemembers will experience financial problems, DOD has requested and Congress has granted annual increases in military basic pay for all active duty servicemembers and increases in special pays and allowances for deployed servicemembers. The military has also developed personal financial management (PFM) programs to help avoid or mitigate adverse effects associated with personal financial problems. However, studies published in 2002 showed that servicemembers continue to report financial problems. This testimony provides a summary of GAO's prior work examining (1) the extent to which deployments have affected the financial conditions of active duty servicemembers and their families, and (2) steps that DOD has taken to assist servicemembers with their financial needs. DOD data suggests that deployment status does not affect the financial condition of active duty servicemembers, although some deployed servicemembers faced certain problems. Data from a 2003 DOD-wide survey suggests that servicemembers who were deployed for at least 30 days reported similar levels of financial health or problems as those who had not deployed. For example, of junior enlisted personnel, 3 percent of the deployed group and 2 percent of the nondeployed group indicated that they were in "over their heads" financially; and 13 percent of the deployed group and 15 percent of the nondeployed group responded that they found it "tough to make ends meet but keeping your head above water" financially. 
However, problems receiving family separation allowance and communicating with creditors may result in financial difficulties for some deployed servicemembers. Based on DOD pay data for January 2005, almost 6,000 of 71,000 deployed servicemembers who had dependents did not obtain their family separation allowance in a timely manner. Furthermore, problems communicating with creditors--caused by limited Internet access, few telephones and high fees, and delays in receiving ground mail--can affect deployed servicemembers' abilities to resolve financial issues. Additionally, some financial products marketed to servicemembers may negatively affect their financial condition. DOD has taken a number of steps to assist servicemembers with their financial needs, although some of this assistance has been underutilized. These steps include PFM training for servicemembers, which is required by all four military services. DOD also provides free legal assistance on purchase contracts for large items and other financial documents. However, according to the attorneys and other personnel, servicemembers do not make full use of available legal services because they may not take the time to visit the attorney's office or they fear information about a financial problem would get back to the command and limit their career progression. In addition, each service has a relief or aid society designed to provide financial assistance through counseling and education as well as financial relief through grants or no-interest loans. Some servicemembers in our focus groups stated that they would not use relief from a service society because they take too long, are intrusive, require too much in-depth financial information, or may be career limiting if the command found out. Servicemembers may use non-DOD resources if they do not want the command to be aware of their financial conditions or they need products or support not offered through DOD, the services, or the installation. 
Although DOD has taken these steps to assist servicemembers with their financial needs, it does not have the results-oriented departmentwide data needed to assess the effectiveness of its PFM programs and provide necessary oversight. Without an oversight framework requiring evaluation and a reporting relationship between DOD and the services, DOD and Congress do not have the visibility or oversight needed to assess the effectiveness of DOD's financial management training and assistance to servicemembers.
Advances in information technology and the explosion in computer interconnectivity have had far-reaching effects, including the transformation from a paper-based to an electronic business environment and the capability for rapid communication through e-mail. Although these developments have led to improvements in speed and productivity, they also pose challenges, including the need to manage those e-mail messages that may be federal records. Under the Federal Records Act, NARA is given general oversight responsibilities for records management as well as general responsibilities for archiving. This includes the preservation in the National Archives of the United States of permanent records documenting the activities of the government. NARA thus oversees agency management of temporary and permanent records used in everyday operations and ultimately takes control of permanent agency records judged to be of historic value. (Of the total number of federal records, less than 3 percent are designated permanent.) In particular, NARA is responsible for issuing records management guidance; working with agencies to implement effective controls over the creation, maintenance, and use of records in the conduct of agency business; providing oversight of agencies’ records management programs; approving the disposition (destruction or preservation) of records, and providing storage facilities for agency records. The act also gives NARA the responsibility for conducting inspections or surveys of agency records and records management programs. The act requires each federal agency to make and preserve records that (1) document the organization, functions, policies, decisions, procedures, and essential transactions of the agency and (2) provide the information necessary to protect the legal and financial rights of the government and of persons directly affected by the agency’s activities.
These records, which include e-mail records, must be effectively managed. As used in this chapter, “records” includes all books, papers, maps, photographs, machine readable materials, or other documentary materials, regardless of physical form or characteristics, made or received by an agency of the United States Government under Federal law or in connection with the transaction of public business and preserved or appropriate for preservation by that agency or its legitimate successor as evidence of the organization, functions, policies, decisions, procedures, operations, or other activities of the Government or because of the informational value of data in them. Library and museum material made or acquired and preserved solely for reference or exhibition purposes, extra copies of documents preserved only for convenience of reference, and stocks of publications and of processed documents are not included. As the definition shows, although government documentary materials (including e-mails) may be “records” in this sense, many are not. For example, not all e-mails document government “organization, functions, policies, decisions, procedures, operations, or other activities” or contain data of informational value. According to NARA, the activities of an agency records management program include, briefly, the following: identifying records and sources of records; developing a file plan for organizing records, including identifying the classes of records that the agency produces; developing records schedules—that is, proposing for each type of content where and how long records need to be retained and their final disposition (destruction or preservation) based on time, or event, or a combination of time and event; and providing records management guidance to agency staff, including agency-specific recordkeeping practices that establish what records need to be created in order to conduct agency business.
Developing record schedules is a cornerstone of the records management process. Scheduling involves not individual documents or file folders, but rather broad categories of records. Traditionally, these were record series: that is, “records arranged according to a filing system or kept together because they relate to a particular subject or function, result from the same activity, document a specific kind of transaction, take a particular physical form, or have some other relationship arising out of their creation, receipt, or use, such as restrictions on access and use.” More recently, NARA introduced flexible scheduling, which allows so-called “big bucket” or large aggregation schedules for temporary and permanent records. Under this approach, the schedule applies not necessarily to records series, but to all records relating to a work process, group of work processes, or a broad program area to which the same retention time would be applied. To develop records schedules, agencies identify and inventory records, and NARA’s appraisal archivists work with agencies to appraise their value (which includes informational, evidential, and historical value), determine whether they are temporary or permanent, and determine how long the temporary records should be kept. NARA then approves the necessary records schedules. No record may be destroyed unless it has been scheduled, and for temporary records the schedule is of critical importance because it provides the authority to dispose of the record after a specified time period. Records schedules may be of two kinds: an agency-specific schedule or a general records schedule, which covers records common to several or all agencies. According to NARA, general records schedules cover about a third of all federal records. For the other two-thirds, NARA and the agencies must agree upon specific records schedules. 
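The elements of a records schedule described above (a record category, a retention period tied to a time or event trigger, and a final disposition) can be sketched as a simple data structure. This is a hypothetical illustration only; the field names and the sample entry are invented for the sketch and are not drawn from any actual NARA schedule.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class ScheduleItem:
    category: str          # record series, or a "big bucket" aggregation
    permanent: bool        # permanent records go to NARA; temporary ones are destroyed
    retention_years: int   # retention period for temporary records
    trigger: str           # event that starts the retention clock, e.g. "case closed"

    def disposal_date(self, trigger_date: date) -> Optional[date]:
        """Earliest date a temporary record may be destroyed; None if permanent."""
        if self.permanent:
            return None  # permanent records are never destroyed, only transferred
        return trigger_date + timedelta(days=365 * self.retention_years)

# Invented sample entry for illustration.
item = ScheduleItem("routine correspondence", permanent=False,
                    retention_years=2, trigger="end of calendar year")
```

Under a “big bucket” approach, the `category` field would name a broad program area or work process rather than a narrow record series, so the same retention applies to everything in the bucket.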
Once a schedule has been approved, the agency is to issue it as a management directive, train employees in its use, apply its provisions to temporary and permanent records, and ensure proper implementation. The Federal Records Act covers documentary material regardless of physical form or media, but until the advent of computers, records management and archiving had been largely focused on handling paper documents. As information is increasingly created and stored electronically, records management has had to take into account the creation of records in varieties of electronic formats, including e-mail messages. NARA has promulgated regulations at 36 C.F.R. Part 1234 that provide guidance to agencies about the management of electronic records. This guidance is supplemented by the issuance of periodic NARA bulletins and other forms of guidance to agencies. To ensure that the management of agency electronic records is consistent with the Federal Records Act, NARA requires each agency to maintain an inventory of all agency information systems that identifies basic facts about each system and the information it contains, and it requires that agencies schedule the electronic records in its systems. Like other records, electronic records must be scheduled either under agency-specific schedules or pursuant to a general records schedule. According to the regulation, agencies are required to establish policies and procedures that provide for appropriate retention and disposition of electronic records. In addition to including general provisions on electronic records, agency procedures must specifically address e-mail records: that is, the creation, maintenance and use, and disposition of federal records created by individuals using electronic mail systems. 
NARA’s regulations define an e-mail record as “a document created or received on an electronic mail system including brief notes, more formal or substantive narrative documents, and any attachments, such as word processing and other electronic documents, which may be transmitted with the message.” The regulation requires e-mail records to be managed as are other potential federal records with regard to adequacy of documentation, recordkeeping requirements, agency records management responsibilities, and records disposition. This entails, in particular, ensuring that staff are aware that e-mails are potential records and training them in identifying which e-mails are records. Specific requirements for e-mail records include, for example, that for each e-mail record, agencies must preserve transmission data, including names of sender and addressees and message date, because these provide context that may be needed for the message to be understood. Further, except for a limited category of “transitory” e-mail records, agencies are not permitted to store the recordkeeping copy of e-mail records in the e-mail system, unless that system has all the features of a recordkeeping system; table 1 lists these required features. If agency e-mail systems do not have the required recordkeeping features, either agencies must copy e-mail records to a separate electronic recordkeeping system, or they must print e-mail messages (including associated transmission information that is needed for purposes of context) and file the copies in traditional paper recordkeeping files. NARA’s guidance allows agencies to use either paper or electronic recordkeeping systems for record copies of e-mail messages, depending on the agencies’ business needs. Each of the required features listed in table 1 is important because it helps ensure that e-mail records remain both accessible and usable during their useful lives.
For example, it is essential to be able to classify records according to their business purpose so that they can be retrieved in case of mission need. Further, if records cannot be retrieved easily and quickly, or they are not retained in a usable format, they do not serve the mission or historical purpose that led to their being preserved. In many cases, e-mail systems do not have the features in the table. If e-mail records are retained in such systems and not in recordkeeping systems, they may be harder to find and use, as well as being at increased risk of loss from inadvertent or automatic deletion. Agencies must also have procedures that specifically address the destruction of e-mail records. In particular, e-mail records may not be deleted or otherwise disposed of without prior authority from NARA. (Recall that not all e-mail is record material. Agencies may destroy nonrecord e-mail.) Agencies can dispose of e-mail records in three situations: First, agencies are authorized to dispose of e-mail records with very short-term (transitory) value that are stored in e-mail systems at the end of their retention periods (as mentioned earlier). Second, for other records in e-mail systems, NARA authorizes agencies to delete the version in the e-mail system after the record has been preserved in a recordkeeping system along with all appropriate transmission data. Finally, agencies are authorized to dispose of e-mail records in the recordkeeping system in accordance with the appropriate records schedule. If the records in the recordkeeping system are not scheduled, the agency must schedule them before they can be disposed of. Because of its nature, e-mail can present particular challenges to records management. First, the information contained in e-mail records is not uniform.
This is in contrast to many information systems, particularly those in computer centers engaged in large-scale data processing, which contain structured data that generally can be categorized into a relatively limited set of logical groupings. The information in e-mail systems, on the other hand, is not structured in this way: it may concern any subject or function and document various types of transactions. As a result, in many cases, decisions on which e-mail messages are records must be made individually. The kinds of considerations that may go into determining the record status of an e-mail message are illustrated in figure 1. As shown by the decision tree in the figure (developed at Sandia National Laboratories), agency staff have to be aware of the defining features of a record in order to make these decisions. Second, the transmission data associated with an e-mail record—including information about the senders and receivers of messages, the date the message was sent, and any attachments to the messages—provide context that may be crucial to understanding the message. Thus, as NARA’s e-mail regulations and guidance reflect, transmission data must be retained, and attachments are defined as part of the e-mail record. Third, a given message may be part of an exchange of messages between two or more people within or outside an agency, or even of a string (sometimes branching) of many messages sent and received on a given topic. In such cases, agency staff need to decide which message or messages should be considered records and who is responsible for storing them in a recordkeeping system. Finally, the large number of federal e-mail users and high volume of e-mails increase the management challenge. According to NARA, the use of e-mail results in more records being created than in the past, as it often replaces phone conversations and face-to-face meetings that might not have been otherwise recorded.
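The kind of record-status decision described above can be sketched as a small function. This is a hypothetical illustration built from the statutory definition quoted earlier in this chapter; the parameter names and the particular tests are invented for the sketch and do not reproduce the Sandia decision tree or NARA’s actual criteria.

```python
def is_federal_record(made_or_received_in_official_business: bool,
                      documents_agency_activity: bool,
                      has_informational_value: bool,
                      is_reference_or_exhibit_copy: bool,
                      is_extra_convenience_copy: bool) -> bool:
    """Rough sketch: should this e-mail be treated as a federal record?"""
    # Statutory exclusions: library/museum material kept solely for reference
    # or exhibition, and extra copies kept only for convenience, are not records.
    if is_reference_or_exhibit_copy or is_extra_convenience_copy:
        return False
    # Material made or received in the transaction of public business is a
    # record if it documents agency organization, functions, policies,
    # decisions, procedures, operations, or other activities, or if it has
    # informational value.
    return made_or_received_in_official_business and (
        documents_agency_activity or has_informational_value)
```

In practice, as the text notes, each user must make this judgment message by message, which is why staff training and awareness are central to the regulation’s requirements.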
E-mail may also replace other types of written communications, such as letters and memorandums. Whether agencies use paper-based or electronic recordkeeping systems, individual users generally make decisions (based on considerations such as those in the figure) on what messages they judge to be records. In paper-based systems, users then print and file e-mail records—with appropriate transmission data—in the appropriate file structure (generally corresponding to record series or schedule). In electronic systems, the particular steps to file the record would vary depending on the particular type of system and its degree of integration with the agency’s other information systems. Although details vary, an electronic recordkeeping system, like a paper-based system, requires that a filing structure has been established by which records can be associated with the appropriate series. The advantages of using a paper-based system for record copies of e-mails are that this approach takes advantage of the recordkeeping system already in place for the agency’s paper files and requires little or no technological investment. The disadvantages are that a paper-based approach depends on manual processes and requires electronic material to be converted to paper, potentially losing some features of the electronic original; these processes may be especially burdensome if the volume of e-mail records is large. The advantage of using an electronic recordkeeping system, besides avoiding the need to manage paper, is that it can be designed to capture certain required data (such as e-mail transmission data) automatically. Electronic recordkeeping systems also make searches for records on particular topics much more efficient. In addition, electronic systems that are integrated with other applications may have features that make it easier for the user to identify records and that potentially could provide automatic or partially automatic classification functions.
However, as with other information technology investments, acquiring an electronic recordkeeping system requires careful planning and analysis of agency requirements and business processes; in addition, electronic recordkeeping raises the issue of maintaining electronic information in an accessible form throughout its useful life. Finally, like paper-based systems, electronic recordkeeping systems must be used properly by employees to be effective. These challenges have been recognized by NARA and the records management community in numerous studies and articles. A 2001 survey of federal recordkeeping practices conducted by a contractor—SRA International—for NARA concluded, among other things, that managing e-mail was a major records management problem and that the quality of recordkeeping varied considerably across agencies. The authors also commented on features of agency missions that lead to strong recordkeeping practices: “When agencies have a strong business need for good recordkeeping, such as the threat of litigation or an agency mission that revolves around maintaining ‘case’ files, then recordkeeping practices tend to be relatively strong with regard to the records involved.” In addition, the study concluded that for many federal employees, the concept of a “record” and what should be scheduled and preserved was not clear. A 2005 survey of federal agencies’ policy and practices for electronic records management, funded in part by NARA, concluded that procedures for managing e-mail were underdeveloped. The study found that most of the surveyed offices had not developed electronic recordkeeping systems, but were instead maintaining recordkeeping copies of e-mail and other electronic documents in paper format. However, all of the offices also maintained electronic records (frequently electronic duplicates of paper records).
According to the study team, agencies did not establish electronic recordkeeping systems partly because of a lack of support and resources, and the complexity of implementing such systems increased with the size of the agency. As a result, organizations were maintaining unsynchronized parallel paper and electronic systems, resulting in extra work, confusion regarding which is the recordkeeping copy, and retention of many records beyond their disposition date. The study team also concluded that disposition of electronic records was too cumbersome and uncertain. According to the report, employees delete electronic records, such as e-mails, one at a time, a cumbersome process which may result in retention of too many records for too long or premature disposition that is inconsistent with approved retention schedules. (This is in contrast to records disposition in a recordkeeping system, in which categories of temporary records may be disposed of at the end of their retention periods.) The report also discussed NARA’s role in promoting agencies’ adoption of electronic recordkeeping systems. Commenting on these points, NARA expressed the view that for agencies that maintain paper as the record copy, the early destruction of electronic copies was not a significant problem because such copies generally have very short term retentions, and no information is lost. It considered that the overly long retention of electronic copies did raise concerns regarding legal discovery and compliance with requests under the Freedom of Information Act or the Privacy Act. In these circumstances, agencies are required to search for all information, not just information in recordkeeping systems; thus, maintaining large volumes of nonrecord material increases this burden. 
Most recently, in 2007, a NARA study team examined the experiences of five federal agencies (including NARA itself) with electronic records management applications, with a particular emphasis on how these organizations used these applications to manage e-mail. The purpose of the study was to gather information on the strategies that organizations are using that may be useful to others. Among the major conclusions from the survey was that implementing an electronic records management application requires considerable effort in planning, testing, and implementation, and that although the functionality of the software product itself is important, other factors are also crucial, such as agency culture and the quality of the records management system in place. With regard to e-mail in particular, the survey concluded that for some agencies, the volume of e-mail messages created and received may be too overwhelming to be managed at the desktop by thousands of employees across many sites using a records management application alone, and that e-mail messages can constitute the most voluminous type of record that is filed into these applications. Finally, the study recommended further examination of the technologies being used to manage e-mail and of what federal agencies are doing with their record e-mail messages. NARA is planning to perform such a study in 2008. According to NARA, the study will take a close look at how selected agencies are implementing electronic recordkeeping for their program records, including those e-mail messages that need to be retained and managed as federal records. The study will look at electronic recordkeeping projects that have a records management application in place as well as other solutions that provide recordkeeping functionality. In both cases, NARA plans to explore how e-mail messages in particular are identified and managed as records.
According to NARA officials, they have begun planning for the study and identifying agencies to be included; they expect to have the report completed by the end of September 2008. Such a study could provide useful information to help NARA develop additional guidance to agencies looking for electronic solutions for records management of e-mail and other electronic records. As the earlier studies suggest, implementing such solutions is not a simple or easy process. Although NARA has referred to the decision to move to electronic recordkeeping as inevitable, it emphasizes that the timing of the decision depends on an agency’s specific mission and circumstances. For the last several years, NARA’s records management program has increasingly reflected the importance of electronic records and recordkeeping. For example, NARA has undertaken a redesign of its records management activities, including (among other things) the following three activities, which are significant for management of electronic records, including e-mail: NARA established flexible scheduling (the so-called “big bucket” approach described earlier), under which agencies can schedule records at any level of aggregation that meets their business needs. By simplifying disposition instructions, “big bucket” schedules have advantages for electronic records management; filing e-mail records under a “big bucket” system, for example, is simplified because users can be presented with fewer filing categories. NARA developed e-mail regulations that eliminated the previous requirement to file transitory e-mail dealing with routine matters in a formal agency recordkeeping system. According to NARA, this change would allow agencies to focus their resources on managing e-mail that is important for long-term documentation of agency business. The change was reflected in a revision to General Records Schedule 23 that explicitly included very short-term temporary e-mail messages. 
The final rule became effective on March 23, 2006. NARA developed regulations and guidance to make retention schedules media neutral. According to NARA, its objective was to eliminate routine rescheduling work so that agencies and NARA could focus their resources on high records management priorities. Under its revised regulations, in effect as of December 2007, new records schedules would be media neutral unless otherwise specified. At the same time, NARA revised General Records Schedule 20 (which provides disposition authorities for electronic records) to expand agencies’ authority to apply previously approved schedules to electronic records and to dispose of hard copy records that have been converted to an electronic format, among other things. In July 1999, we reported that NARA and federal agencies were facing the substantial challenge of managing and preserving electronic records in an era of rapidly changing technology. In that report, we stated that in addition to handling the burgeoning volume of electronic records, NARA and the agencies would have to address several hardware and software issues to ensure that electronic records were properly created, maintained, secured, and retrievable in the future. We also noted that NARA did not have governmentwide data on the records management capabilities and programs of all federal agencies. As a result, we recommended that NARA conduct a governmentwide survey of agencies’ electronic records management programs and use the information as input to its efforts to reengineer its business processes. NARA subsequently undertook efforts to assess governmentwide records management practices and study the redesign of its business processes. As mentioned earlier, in 2001 NARA completed an assessment of governmentwide records management practices, as we had recommended. 
NARA’s assessment of the federal recordkeeping environment concluded that although agencies were creating and maintaining records appropriately, most electronic records remained unscheduled, and records of historical value were not being identified and provided to NARA for archiving. In 2002, we reported that factors contributing to the problems of managing and preserving electronic records included records management guidance that was inadequate in the current technological environment, the low priority often given to records management programs, and the lack of technology tools to manage electronic records. In addition, NARA did not perform systematic inspections of agency records management, so that it did not have comprehensive information on implementation issues and areas where guidance needed strengthening. Although NARA had plans to improve its guidance and address technology issues, these did not address the low priority generally given to records management programs nor the inspection issue. With regard to inspections, we noted that in 2000, NARA had replaced agency evaluations (inspections) with a new approach—targeted assistance—because it considered that its previous approach to evaluations had been flawed: it reached only a few agencies, it was often perceived negatively, and it resulted in a list of records management problems that agencies then had to resolve on their own. Under targeted assistance, NARA entered into partnerships with federal agencies to provide them with guidance, assistance, or training in any area of records management. Despite the possible benefits of such assistance to the targeted agencies, however, we concluded that it was not a substitute for systematic inspections. Only agencies requesting assistance were evaluated, and the scope and focus of the assistance were determined not by NARA but by the requesting agency. Thus, it did not provide systematic and comprehensive information for assessing progress over time. 
To address the low priority generally given to records management programs, we recommended that NARA develop a strategy for raising agency senior management awareness of and commitment to records management. To address the inspection issue, we recommended that NARA develop a strategy for conducting systematic inspections of agency records management programs to (1) periodically assess agency progress in improving records management programs and (2) evaluate the efficacy of NARA’s governmentwide guidance. In response to our recommendations, NARA devised a strategy for raising awareness among senior agency management of the importance of good federal records management, as well as a comprehensive approach to improving agency records management that included inspections and identification of risks and priorities. NARA also took steps to improve federal records management programs by updating its guidance to reflect new types of electronic records. In 2003, we testified that the plan for improving agency records management did not include provisions for using inspections to evaluate the efficacy of its governmentwide guidance, and an implementation plan for the approach had not yet been established. NARA later addressed these shortcomings by developing an implementation plan that included using agency inspections to evaluate the efficacy of its guidance, with such inspections to be undertaken based on a risk-based model, government studies, or media reports. Such an approach, if appropriately implemented, had the potential to help avoid the weaknesses in records management programs that led to the scheduling and disposition problems that we and NARA had described in earlier work. To fulfill its responsibility under the Federal Records Act for oversight of agency records management programs, NARA planned to conduct activities including inspections, studies, and reporting. 
However, despite NARA’s plans, in recent years its oversight activities have been primarily limited to performing studies. Although it has performed or sponsored six records management studies since 2003, it has not conducted any inspections since 2000. In addition, although NARA’s reporting to the Congress and OMB has generally described progress in improving records management at individual agencies and provided an overview of some of its major records management activities, it has not consistently provided evaluations of responses by federal agencies to its recommendations, as required, or details on records management problems or recommended practices that were discovered as a result of inspections, studies, or targeted assistance projects. Without a consistent oversight program that provides it with a governmentwide perspective, NARA has limited assurance that agencies are appropriately managing the records in their custody, thus increasing the risk that important records will be lost. Oversight is a key activity in governance that addresses whether organizations are carrying out their responsibilities and serves to detect other shortcomings. Our reports emphasize the importance of effective oversight of government operations by individual agency management, by agencies having governmentwide oversight responsibilities, and by the Congress. Various functions and activities may be part of oversight, including monitoring, evaluating, and reporting on the performance of organizations and their management and holding them accountable for results. The Federal Records Act gave NARA responsibility for oversight of agency records management programs by, among other functions, making it responsible for conducting inspections or surveys of agencies’ records and records management programs and practices; conducting records management studies; and reporting the results of these activities to the Congress and OMB. 
In particular, the reports are to include evaluations of responses by agencies to any recommendations resulting from inspections or studies that NARA conducts and, to the extent practicable, estimates of costs to the government if agencies do not implement such recommendations. According to NARA, it planned to carry out its oversight responsibilities using inspections, studies, and reporting. Specifically, in 2003, NARA stated that it would perform inspections of agency records and records management programs; conduct studies that focus on cross-government issues, analyze and identify best practices, and use the results to develop governmentwide recommendations and guidance; and report to the Congress and OMB on problems and recommended practices discovered as part of inspections, studies, and targeted assistance projects. Although inspections were included in NARA’s oversight plans in 2003, NARA has not conducted any since 2000. NARA laid out a strategy for performing inspections and studies in 2003 as part of its records management redesign efforts. According to this strategy, NARA anticipated undertaking inspections only under what it termed exceptional circumstances: that is, if (1) agencies have high-level records management problems that put at risk federal records that protect rights, assure accountability, or document the national experience, and (2) agencies refuse targeted assistance from NARA and fail to mitigate or otherwise effectively deal with such risks. In other words, NARA considered inspections its tool of last resort: to be used when the risk to records was deemed high and other tools (such as targeted assistance and training) failed to mitigate the risk to records. Under this strategy, NARA planned to determine when to undertake inspections based on its risk-based resource allocation model (or when it learned through other means of a clear and egregious records management problem in an agency or line of business).
Using this model, developed in 2003, NARA’s Resource Allocation Project performed a governmentwide assessment in 2004 of high-priority federal records and records programs. After reviewing program areas and work processes of the government (as opposed to organizational units), the project identified the business processes, subfunctions, and agency activities that were likely to generate the majority of high-priority records. Based on input and assessments from NARA staff with expertise in the subfunctions and associated agencies, the project then rated the subfunctions according to three criteria for establishing resource priorities: the risk to records (based on such factors as whether the subfunctions or associated agencies had experienced major scheduling issues or known problems, such as allegations of unauthorized destruction of records), the level of significance of the records to rights and accountability, and the likelihood that the subfunction would generate permanent records (and if so, their volume and significance). According to the final report on the project, this assessment showed that the risks to records were being addressed and managed by the Archives’ own records management activities and those of the agencies. As a result, the Resource Allocation Project did not lead to the identification of records management risks that met the new inspection criteria. Instead, NARA applied its resources to other activities that it considered more effective and less resource-intensive than the inspections it undertook in the past. These include regular contacts between appraisal archivists and agencies, updated guidance information, and training. However, the Resource Allocation Project was primarily based on NARA’s in-house information sources and expertise. 
Although this information and expertise may be considerable, and collecting and assessing it is potentially valuable, it is not a substitute for examinations of agency programs, surveys of practices, agency self-assessments, or other external sources of information. Further, although the final report on the 2004 project included important lessons learned for improving future assessments, NARA did not set up a process for continuing the effort and applying the lessons learned to updating the assessment or validating its results. Officials had also stated that targeted assistance was a tool that NARA would use in preference to inspections to solve urgent records management problems and that the results of the Resource Allocation Project were also to be used in determining where to use this tool. However, NARA’s use of targeted assistance has declined significantly over the past 5 years. (NARA reported that in 2002, 77 projects were opened and 76 completed; in contrast, 4 were opened and none completed in 2007.) Officials ascribed the reduced emphasis on targeted assistance projects to various factors, including competing demands (such as work on the development of its advanced electronic records archive and on helping agencies to schedule electronic records), the difficulty of getting agencies to devote resources to the projects, and the removal of numerical targets for targeted assistance projects, which occurred when NARA revised performance metrics to emphasize results rather than quantity. According to NARA, it also works with agencies to address critical records management issues outside formal targeted assistance arrangements. In addition, it identifies and investigates allegations of unauthorized destruction of federal records. Thus, neither inspections nor targeted assistance has made a significant contribution to NARA’s oversight of agency records management.
Without a more comprehensive method of evaluating agency records management programs, NARA lacks assurance that agencies are effectively managing records throughout their life cycle. NARA has performed records management studies in accordance with its 2003 plan. According to the plan, it was to conduct records management studies to focus on cross-government issues, to identify and analyze best practices, and to develop governmentwide recommendations and guidance. In addition, NARA planned to undertake records management studies when it believed an agency or agencies in a specific line of business were using records management practices that could benefit the rest of that line of business or the federal government as a whole. Since developing its 2003 plan, NARA has conducted or sponsored six records management studies (see table 2). Most of these studies were focused on records management issues with wide application. For example, two were related to helping NARA improve its guidance on particular types of records—health and safety records, and research and development (R&D) records. Another two were limited in scope to components of a single agency, but they addressed issues with potentially broad application and included conclusions regarding factors that needed to be considered in the appraisal of given types of records. Under the Federal Records Act, NARA is responsible for reporting the results of its records management activities to the Congress and OMB, including evaluations of responses by agencies to any recommendations resulting from its inspections or studies and (where practicable) estimates of costs if its recommendations are not implemented. Further, NARA’s plan for carrying out its oversight responsibilities states that it will report to the Congress and OMB on problems and recommended practices discovered as part of inspections, studies, and targeted assistance projects.
According to NARA, it fulfills its statutory reporting requirement through annual Performance and Accountability Reports, which include sections on “Federal Records Management Evaluations.” However, although NARA has issued reports on its records management studies, the Federal Records Management Evaluations sections of the Performance and Accountability Reports have not included the studies’ results or evaluations of responses by agencies to its recommendations. Instead, the reports have generally provided an overview of NARA’s major records management activities, as well as descriptions of noteworthy records management progress at individual agencies. For example, the report for fiscal year 2007 provided statistics on the appraisal and scheduling of electronic records systems and listed agencies that had scheduled electronic records or transferred permanent electronic records to NARA during the fiscal year. Elsewhere in the reports, NARA mentioned four of the six records management studies as part of its reporting on records management goals. However, it included few details on the results of these studies regarding the records management problems or recommended practices that they uncovered. For example, in the fiscal year 2005 Performance and Accountability Report, NARA reported that it had completed a January 2005 study on Air Force Headquarters offices (see table 2), but NARA did not discuss the results, and later reports did not discuss actions taken in response to its recommendations. Similarly, the fiscal year 2007 Performance and Accountability Report did not describe any actions that the Department of Energy had taken in response to an August 2006 study. Also, in 2007, NARA stopped reporting on its targeted assistance projects.
In prior years, its Performance and Accountability Reports generally provided statistics on targeted assistance projects and described their general goals, although the reports did not generally discuss problems or recommended practices resulting from them. In the fiscal year 2007 report, NARA stated that the strategies described in its Strategic Directions, including targeted assistance, had become part of its standard business practices and would no longer be highlighted individually. However, as mentioned earlier, the number of targeted assistance projects had declined significantly by that time. The Director and senior officials from NARA’s Modern Records Program agreed that the annual reports did not specify the problems and recommended practices discovered as part of inspections, studies, and targeted assistance projects. According to these officials, the annual Performance and Accountability Reports have been focused on positive news, and NARA has struggled with developing an objective way to report negative news about agencies’ records management. The officials attributed this difficulty to the agency’s conservatism in this regard. NARA’s limited use of oversight tools and incomplete reporting on the specific results of its oversight activities can be attributed to an organizational preference for using persuasion and cooperation when working with agencies. This preferred approach is consistent with NARA’s reasons (as we noted in 2003) for replacing agency evaluations (inspections) with targeted assistance: among these reasons was that inspections were perceived negatively by agencies. NARA officials have said that they prefer to use “carrots, rather than sticks.” NARA officials added that full-scale inspections were resource intensive and took several years to complete, and that agencies took years to address NARA’s recommendations. 
Although, as described earlier, NARA regularly works with agencies on scheduling and disposition of records (activities related to the end of the records life cycle), officials agreed that these activities provide limited insight into records management at earlier stages—that is, creation, maintenance, and use. The officials also agreed that their work with agencies on scheduling records does not fulfill the Archivist’s responsibility under the Federal Records Act to conduct inspections or surveys of agency records and records management programs and practices. Further, by giving the Archivist the responsibility to report to the Congress and OMB on records management issues, the Federal Records Act provides NARA with a tool for holding agencies accountable, a key aspect of oversight. However, NARA has been reluctant to use this tool, limiting its ability to determine whether federal agencies are carrying out their records management responsibilities. Without more specific and comprehensive information about how agencies are managing their records and without the means to hold agencies accountable for shortcomings, NARA’s ability to identify and address common records management problems is impaired. As a result, there is reduced assurance that records are adequately managed and that important records are not being lost.

The four agencies reviewed—the Department of Homeland Security (DHS); the Environmental Protection Agency (EPA); the Federal Trade Commission (FTC); and the Department of Housing and Urban Development (HUD)—generally preserved e-mail records through paper-based processes, although one agency—EPA—is in the process of deploying an electronic content management system that is to be used for managing e-mail messages that are agency records; two others have long-term plans to develop electronic recordkeeping.
Three of the four agencies also used electronic systems to manage documents, correspondence, and so on, but these systems generally did not have recordkeeping features. Each of the business units that we reviewed (one at each agency) maintained “case” files in fulfilling its mission, and these files were used for recordkeeping. The practice at the units was to include e-mail printouts in the case files if they contained information necessary to document the case—that is, record material. These printouts included transmission data and distribution lists, as required.

DHS: DHS primarily uses “print and file” recordkeeping for all records. None of the department’s e-mail systems is a recordkeeping system; accordingly, they may be used to store only transitory e-mail records. Officials from the Office of the DHS Chief Information Officer (CIO) told us that DHS e-mail systems house transitory e-mails and retain them for at least 90 days. In addition, according to the CIO office, although employees can currently access Web-based and Internet-accessible private e-mail systems, the department is taking steps to restrict or remove this access. Although its current recordkeeping is generally paper-based, DHS has begun planning for an enterprisewide Electronic Records Management System. According to the business case submitted by DHS to OMB to justify the proposed investment, the proposed system is to allow electronic storage and retrieval of records by authorized staff throughout DHS and permit the elimination of paper file copies. According to the department’s senior records officer, DHS’s current records schedules are now media neutral. DHS’s records management handbook also provides instructions for both electronic and paper e-mail recordkeeping. In addition, DHS CIO officials told us that the department has implemented several electronic knowledge and document management systems, at least two of which have recordkeeping features but are not used for e-mail recordkeeping.
E-mail records were maintained in paper at the DHS business unit reviewed, the Washington Regional Office of Detention and Removal Operations under Immigration and Customs Enforcement (ICE). The primary responsibility of the Office of Detention and Removal Operations is to identify, apprehend, and remove illegal aliens from the United States. To fulfill its mission, the business unit maintained paper-based case files, and these files were used for recordkeeping. To store deportation case information, the unit uses the so-called “alien files” or “A-files.” These files are created by DHS’s Citizenship and Immigration Services for certain noncitizens, such as immigrants, to serve as the one central file for all of the noncitizen’s immigration-related applications and related documents that pertain to that person’s activities. The A-files are managed by Citizenship and Immigration Services and shared among DHS components as necessary. Because A-files are paper-based, they require physical transfer from one location to another. To track these files, DHS uses the National File Tracking System, an automated file-tracking system developed to enable all DHS staff at numerous DHS locations around the country to locate, request, receive, and transfer A-files. Each A-file has a National File Tracking System number. According to business unit officials, e-mails would not usually be found in the A-files because the primary use of e-mail was to share information within the business unit, and so it would rarely rise to the level of a record. The A-files mainly contain other kinds of information, including forms from agency information systems, investigation results, charging documents, conviction documents, photos, fingerprints, and memos. A deportation officer provided 10 active open case files for inspection (each officer is usually responsible for 40 to 60 active open immigration cases).
The 10 case files contained a total of 18 e-mail records, which included transmittal data and distribution lists.

EPA: EPA’s current recordkeeping is largely print and file, but the agency is undergoing a transition to electronic recordkeeping, beginning with e-mail records. According to EPA officials, the commitment to establish its Enterprise Content Management System (ECMS), which has recordkeeping features, was a result of an agency decision to develop a long-term solution to manage hurricane records electronically in the wake of Hurricanes Katrina and Rita. According to a memorandum sent to all EPA employees, the goal was to ensure that these records be placed in a recordkeeping system that met both EPA and NARA requirements, while allowing easy access to the records when needed. At the same time, the agency ordered that the automatic delete function in the agency’s e-mail system be deactivated so that no hurricane records could be deleted accidentally. According to agency officials, the e-mail capability of ECMS was available in fiscal year 2007, and the agency expects that by the end of fiscal year 2009, 50 percent of EPA staff and contractors will be using the system. The ECMS repository is an electronic recordkeeping system that uses commercial software that complies with a standard endorsed by NARA. According to officials, as part of its preparations for the transition, EPA recently updated its record schedules so that its treatment of records would be media neutral; this is to facilitate uploading records into ECMS. It has also developed materials, such as a brochure and a user guide, to support its transition. The agency’s e-mail systems are not currently used as recordkeeping systems and will not be under ECMS. Accordingly, they can be used to store only transitory e-mail records. Officials also told us that employees could access Web-based e-mail systems for limited personal use, but that they were not permitted to use these for official business.
E-mail records were maintained in paper at the EPA business unit reviewed, the Assessment and Remediation Division of the Office of Superfund Remediation and Technology Innovation (part of EPA’s Office of Solid Waste and Emergency Response). Among other things, this division processes claims related to Superfund cleanup settlements. Officials from the Office of Superfund Remediation and Technology Innovation told us that recordkeeping for this office was print and file, but that employees were also directed to include all records (including e-mail records) into the office’s electronic Superfund Document Management System. This was not a recordkeeping system, but the plan was to integrate it with ECMS for long-term stewardship of Superfund files. According to these officials, they expect to be able to capture Superfund e-mail records in ECMS by fall 2008. Officials of the Assessment and Remediation Division stated that few e-mail messages would be considered records, because most official business regarding claims was conducted through correspondence on letterhead with an original signature. Although copies of these might be sent as e-mail attachments, these officials said, they would not be the official recordkeeping copy. However, division officials stated that e-mail records were more likely to be included in case files regarding “mixed funding” claims related to Superfund cleanup settlements, because these involved communication between regional offices and parties involved in the claims. (Mixed funding refers to the government assuming some proportion of cleanup expenses, with other parties assuming the rest.) According to officials, mixed funding documentation could include e-mail records documenting information to justify claims and facilitate payment. Officials provided a mixed funding case file for inspection, in which they had identified 10 e-mail records. All these records included transmission data and distribution lists, as required.
FTC: FTC recordkeeping for e-mail and other records is print and file. The commission’s e-mail system is not a recordkeeping system, and the commission has not implemented the option allowed by NARA’s guidance to use the e-mail system for storing transitory e-mail records. The agency has no current plans to institute electronic recordkeeping. According to FTC officials, the commission’s processes are largely paper based. The commission’s records management guidance states that few e-mails are expected to rise to the level of a record. For example, agency officials explained that official decisions of the commission are generally reached jointly by the commissioners and recorded in documents such as memorandums, letters, and meeting minutes. According to officials, FTC uses a case management system to track work products (such as depositions, filings, and briefs), but this is not a document management or recordkeeping system. According to officials, about 80 percent of all FTC files are case files. The records manager said that the records schedules for FTC programs currently include instructions for e-mail disposition, but that the office is in the process of conducting a records inventory and reassessing records scheduling, with the next step being to do “big bucket” media-neutral scheduling. According to this official, this approach will provide flexibility in the event that FTC adopts electronic business processes in the future. According to FTC officials, the commission is currently assessing its needs for electronic document management tools, including an electronic recordkeeping system. The CIO told us that agency staff cannot directly access external Web-based e-mail through the agency’s Web browsers, and agency employees have been instructed not to use such systems for official FTC business.
However, this official said that agency employees may use the commission’s remote application delivery environment to obtain limited access to external Web-based e-mail as a convenience. The business unit reviewed at FTC was the Division of Marketing Practices within the Consumer Protection Bureau, which responds to problems of consumer fraud in the marketplace, such as deceptive marketing schemes that use false and misleading information. The division enforces federal consumer protection laws by, among other things, developing rules to protect consumers and filing actions in federal district court for immediate and permanent orders to stop scams and get compensation for scam victims. The business unit follows the FTC’s print and file approach to recordkeeping, saving e-mails and other communications if they are related to a case. At this unit, cases are investigations of Internet fraud and marketing practices, each of which is assigned to a lead attorney. Officials provided one closed case file for inspection, consisting of four boxes of records. The case file provided contained about 65 e-mails, all of which included transmittal data and distribution lists.

HUD: HUD currently uses a print and file approach to e-mail recordkeeping. The department’s e-mail system is not a recordkeeping system, and according to officials, they have not implemented the option allowed by NARA’s guidance to use the e-mail system for storing transitory e-mail records. However, as part of an overall modernization plan, HUD is undertaking an enterprise office system modernization project for its records and document management.
According to the business case submitted by HUD to OMB to justify the modernization investment, the HUD Electronic Record System (HERS) will replace eight legacy systems and support the full life cycle of document management activities and correspondence management, including the creation and processing of records, record disposition, and retrieval of historical archived information. HUD plans to implement HERS by the fourth quarter of 2010. In the first phase of the plan, HUD is implementing modernized systems for tracking correspondence and Freedom of Information Act requests. Although the correspondence system is used for tracking e-mail correspondence, it is not a recordkeeping system for e-mail. The business unit reviewed at HUD was the Office of Healthy Homes and Lead Hazard Control. Among other things, this office manages grants related to lead hazard and conducts investigations to determine compliance with HUD’s Lead Disclosure Rule. HUD records management officials stated that each program area has a file plan, and that the Office of Healthy Homes and Lead Hazard Control has its own records schedule. According to officials from the office, most of their business is transacted via certified mail, so that relatively few e-mail messages would be record material. Two units provided active open files for inspection: nine grant files from six Government Technical Representatives in the Program Management and Assurance Division, and four lead hazard investigation case files from one inspector in the Compliance Assistance and Enforcement Division. The nine grant files included 120 e-mail messages, and the four investigation files included 5 e-mail messages, all in the same case file. All 125 of the e-mail records included transmittal data and distribution lists, as required. 
At three of the four agencies reviewed, the policies in place generally addressed the requirements for e-mail records management that we identified, but each was missing one of the nine requirements. At the fourth agency (HUD), the policies in place did not cover three of eight applicable requirements. According to NARA’s regulations on records management, agencies are required to establish policies and procedures that provide for appropriate retention and disposition of electronic records. In addition to including general provisions on electronic records, agency procedures must address specific requirements for e-mail records. The regulations provide minimum requirements, which allow agencies flexibility to establish processes for managing e-mail records that are appropriate to their business, size, and resources. According to the regulations, certain aspects of e-mail must be addressed in the instructions that agencies provide staff on identifying and preserving electronic mail messages, such as the need to preserve transmission data. Agencies are also required to address the use of external e-mail systems that are not controlled by the agency (such as private e-mail accounts on commercial systems such as Gmail, Hotmail, .Mac, etc.). Where agency staff have access to external systems, agencies must ensure that federal records sent or received on such systems are preserved in the appropriate recordkeeping system and that reasonable steps are taken to capture available transmission and receipt data needed by the agency for recordkeeping purposes. One of the four agencies (HUD) had its systems configured so that staff could not access external e-mail applications; thus, this requirement was not applicable for HUD. In summary, we extracted nine key requirements from the regulation. Agency records management policy and guidance with regard to e-mail must address these requirements, which are shown in table 3.
The policies and guidance at three of the four agencies (DHS, FTC, and EPA) each omitted one applicable requirement. At DHS, the policies and guidance did not state that draft documents circulated on e-mail systems are potential federal records. Department officials told us that they recognized that their policies did not specifically address the need to assess the records status of draft documents, and said they planned to address the omission during an ongoing effort to revise the policies. At EPA and FTC, the e-mail management policy did not instruct staff on the management and preservation of e-mail messages sent or received from nongovernmental e-mail systems. According to officials at both agencies, such instructions were not included because agency employees were instructed not to use such accounts for agency business. However, whenever access to such external systems is available at an agency, the agency should provide these instructions. If agency records management policies and guidance are not complete, agency e-mail records may be at increased risk of loss. If agencies do not state that draft documents circulated on e-mail systems are potential records, agency officials may not preserve such record materials. If agencies do not instruct staff on the management and preservation of e-mail messages sent or received from nongovernmental e-mail systems, officials may create or receive e-mail records in external systems that may not be preserved in recordkeeping systems. In the course of our review at EPA, officials told us that this situation may have arisen: they had discovered that certain e-mail messages for a previous Administrator, possibly including records, had not been saved.
According to these officials, they had discovered an e-mail message from a former Acting Administrator instructing a private consultant not to use the Administrator’s EPA e-mail account to discuss a sensitive government issue (World Trade Center issues) but to use a personal e-mail account. EPA officials reported this incident to NARA on April 11, 2008, in a letter that also described the agency’s response to the incident and planned safeguards to avoid such incidents in the future; these safeguards included the release of a policy statement prohibiting the use of non-EPA messaging systems for the conduct of agency business and a review of e-mail account auto-delete settings. NARA replied on April 30 that the safeguards EPA planned appeared appropriate. Finally, HUD’s policies and guidance did not include, or did not implement, three of eight applicable e-mail records management requirements. For one requirement, HUD’s policy was inconsistent with NARA’s regulations, and it was silent on two of the requirements. HUD did not fully implement the requirement to ensure that staff are capable of identifying federal records because its e-mail policy states that only the sender is responsible for reviewing the record status of an e-mail. However, NARA’s regulation defines e-mail messages as material either created or received on electronic mail systems. HUD officials acknowledged that the department’s policy omits the recipient’s responsibility for determining the record status of e-mail messages and stated that the e-mail policy fell short of fully implementing NARA regulations in this regard because the department’s practice is not to use e-mail for business matters in which official records would need to be created.
However, this practice does not remove the requirement for agency employees to assess e-mail received for its record status, because the agency cannot know that employees will not receive e-mail with record status; the determination of record status depends on the content of the information, not its medium. In addition, two other requirements were missing from HUD’s policy: it did not state, as required, that recordkeeping copies of e-mail should not be stored in e-mail systems or that backup tapes should not be used for recordkeeping purposes. HUD officials stated that they considered that these requirements were met by a reference in their policy to the NARA regulations in which these requirements appear. However, this reference is too general to make clear to staff that e-mail systems and backup tapes are not to be used for recordkeeping. Table 4 summarizes the results for the four agencies. If requirements for e-mail management are not included in agency records management policies and guidance, agency e-mail records may be at increased risk of loss. The loss of records that are important for documenting government functions, activities, decisions, and other important transactions could potentially impair agencies’ ability to carry out their missions.

E-mail messages that qualified as records were not being appropriately identified and preserved for 8 of the 15 senior officials we reviewed. Senior officials at three agencies did not consistently conform to key requirements in NARA’s regulations for e-mail records; only at FTC did the four senior officials fully follow these requirements. The other three agencies showed varying compliance: three officials at DHS, two officials at EPA, and three officials at HUD were not following required e-mail recordkeeping practices. Factors contributing to the inconsistent e-mail recordkeeping practices include inadequate training and oversight.
Other factors included the difficulty of managing large volumes of e-mail in paper-based recordkeeping systems and the stated practice at one agency that e-mail would not be used for record material. As described, the four agencies primarily used “print and file” recordkeeping systems, which require agency staff to print out e-mail messages for filing as the official recordkeeping copies in designated filing systems. Each agency’s policy also required the preservation of e-mail transmission data, distribution lists, and acknowledgments.

DHS: At DHS, our review covered three senior officials because, according to DHS officials, the Secretary of Homeland Security did not use e-mail: these officials told us that the Secretary did not have a DHS e-mail account, and that he did not conduct any official communications using external nongovernmental e-mail systems. For the remaining three officials, the e-mail management practices did not fully comply with the requirements. None of the e-mails of the senior officials were reviewed for their status as a record or filed in an appropriate recordkeeping system. Instead, the officials were using their e-mail accounts to store all e-mails. Two of the three officials personally managed their e-mail accounts; the third shared this responsibility with a member of his staff. The staff of one of the officials who managed his own e-mail had access to the official’s e-mail account, but the staff reviewed or accessed these only if instructed to do so by the official. The department said that the third official’s office administrator had access to calendar functions only. According to one of these senior officials, storing e-mails on the computer is convenient for searching and retrieving. It was this official’s opinion that this approach was safe from a legal standpoint because no e-mails were deleted.
Nonetheless, using an e-mail system to retain all e-mails indefinitely increases the difficulty of performing searches based on categories of records; in contrast, such searches are facilitated by a true recordkeeping system. Further, if e-mail records are not stored in an appropriate recordkeeping system (paper or electronic), there is reduced assurance that they are useful and accessible to the agency as needed, or that they will be retained for the appropriate period.

EPA: At EPA, the e-mail records of two of the four senior officials were being managed in accordance with key requirements reviewed. For these two senior officials, one of whom was the agency head, e-mail records were stored in paper-based recordkeeping systems. The EPA Administrator had two EPA e-mail accounts, one intended for messages from the public and one for communicating with select senior EPA officials (not intended for use by the public). In the paper-based recordkeeping system, of 25 e-mail records inspected, all included transmission data and distribution lists, as required. For the nonpublic account, staff provided eight e-mail records for inspection, all of which also included transmission data and distribution lists. According to EPA officials, the nonpublic account generated few records because the Administrator receives most of his information from other sources, including face-to-face briefings and meetings. For the second senior official, administrative staff told us that the official reviewed e-mail personally and forwarded records to the staff for printing and filing in a paper-based recordkeeping system that followed the agency’s records schedules. We selected 20 e-mails from the official’s files for examination. These files were associated with four EPA records schedules. All of the e-mails included transmission data and distribution lists as required.
The e-mail records of two other senior officials were not being managed in compliance with requirements, because e-mail records were not being stored in appropriate recordkeeping systems, but rather in the e-mail system: One of these officials was in the process of migrating e-mail records from the e-mail system to ECMS. This official had been storing e-mail records in e-mail system folders since January 2006, in anticipation of the rollout of the ECMS, and had not been using a paper-based recordkeeping system in the interim. The e-mail system’s folders were organized according to the agency’s records schedules to facilitate the transfer, which was ongoing. Because this senior official did not store e-mail records in a paper-based recordkeeping system during this transition, the official’s e-mail account was being used as a recordkeeping system, which is contrary to regulation. However, when the transition to the electronic recordkeeping system is complete, the new system should provide the opportunity for this official’s recordkeeping practices to be brought into compliance with requirements. The second official was also saving all e-mail in the e-mail system. EPA officials stated that most of the senior official’s e-mail was sent to an administrative assistant, who was responsible for identifying and maintaining the records received and filing them accordingly. However, the administrative assistant for this official stated that although she had been briefed on maintaining and preserving the senior official’s calendar in a recordkeeping system, she had not received guidance or training in how to preserve or categorize the official’s e-mail for recordkeeping purposes. In addition, the assistant stated that all e-mails remained stored in the e-mail system where they could be retrieved if necessary. FTC: The four senior officials at FTC were managing e-mail in compliance with key requirements reviewed. These officials were the Chairman and three Commissioners.
According to an FTC official, the Commissioners do not discuss substantive issues in e-mails to one another because of the possibility that such group e-mails could be construed as meetings subject to the Sunshine Act, which must be open to the public. FTC staff told us that the then-Chairman and two Commissioners delegated part or all of the responsibility for e-mail management; the remaining Commissioner personally managed e-mails. E-mails with record status were to be printed and filed in the commission’s paper-based recordkeeping systems. The FTC recordkeeping systems contained e-mail records of the four officials; of the 155 e-mail records inspected, all included the required distribution lists and transmission data. HUD: One of the four senior officials at HUD was managing e-mail in compliance with key requirements, but for the other three officials, e-mail records were not stored in appropriate recordkeeping systems. The e-mail records for the agency head were being managed in accordance with key requirements. According to HUD officials, management of e-mails for the agency head was delegated to staff: that is, the agency head’s e-mails were forwarded by his administrative assistant to the Office of the Executive Secretariat, where they were reviewed for record status and preserved as necessary in paper files. Staff from the Office of the Executive Secretariat flagged 10 e-mail records using the department’s correspondence tracking system, which were then retrieved from the paper-based recordkeeping system for inspection; all of these files included the required distribution lists and transmission data. The practices of the three other senior officials varied, except that for all three, they or their staff stated that the officials retained e-mail messages in the e-mail system. One senior official told us that he read his own e-mail and forwarded messages to staff to determine record status.
Another official’s staff stated that the staff was responsible for managing e-mail, but that the official would determine what should be printed and filed. The third official’s staff stated that the official did not review e-mails for record status but forwarded all program-related e-mails to staff, who would decide which e-mails should be included in the program files as records. The three senior officials had not received records management training, nor had several of their staff. HUD provided copies of e-mail messages from one senior official for review, but there was no evidence that the messages were stored in an appropriate recordkeeping system, and HUD officials stated that the provided e-mails were not records. They offered to provide similar nonrecord messages for the two other officials, but we declined to review them because the messages would not have addressed the question of whether the officials were storing e-mail records in appropriate recordkeeping systems. Thus, for these three officials the department did not provide examples of printed e-mail records that had been stored in appropriate recordkeeping files. According to department officials, this situation is explained by HUD’s practice of not using e-mail for business matters that would produce records: official business is conducted through paper processes and some electronic processes (such as Web-based systems), but rarely through e-mail. Nonetheless, although e-mail may rarely rise to the level of a record under paper-based processes, it does not follow that no e-mail records are ever created or received, as shown by the e-mail records maintained by the department’s Executive Secretariat and the Office of Healthy Homes and Lead Hazard Control.
The weakness in HUD’s policy regarding responsibility for determining which e-mails are records, combined with the lack of training in e-mail records management, reduces the department’s assurance that those e-mail messages that are records are being appropriately identified. Factors contributing to the inconsistent practices at the three agencies include inadequate training and oversight, as well as the difficulties of managing large volumes of e-mail with the tools and resources available, which in most cases do not include electronic recordkeeping systems. The regulations require agencies to develop adequate training to ensure that staff implement agency policies. All four agencies have issued guidance and developed training materials, and all state that they performed records management training. For example, according to DHS officials, all three senior officials and staff had received records management training as new employees. However, DHS and HUD had no documentation to indicate that employees had received such training, and our review of practices found instances in which staff did not understand their recordkeeping responsibilities for e-mail and stated that they had not been informed of them or received training. For example, three senior HUD officials had not received training on records management. Staff explained that formal briefings had last taken place at that time. Agencies must also periodically evaluate their records management programs, including periodic monitoring of staff determinations of the record status of materials. However, the three agencies have not fully developed and implemented oversight mechanisms, and do not determine the extent to which senior officials or other staff are following applicable requirements for e-mail records. According to DHS, it has initiated oversight and review activities, but these are not yet at the pilot stage because of other demands on records management staff, such as completion of records scheduling. 
EPA has developed an oversight plan and has pilot-tested a records management survey tool, but it has not yet begun agencywide reviews. It plans to fully deploy this tool when ECMS is fully implemented. HUD had not initiated oversight and review activities, according to officials, because of its practice of not using e-mail for matters that would necessitate the creation of official records. These officials stated that when the department’s modernized system for records and document management is in place, the department’s e-mail policies will be updated and appropriate oversight and review activities put in place. Unless agencies train staff adequately in records management and perform periodic evaluations or establish other controls to ensure that staff receive training and are carrying out their responsibilities, agencies have little assurance that e-mail records are appropriately identified, stored, and preserved. Further, keeping large numbers of record and nonrecord messages in e-mail systems potentially increases the time and effort needed to search for information in response to a business need or an outside inquiry, such as a Freedom of Information Act request. The volume of e-mail is also described as contributing to e-mail records management shortcomings. Agency officials and staff referred to the difficulty of managing large volumes of e-mail, suggesting that limited resources contributed to their inability to fully comply with records management and preservation policies. To help ensure that e-mail records are managed appropriately, it is helpful to incorporate recordkeeping into the process by which agency staff create and respond to mission-related e-mail. Because this process is electronic, the most straightforward approach is to perform e-mail recordkeeping electronically. All four agencies, however, still rely either entirely or primarily on paper for their recordkeeping systems, even for “born digital” records like e-mail.
Weaknesses in the processes in place at three of the four agencies reviewed raise questions about the appropriateness of paper recordkeeping processes for their e-mail records. Simply devoting more resources to paper records management may be neither efficient nor cost-effective, and the agencies have recognized that this is not a tenable long-term solution. EPA is beginning a transition to electronic recordkeeping, and HUD and DHS have plans focused on future enterprisewide transitions. Managing electronic documents, including e-mail, in electronic recordkeeping systems would potentially provide the efficiencies of automation and avoid the expenditure of resources on duplicative manual processes and storage. It is important to recognize, however, that moving to electronic recordkeeping has proved not to be a simple or easy process and that projects at large agencies have presented the most significant challenges. For projects of all sizes, agencies must balance the potential benefits of electronic recordkeeping against the costs of redesigning business processes and investing in technology. NARA has called the decision to move to electronic recordkeeping inevitable. Nonetheless, like other information technology investments, such a move requires careful planning in the context of the specific agency’s circumstances, in addition to well-managed implementation. NARA’s limited performance of its oversight responsibilities leaves it with little assurance that agencies are effectively managing records, including e-mail records, throughout their life cycle. NARA has an organizational preference for partnering with and supporting agencies’ records management activities, which is appropriate for many of its guidance and assistance responsibilities. However, this preference has led NARA to avoid performing oversight activities that it judged to be perceived negatively—the full-scale inspections/evaluations that it performed in previous years.
Although it has performed studies that provide it with insights into records management issues and it has taken action in response to the findings, it has not developed means to evaluate the state of federal records management programs and practices. As a result, NARA’s oversight of federal records management programs, including management of e-mail, has been limited. Further, NARA’s limited reporting on problems and solutions identified at individual agencies reduces its own ability to hold agencies accountable for addressing identified problems, as well as reducing the ability of agencies to learn from the experience of others. At the four agencies reviewed, e-mail records management policies were generally compliant with NARA regulations, with some exceptions. If policies do not fully conform to regulatory requirements, it increases the likelihood that those requirements will not be met in practice. Senior officials at three of the four agencies stored e-mail records in e-mail systems, rather than in recordkeeping systems, which is not in accordance with NARA’s regulations. Factors contributing to this noncompliance generally included insufficient training and oversight regarding recordkeeping practices, as well as the burden of handling large volumes of e-mail. Providing adequate training and oversight is a prerequisite for improvement, but real improvements in e-mail recordkeeping may require replacing the paper-based recordkeeping processes currently in place. Properly implemented, the transition to electronic recordkeeping of e-mail has the potential not only to reduce the burden of e-mail management but also to provide positive benefits in improving the usefulness and accessibility of records.
To better ensure that federal records, including those that originated as e-mail messages, are appropriately identified, retained, and archived, we recommend that the Archivist of the United States develop and implement an approach to oversight of agency records management programs that provides adequate assurance that agencies are following NARA guidance, including developing various types of inspections, surveys, and other means to evaluate the state of agency records and records management programs; developing criteria for using these means of assessment that ensure that they are regularly performed; and regularly reporting to the Congress and OMB on the findings, recommendations, and agency responses to its oversight activities, as required by law. In addition, we recommend that the Administrator of the Environmental Protection Agency revise the agency’s policies to ensure that they appropriately reflect NARA’s requirement on instructing staff on the management and preservation of e-mail messages sent or received from nongovernmental e-mail systems and develop and apply oversight practices, such as reviews and monitoring of records management training and practices, that are adequate to ensure that policies are effective and that staff are adequately trained and are implementing policies appropriately. We further recommend that the Chairman of the Federal Trade Commission revise the commission’s policies to ensure that they appropriately reflect NARA’s requirement to instruct staff on the management and preservation of e-mail messages sent or received from nongovernmental e-mail systems.
We further recommend that the Secretary of Homeland Security revise the department’s policies to ensure that they appropriately reflect NARA’s requirement to state that draft documents circulated on e-mail systems are potential federal records and develop and apply oversight practices, such as reviews and monitoring of records management training and practices, that are adequate to ensure that policies are effective and that staff are adequately trained and are implementing policies appropriately. Finally, we recommend that the Secretary of Housing and Urban Development revise the department’s policies to ensure that they appropriately reflect NARA’s requirements to ensure that staff is capable of identifying federal records and to state that e-mail systems must not be used to store recordkeeping copies of e-mail records (other than those exceptions provided in the regulation) and that e-mail system backup tapes should not be used for recordkeeping purposes, and develop and apply oversight practices, such as reviews and monitoring of records management training and practices, that are adequate to ensure that policies are effective and that staff are adequately trained and are implementing policies appropriately. We provided a draft of this report to NARA, DHS, EPA, FTC, and HUD for review and comment. Three agencies provided written comments (which are reproduced in apps. II to IV), and two provided comments via e-mail. All five agencies indicated that they were implementing or intended to implement our recommendations. Three of the five agencies generally agreed with our findings and recommendations. One agency provided information about its use of outside e-mail accounts, and one agency agreed to implement our recommendations but questioned aspects of our report. In written comments, the Archivist of the United States stated that NARA generally agreed with our draft report and would develop an action plan to implement our recommendation. 
The Archivist also provided technical comments, and we clarified our report to address each of them (see app. II). In e-mail comments, the Director, Records, Publications, and Mail Management at DHS, stated that the department agreed with our draft report and that it correctly represented the condition at the time of the review. The Director also said that future DHS records management policy documents would be revised to reflect our recommendations. In written comments, the Chief Information Officer of EPA stated that the agency accepted our two recommendations. In addition, she provided additional information on the EPA records management program. Finally, this official provided technical comments, which we addressed as appropriate; our assessment of these comments is contained in appendix III. In e-mail comments, an official from FTC’s Office of the General Counsel stated that FTC had instructed staff not to use outside e-mail accounts for official business, but it was nonetheless taking action to implement our recommendation by issuing a notice to staff regarding policies and procedures for e-mail records, which included a statement that work-related e-mails inadvertently sent or received from non-FTC accounts must be handled in accordance with the agency’s records preservation policies and procedures. Our draft recognized FTC’s instruction not to use outside accounts for official business, but also noted that FTC did not totally prohibit access to such accounts. Because access to outside accounts was available, FTC was required by NARA regulations to provide staff with guidance on the proper handling of e-mail records sent or received through such accounts. FTC also provided technical comments, which we incorporated as appropriate.
In written comments, HUD’s Acting Chief Information Officer stated that HUD planned to implement our recommendations, but also stated that our draft was inaccurate in three areas: The Acting CIO questioned the clarity of a figure we included to illustrate a decision process that could be used to decide if an e-mail message is a record. As noted in our draft, the illustration is provided as an example to illustrate the kinds of factors that may be considered when deciding whether an e-mail message is a record. The Acting CIO disagreed with our conclusions regarding HUD’s compliance with the requirements we reviewed, stating that the department’s records policies comply with all these requirements because they incorporate NARA’s regulations by reference. While our draft recognized the reference to NARA regulations in HUD’s policy, we concluded that such a reference was not adequate to comply with NARA regulations. As we stated in our draft, the reference in HUD’s policy is too general to make clear to HUD staff which practices are prohibited. In addition, HUD did not establish procedures to implement the requirements in question, as the regulations require. The Acting CIO questioned the accuracy of a statement on the number of senior officials whose files were reviewed. Our evidence shows that our statement was accurate, but we revised it to include further clarifying detail. We provide more detailed responses to these points in appendix IV. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies of this report to the Archivist of the United States, the Administrator of the Environmental Protection Agency, the Chairman of the Federal Trade Commission, the Secretary of Homeland Security, and the Secretary of Housing and Urban Development. Copies will be made available to others on request.
In addition, this report will be available at no charge on our Web site at www.gao.gov. If you have questions about this report, please contact me at (202) 512-6240 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to assess to what extent the National Archives and Records Administration (NARA) provides oversight of federal records management programs and practices, particularly with regard to e-mail; describe processes followed by selected federal agencies to manage e-mail records; assess to what extent the selected agencies’ e-mail records management policies comply with federal requirements; and assess compliance of selected senior officials with key e-mail recordkeeping requirements. To determine the extent to which NARA provides oversight of federal agencies for managing and preserving federal e-mail records, we analyzed applicable laws, regulations, and guidance; reviewed NARA’s oversight activities from 2003 to 2007, including its reports to OMB and the Congress on records management activities; reviewed NARA’s recent records management reports; and interviewed NARA officials. To address our other objectives, we judgmentally selected four agencies for review based upon several factors. First, we identified four general government functions from those functions that NARA identified in a 2004 resource allocation study as having records that had a direct and significant impact on the rights, welfare, and/or well-being of American citizens or foreign nationals: homeland security, health, economic development, and environmental management. (NARA classified these functions as high risk for rights/accountability.) Next, using NARA’s analysis, we compiled a list of the federal agencies and their components that performed those high-risk functions.
For each identified agency, we further classified it according to agency structure (a department with component bureaus or agencies, a department with an office structure, an independent agency, or an independent commission) and size (a large department with over 150,000 employees, a small department with less than 11,000 employees, a small independent agency with less than 1,100 employees, or a large independent agency with over 18,000 employees). We then judgmentally selected four agencies from the high-risk list that presented various combinations of structure and size. These were as follows:

Department of Homeland Security (U.S. Immigration and Customs Enforcement): rated by NARA as high on rights and accountability for records in the Homeland Security: Immigrant and Non-Citizen Services function; a department with component agencies; over 162,000 employees.

Department of Housing and Urban Development (Office of Healthy Homes and Lead Hazard Control): rated by NARA as high on rights and accountability for records in the Health: Illness Prevention function; a department with offices; less than 11,000 employees.

Environmental Protection Agency: rated by NARA as high on rights and accountability for records in the Environmental Management: Environmental Remediation function; an independent agency.

Federal Trade Commission: rated by NARA as high on rights and accountability for records in the Economic Development: Business, Trade, Trust, and Financial Oversight function; an independent commission.

At each of the four selected agencies, we assessed e-mail records management policies of the agency; described processes followed by agencies to manage e-mail records, specifically reviewing e-mail records management practices of a business unit associated with the high-risk function; and assessed compliance of four senior officials with key e-mail recordkeeping requirements.
We selected a business unit from each organization that (1) performed the particular line of business we identified in our agency selection process and (2) had permanent records that NARA rated high on risk to accountability and citizen rights. Table 5 identifies the business unit we selected at each agency. We also selected four senior officials at each agency. At DHS, EPA, and HUD, we selected the head of the agency, the head of the office responsible for policy, a randomly selected senior official, and the most senior agency official associated with the business unit we inspected. At FTC, we selected the Chairman and three Commissioners. The selected senior officials are listed in table 6. To describe the agencies’ e-mail records management practices, we analyzed documents, interviewed appropriate officials at the agency (including business unit officials and staff), and performed limited inspections of selected e-mail records. To assess each agency’s e-mail records management policies, we reviewed the agency’s published policy documents, including formal policies and operational manuals, as well as agency-provided responses to a data collection instrument on e-mail management, and compared their contents to the e-mail related requirements in NARA’s records management regulations. To assess compliance of senior officials with key e-mail recordkeeping requirements, we analyzed documents, used data collection instruments to gather information from the senior officials, their staffs, or other appropriate officials, and inspected selected e-mail records. We asked each agency to provide examples of senior officials’ e-mail messages stored as records to corroborate their responses. We then analyzed the information provided by the agencies and assessed it against the e-mail requirements in NARA’s regulations on federal records. 
We did not attempt to assess the extent to which the agencies’ staff correctly identified e-mail records or the extent to which the agencies’ records appropriately included e-mail. The four data collection instruments we used are briefly described in table 7. We performed our work at agency offices in the Washington, D.C., metropolitan area. We conducted this performance audit from April 2007 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The following are GAO’s comments on EPA’s written response to our draft report.

1. We clarified our discussion of this topic.
2. We clarified our discussion of this topic.
3. We removed the reference to the 180-day limit.
4. In our discussion of the exchange between EPA and NARA on the incident involving possible loss of e-mail records, we included information on EPA’s plan to promulgate a policy on the use of non-EPA e-mail systems.
5. See comment 4. EPA plans to promulgate a policy prohibiting the use of non-EPA e-mail systems for EPA business.
6. We updated our discussion of this topic to reflect NARA’s response.
7. We do not use EPA’s terminology because we do not find “primary” and “secondary” to be useful descriptions. However, we revised our discussion to clarify the references.
8. See note 7.
9. If EPA implements the oversight mechanism we recommend, it will help ensure that e-mail records are properly identified and protected.
10. We updated our discussion to indicate when EPA plans to deploy its survey tool.

The following are GAO’s comments on HUD’s written response dated May 28, 2008, to our draft report.

1.
As noted in our report, the described decision process is an example of one that could be used to determine whether an e-mail message is a record. We did not state that the process is a requirement that must be followed by any particular agency.
2. See comment 1.
3. See comment 5.
4. See comment 5.
5. Our draft noted that HUD incorporated Parts 1220, 1222, and 1228 of NARA’s regulations by reference. However, the policy requirements at issue are contained in Part 1234 of NARA’s regulations. In its comments, HUD argues that the Parts it cites incorporate Part 1234 by reference. We do not agree with HUD that this type of indirect reference is a sufficient or effective way of informing HUD staff of their e-mail recordkeeping responsibilities as well as of prohibited practices. In addition, HUD did not fully implement the applicable e-mail management requirements because it did not establish appropriate procedures to protect e-mail records.
6. See comment 5.
7. The text suggested by HUD is incorrect in that we requested copies of e-mail records from all three selected officials. We revised our report to provide additional detail on this.
8. We agree that enhancing HUD’s policies on e-mail records as we recommend could increase their usability by all HUD officials and staff; among other things, this could clarify for HUD staff which practices are prohibited.
9. We agree that not every e-mail is an official record, and we emphasized this point in our report. However, we also emphasized that the content of a communication, not its form, determines its record status.

In addition to the individual named above, Mirko Dolak and James R. Sweetman, Jr. (Assistant Directors); Monica Anatalio; Timothy Case; Barbara Collier; Pamlutricia Greenleaf; Jennifer Franks; Tarunkant N. Mithani; Sushmita Srikanth; and Jennifer Stavros-Turner made key contributions to this report.
Federal agencies are increasingly using electronic mail (e-mail) for essential communication. In doing so, they are potentially creating messages that have the status of federal records, which must be managed and preserved in accordance with the Federal Records Act. Under the act, both the National Archives and Records Administration (NARA) and federal agencies have responsibilities for managing federal records, including e-mail records. In view of the importance that e-mail plays in documenting government activities, GAO was asked, among other things, to review the extent to which NARA provides oversight of federal records management, describe selected agencies' processes for managing e-mail records, and assess these agencies' e-mail policies and key practices. To do so, GAO examined NARA guidance, regulations, and oversight activities, as well as e-mail policies at four agencies (of contrasting sizes and structures) and the practices of selected officials. Although NARA has responsibilities for oversight of agencies' records and records management programs and practices, including conducting inspections or surveys, performing studies, and reporting results to the Congress and the Office of Management and Budget (OMB), in recent years NARA's oversight activities have been primarily limited to performing studies. NARA has conducted no inspections of agency records management programs since 2000, because it uses inspections only to address cases of the highest risk, and no recent cases have met its criteria. In addition, NARA has not consistently reported details on records management problems or recommended practices that were discovered as a result of its studies. Without more comprehensive evaluations of agency records management, NARA has limited assurance that agencies are appropriately managing the records in their custody and that important records are not lost. 
The four agencies reviewed generally managed e-mail records through paper-based processes, rather than using electronic recordkeeping. A transition to electronic recordkeeping was under way at one of the four agencies, and two had long-term plans to use electronic recordkeeping. (The fourth agency had no current plans to make such a transition.) Each of the business units that GAO reviewed (one at each agency) maintained "case" files to fulfill its mission and used these for recordkeeping. The practice at the units was to include e-mail printouts in the case files if the e-mail contained information necessary to document the case--that is, record material. These printouts included transmission data and distribution lists, as required. All four agencies had e-mail records management policies that addressed, with a few exceptions, the requirements in NARA's regulations. However, the practices of senior officials at those agencies did not always conform to requirements. Of the 15 senior officials whose practices were reviewed, the e-mail records for 7 (including all 4 at one agency) were managed in compliance with requirements. (One additional official was selected for review but did not use e-mail.) The other 8 officials generally kept e-mail messages, record or nonrecord, in e-mail systems that were not recordkeeping systems. (Among other things, recordkeeping systems allow related records to be categorized according to their business purposes.) If e-mail records are not kept in recordkeeping systems, they may be harder to find and use, as well as being at increased risk of loss from inadvertent or automatic deletion. Factors contributing to noncompliance included insufficient training and oversight as well as the difficulties of managing large volumes of e-mail. 
Without periodic evaluations of recordkeeping practices or other controls to ensure that staff are trained and carry out their responsibilities, agencies have little assurance that e-mail records are properly identified, stored, and preserved.
In the 124 years since the first national park, Yellowstone, was created, the national park system has grown to include 369 park units. In all, these units cover more than 80 million acres of land, an area larger than the state of Colorado. The mix of park units is highly diverse and includes more than 20 types; these range from natural resource preserves encompassing vast tracts of wilderness to historic sites and buildings in large urban areas. The Park Service’s mission is twofold: to provide for the public’s enjoyment of these parks and to protect the resources so that they will remain unimpaired for the enjoyment of future generations. The Park Service’s 1980 survey of threats found not only that the parks’ resources were being harmed but also that improvements were needed in determining what cultural and natural resources existed in each park, what their condition was, and how and to what extent they were being threatened. In response, the Park Service called for the development of resource management plans to identify the condition of each park’s resources and the problems with managing them, including significant threats. Three times since 1987, we have reported that the Park Service has made limited progress in meeting the information and monitoring needs it had identified in 1980. Our findings included incomplete, out-of-date, or missing resource management plans and an incomplete inventory of threats, their sources, or mitigating actions. In 1994, after examining the external threats to the parks, we recommended that the Park Service revise its resource management planning system to identify, inventory, categorize, and assign priorities to these threats; describe the actions that could be taken to mitigate them; and monitor the status of the actions that had been taken. 
Such an inventory has not been implemented, according to Park Service headquarters officials, because of funding and hiring freezes that have prevented the completion of needed changes to the planning system’s guidelines and software. In commenting on a draft of this report, the Park Service said that implementing this recommendation is no longer appropriate. The Park Service’s comments and our evaluation are presented in the agency comments section of this report. For internal, as for external threats, the Park Service has limited systemwide information. It does not have a national inventory of internal threats that integrates information it already has, and many of its individual units do not have a readily available database on the extent and severity of the threats arising within their borders. However, in commenting on this report, Park Service officials told us that headquarters has the systemwide information it needs to make decisions and that many decisions are made at the park level, where the superintendents decide what information is needed. They added that rather than developing a database of threats to resources, they need better data on the condition of resources to allow park managers to identify those that are the most threatened. According to headquarters officials, the Park Service has developed systems focused on particular categories of resources. Park managers and headquarters staff use these systems to identify, track, or assess problems, resource conditions, or threats. An overview of these systems follows: The Museum Collections Preservation and Protection Program requires parks to complete a checklist every 4 years on the deficiencies in the preservation, protection, and documentation of their cultural and natural resource collections. An automated system is being developed to collect these data. The data are used to make funding decisions. 
Another system for monitoring the condition of a cultural resource is the List of Classified Structures, which inventories and gives general information on historic structures in the parks. Headquarters officials said that the list is not complete because of insufficient funding. Headquarters rangers report that automated systems are in place to track illegal activities in parks, such as looting, poaching, and vandalism, that affect cultural and natural resources. Headquarters officials report that the inventory and information on the condition of archeological resources, ethnographic resources, and cultural landscapes are poor at present but that there are plans to develop improved systems, if staffing and funding allow. Although the Park Service’s guidance requires the parks to develop resource management plans, it does not require the plans to include specific information on the internal and external threats facing the parks. Such information would assist managers of the national park system in identifying the major threats facing parks on a systemwide basis, and it would give the managers of individual parks an objective basis for management decisions. At the eight parks studied, the managers identified 127 internal threats that directly affected natural and cultural resources. Most of these threats fell into one of five broad categories: the impact of private inholdings or commercial development within the parks, the results of encroachment by nonnative wildlife or plants, the damage caused by illegal activities, the adverse effects of normal visits to the parks, and the unintended adverse effects of the agency’s or park managers’ actions (see fig. 1). The majority of the threats affected natural resources, such as plants and wildlife, while the remainder threatened cultural resources, such as artifacts, historic sites, or historic buildings. (See app. I for a summary of the threats in each category at each of the eight parks.) 
Overall, the park managers we visited said that the most serious threats facing the parks were shortages in staffing, funding, and resource knowledge. The managers identified 48 additional threats in these categories. We classified these as indirect threats to cultural and natural resources because, according to the managers, the shortages in these areas were responsible for many of the conditions that directly threaten park resources. (See app. II for a list of these threats at the eight parks.) In addition, the managers identified other threats in such categories as laws or regulations, agency policies, and park boundaries. After reviewing the information about these threats provided by park managers in documents and interviews, we decided that the threats were indirect and should not be listed among the direct threats. In gathering data for each park, we also identified threats to services for visitors. Our analysis showed that many of these threats also appeared as threats to cultural and natural resources. We did not compile a list of threats to services for visitors because this report focuses on cultural and natural resources. Private inholdings and commercial development within park boundaries accounted for the largest number of specific threats. The managers of seven of the eight parks we reviewed identified at least one threat in this category. For example, at Olympic National Park in Washington State, the managers said that the homes situated on inholdings along two of the park’s largest lakes threatened groundwater systems and the lake’s water quality. At Lake Meredith National Recreation Area in Texas, the managers were concerned about the impact of the frequent repair and production problems at about 170 active oil and gas sites (see fig. 2) and the development of additional sites. At the Minute Man National Historical Park, the long, linear park is bisected by roads serving approximately 20,000 cars per day. 
The traffic affects cultural resources, such as nearby historic structures; natural resources, such as populations of small terrestrial vertebrates (e.g., the spotted salamander and spotted turtle); and visitors’ enjoyment of the park (see fig. 3). Encroachment by nonnative wildlife and plants—such as mountain goats, trout introduced into parks’ lakes and streams, and nonnative grasses and other plants—accounted for the second largest number of reported threats. The managers at all of the parks we reviewed identified at least one threat in this category. At Arches National Park in Utah, for example, the managers cited the invasion by a plant called tamarisk in some riverbanks and natural spring areas. In its prime growing season, a mature tamarisk plant consumes about 200 gallons of water a day and chokes out native vegetation. At Olympic National Park, nonnative mountain goats introduced decades ago have caused significant damage to the park’s native vegetation. The goats’ activity eliminated or threatened the survival of many rare plant species, including some found nowhere else. Controlling the goat population reduced the damage over 5 years, as the contrast between figures 4a and 4b shows. Illegal activities, such as poaching, constituted the third main category of threats. The managers at the eight parks reported that such activities threatened resources. For example, at Crater Lake National Park in Oregon, the managers believe that poaching is a serious threat to the park’s wildlife. Species known to be taken include elk, deer, and black bear. At both Crater Lake and Olympic national parks, mushrooms are harvested illegally, according to the managers. The commercial sale of mushrooms has increased significantly, according to a park manager. He expressed concern that this multimillion-dollar, largely unregulated industry could damage forest ecosystems through extensive raking or other disruption of the natural ground cover to harvest mushrooms. 
Similar concern was expressed about the illegal harvesting of other plant species, such as moss and small berry shrubs called salal (see fig. 5). About 30 percent of the internal threats identified by park managers fell into two categories—the adverse effects of (1) people’s visits to the parks and (2) the Park Service’s own management actions. The number of recreational visits to the Park Service’s 369 units rose by about 5 percent over the past 5 years to about 270 million visits in 1995. Park managers cited the effects of visitation, such as traffic congestion, the deterioration of vegetation off established trails, and trail erosion. The threats created unintentionally by the Park Service’s own management decisions at the national or the park level included poor coordination among park operations, policies calling for the suppression of naturally caused fires that do not threaten human life or property, and changes in funding or funding priorities that do not allow certain internal threats to parks’ resources to be addressed. For example, at Gettysburg National Military Park, none of the park’s 105 historic buildings have internal fire suppression systems or access to external hydrants because of higher-priority funding needs. Park managers estimated that about 82 percent of the direct threats they identified in the eight parks we reviewed have caused more than minor damage to the parks’ resources. We found evidence of such damage at each of the eight parks. According to the managers, permanent damage to cultural resources has occurred, for example, at Indiana Dunes National Lakeshore in Indiana and at Arches National Park in Utah. Such damage has included looting at archeological sites, bullets fired at historic rock art, the deterioration of historic structures, and vandalism at historic cemeteries. (See figs. 6 and 7.) 
At both of these parks, the managers also cited damage to natural resources, including damage to vegetation and highly fragile desert soil from visitors venturing off established trails and damage to native plants from the illegal use of off-road vehicles. At Gettysburg National Military Park, the damage included the deterioration of historic structures and cultural landscapes, looting of Civil War era archeological sites, destruction of native plants, and deterioration of park documents estimated to be about 100 years old, which contain information on the early administrative history of the park. Figure 8 shows these documents, which are improperly stored in the park historian’s office. Nearly one-fourth of the identified direct threats had caused irreversible damage, according to park managers (see fig. 9). Slightly more than one-fourth of the threats had caused extensive but repairable damage. About half of the threats had caused less extensive damage. The damage to cultural resources was more likely to be permanent than the damage to natural resources, according to park managers (see fig. 10). Over 25 percent of the threats to cultural resources had caused irreversible damage, whereas 20 percent of the threats to natural resources had produced permanent effects. A Park Service manager explained that cultural resources—such as rock art, prehistoric sites and structures, or other historic properties—are more susceptible to permanent damage than natural resources because they are nonrenewable. Natural resources, such as native wildlife, can in some cases be reintroduced in an area where they have been destroyed. Generally, park managers said they based their judgments about the severity of damage on observation and judgment rather than on scientific study or research. In most cases, scientific information about the extent of the damage was not available. 
For some types of damage, such as the defacement of archeological sites, observation and judgment may provide ample information to substantiate the extent of the damage. But observation alone does not usually provide enough information to substantiate the damage from an internal threat. Scientific research will generally provide more concrete evidence identifying the number and types of threats, the types and relative severity of damage, and any trends in the severity of the threat. Scientific research also generally provides a more reliable guide for mitigating threats. In their comments on this report, Park Service officials agreed, stating that there is a need for scientific inventorying and monitoring of resource conditions to help park managers identify the resources most threatened. At all eight parks, internal threats are more of a problem than they were 10 years ago, according to the park managers. They believed that about 61 percent of the threats had worsened during the past decade, 27 percent were about the same, and only 11 percent had grown less severe (see fig. 11). At seven of the eight parks, the managers emphasized that one of the trends that concerned them most was the increase in visitation. They said the increasing numbers of visitors, combined with the increased concentration of visitors in certain areas of many parks, had resulted in increased off-trail hiking, severe wear at campgrounds, and more law enforcement problems. At Arches National Park, for example, where visitation has increased more than 130 percent since 1985, greater wear and tear poses particular problems for the cryptobiotic soil. This soil may take as long as 250 years to recover after being trampled by hikers straying off established trails, according to park managers. 
Another increasing threat noted by managers from parks having large natural areas (such as Crater Lake, Olympic, and Lake Meredith) is the possibility that undergrowth, which has built up under the Park Service’s protection, may cause more serious fires. According to the managers, the Park Service’s long-standing policy of suppressing all park fires—rather than allowing naturally occurring fires to burn—has been the cause of this threat. Although the park managers believed that most threats were increasing in severity, they acknowledged that a lack of specific information hindered their ability to assess trends reliably. The lack of baseline data on resource conditions is a common and significant problem limiting park managers’ ability to document and assess trends. They said that such data are needed to monitor trends in resource conditions as well as threats to those resources. Park managers said that they believed some action had been taken in response to about 82 percent of the direct threats identified (see fig. 12). However, the Park Service does not monitor the parks’ progress in mitigating internal threats. Various actions had been taken, but many were limited to studying what might be done. Only two actions to mitigate an identified threat have been completed in the eight parks, according to the managers. However, they noted that in many cases, steps have been taken toward mitigation, but completing these steps was often hampered by insufficient funding and staffing. At Arches National Park, actions ranged from taking steps to remediate some threats to studying how to deal with others. To reduce erosion and other damage to sensitive soils, park managers installed rails and ropes along some hiking trails and erected signs along others explaining what damage would result from off-trail walking. Managers are also studying ways to establish a “carrying capacity” for some of the frequently visited attractions. 
This initiative by the Park Service stemmed from visitors’ comments about the need to preserve the relative solitude at the Delicate Arch (see fig. 13). According to park managers, about 600 visitors each day take the 1-1/2-mile trail to reach the arch. At Lake Meredith, to reduce the impact of vandalism, park managers are now replacing wooden picnic tables and benches with solid plastic ones. Although initially more expensive, the plastic ones last longer and cost less over time because they are more resistant to fire or other forms of vandalism. Lake Meredith has also closed certain areas for 9 months of the year to minimize the looting of archeological sites. At Saguaro National Park, the park managers closed many trails passing through archeological sites and revoked the permit of two horseback tour operators for refusing to keep horses on designated trails. The natural and cultural resources of our national parks are being threatened not only by sources external to the parks but also by activities originating within the parks’ borders. Without systemwide data on these threats to the parks’ resources, the Park Service is not fully equipped to meet its mission of preserving and protecting these resources. In times of austere budgets and multibillion-dollar needs, it is critical for the agency to have this information in order to identify and inventory the threats and set priorities for mitigating them so that the greatest threats can be addressed first. In our 1994 report on external threats to the parks’ resources, we recommended that the National Park Service revise its resource management planning system to (1) identify the number, types, and sources of the external threats; establish an inventory of threats; and set priorities for mitigating the threats; (2) prepare a project statement for each external threat describing the actions that can be taken to mitigate it; and (3) monitor the status of actions and revise them as needed. 
If the Park Service fully implements the spirit of our 1994 recommendations, it should improve its management of the parks’ internal threats. We therefore encourage the Park Service to complete this work. Not until this effort is completed will the Park Service be able to systematically identify, mitigate, and monitor internal threats to the parks’ resources. We provided a draft of this report to the Department of the Interior for its review and comment. We met with Park Service officials—including the Associate Director for Budget and Administration, the Deputy Associate Director for Natural Resources Stewardship and Science, and the Chief Archeologist—to obtain their comments. The officials generally agreed with the factual content of the report and provided several technical corrections to it, which have been incorporated as appropriate. The Park Service stated that it would not implement the recommendations cited from our 1994 report. However, we continue to believe that this information, or data similar to it, is necessary on a systemwide level to meet the Park Service’s mission of preserving and protecting resources. Park Service officials stated that obtaining an inventory of and information on the condition of the parks’ resources was a greater priority for the agency than tracking the number and types of threats to the parks’ resources, as our previous report recommended. They said that headquarters has the necessary systemwide information to make decisions but added that better data on the condition of resources are needed to allow the park managers to better identify the most threatened resources. They stated that the Park Service is trying to develop a better inventory and monitor the condition of resources as staffing and funding allow. Park Service officials also cited a number of reasons why implementing our past recommendations to improve the resource management planning system’s information on threats is no longer appropriate. 
Their reasons included the implementation of the Government Performance and Results Act, which requires a new mechanism for setting priorities and evaluating progress; the Park Service-wide budget database that is used to allocate funds to the parks; the existing databases that provide information on resources and workload; and the decentralization of the Park Service, which delegates authority to the park superintendents to determine what information is needed to manage their parks. We continue to believe that information on threats to resources, gathered on a systemwide basis, would be helpful to set priorities so that the greatest threats can be addressed first. The Park Service’s guidelines for resource management plans emphasize the need to know about the condition of resources as well as threats to their preservation. This knowledge includes the nature, severity, and sources of the major threats to the parks’ resources. We believe that knowing more about both internal and external threats is necessary for any park having significant cultural and natural resources and is important in any systemwide planning or allocation of funds to investigate or mitigate such threats. We agree that the number and types of threats are not the only information needed for decision-making and have added statements to the report to describe the Park Service’s efforts to gather data on the condition of resources. In addition, the Park Service commented that a mere count and compilation of threats to resources would not be useful. However, our suggestion is intended to go beyond a surface-level count and to use the resource management plan (or other vehicle) to delineate the types, sources, priorities, and mitigation actions needed to address the threats on a national basis. We believe that the Park Service’s comment that it needs a more complete resource inventory and more complete data on resources’ condition is consistent with our suggestion. 
As agreed with your office, we conducted case studies of eight parks because we had determined at Park Service headquarters that no database of internal threats existed centrally or at individual parks. At each park, we interviewed the managers, asking them to identify the types of internal threats to the park’s natural and cultural resources and indicate how well these threats were documented. We also asked the managers to assess the extent of the damage caused by the threats, identify trends in the threats, and indicate what actions were being taken to mitigate the threats. Whenever possible, we obtained copies of any studies or other documentation on which their answers were based. Given an open-ended opportunity to identify threats, a number of managers listed limitations on funding, staffing, and resource knowledge among the top threats to their parks. For example, the park managers we visited indicated that insufficient funds for annual personnel cost increases diminished their ability to address threats to resources. Although we did not minimize the importance of funding and staffing limitations in developing this report, we did not consider them as direct threats to the resources described in appendix I. These indirect threats are listed in appendix II. We performed our review from August 1995 through July 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and Members of Congress; the Secretary of the Interior; the Director, National Park Service; and other interested parties. We will make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix III. On the basis of our analysis of the data, we determined that the following threats affect cultural and natural resources directly. 
Threats in the three other categories of staffing, funding, and resource knowledge are listed for the eight parks in appendix II. In addition to the direct threats to natural and cultural resources listed in appendix I, park managers of these resources also cited the following indirect threats that, in their opinion, significantly affected their ability to identify, assess, and mitigate direct threats to resources. Brent L. Hutchison Paul E. Staley, Jr. Stanley G. Stenersen
Pursuant to a congressional request, GAO reviewed internal threats to the national parks' resources, focusing on the: (1) National Park Service's (NPS) information on the number and types of internal threats; (2) damage these threats have caused; (3) change in the severity of these threats over the past decade; and (4) NPS actions to mitigate these threats. GAO found that: (1) because NPS does not have a national inventory of internal threats to the park system, it is not fully equipped to meet its mission of preserving and protecting park resources; (2) park managers at the eight parks studied have identified 127 internal threats to their parks' natural and cultural resources; (3) most of these threats are due to the impact of private inholdings or commercial development within the parks, the impact of nonnative wildlife or plants, damage caused by illegal activities, increased visitation, and unintended adverse effects of management actions; (4) park managers believe the parks' most serious threats are caused by shortages in staffing, funding, and resource knowledge; (5) 82 percent of the internal threats have already caused more than minor damage, and cultural or archeological resources have suffered more permanent damage than natural resources in many parks; (6) 61 percent of internal threats, particularly those from increased visitation and serious fires, have worsened over the past decade, 27 percent have stayed about the same, and 11 percent have diminished; (7) park managers lack baseline data needed to judge trends in the severity of internal threats; and (8) some parks are closing trails to reduce erosion, installing more rugged equipment to reduce vandalism, revoking uncooperative operators' permits, and posting signs to inform visitors of the damage from their inappropriate activities.
As the primary federal agency responsible for providing security services to about 9,500 federal facilities (a majority of which are GSA-held or leased), FPS, among other things, enforces federal laws and regulations aimed at protecting federal properties and the persons on such property and investigates offenses against these buildings and persons. In conducting its mission, the agency provides two types of activities: (1) physical security and (2) law enforcement activities. As part of its physical security activities, the FPS workforce conducts facility security assessments, which consist of identifying and assessing threats to and vulnerabilities of a facility, as well as identifying countermeasures (e.g., security equipment) best suited to secure the facility. The agency’s law enforcement activities include proactively patrolling facilities, responding to incidents, and conducting criminal investigations, among other things. (See app. II for a list of activities FPS performs). To carry out these activities in fiscal year 2015, FPS maintained a workforce of 1,371 full-time equivalents (FTEs) at its headquarters and in its 11 geographic regions (see fig. 1 below). FPS’s Plan and staffing model focus on this federal workforce. This workforce consists of 1,007 law enforcement staff (inspectors, criminal investigators, and special agents) performing physical security and law enforcement activities and 364 non-law enforcement staff providing mission support. FPS also manages and oversees approximately 13,500 PSOs (i.e., contract guards) posted at federal facilities. These PSOs have responsibility for controlling access to facilities; conducting screening at access points to prevent the entry of prohibited items, such as weapons and explosives; responding to emergency situations involving facility safety and security; and performing other duties. 
FPS funds its operations by collecting security fees from federal agencies that use FPS for facility protection. FPS collects a basic security fee of $0.74 per square foot and an oversight fee to fund FPS for direct and indirect costs associated with providing building- or agency-specific security. The oversight fee is an additional 6 percent of the costs for providing security services to a building or agency. FPS anticipates collecting about $336.5 million in operating revenues from security fees charged to federal agencies in fiscal year 2016. Over the years, we have reported on FPS’s workforce-planning efforts and challenges. For instance, in July 2009, we found that FPS faced challenges with hiring and training new staff and did not have a strategic human capital plan to guide its workforce-planning efforts. We recommended that FPS take a strategic approach to managing its staffing resources, including developing a human capital plan to better manage its workforce needs. In October 2012, FPS implemented our recommendation and issued an Interim Strategic Human Capital Plan. In June 2010, we also identified several potential challenges that FPS may face with obtaining the staffing needed to adequately protect federal facilities, including funding challenges, difficulties in hiring inspectors, and training backlogs. Since 2011, Congress has required FPS to submit a strategic human capital plan that aligns fee collections to workforce requirements based on current threat assessments. To meet this requirement, FPS tasked DHS’s Homeland Security Systems Engineering and Development Institute (SEDI) to analyze FPS’s current organizational structure, position allocations, and assignments of personnel to help prepare a strategic human capital plan. In April 2012, SEDI developed a staffing model to estimate the size and composition of the workforce FPS needs to meet its facility protection mission. 
SEDI also helped develop FPS’s Plan, which identifies human capital strategies FPS intends to implement. The Plan states that the strategies will help the agency hire and retain people with the skills needed to carry out its mission. Since 2012, FPS has updated the staffing model and the Plan several times (see fig. 2). FPS last updated the staffing model in August 2013 and the Plan in February 2015. As FPS’s parent organization, NPPD has responsibility for managing and overseeing FPS’s human capital efforts. For example, NPPD has responsibility for recruiting and hiring FPS employees and providing guidance on other human capital services, such as training. In August 2015, NPPD proposed restructuring its organization to improve its management and operations. In December 2015, NPPD finalized a Human Capital Strategic Plan for fiscal years 2016 through 2020, which identified the overarching human capital goals and objectives for all NPPD component agencies, including FPS. In January 2016, NPPD also finalized a complementary operational plan that provides a road map of the actions NPPD plans to take in fiscal year 2016 to meet the goals established in its human capital plan. FPS’s Plan and related human capital planning efforts generally align with four of the five key principles for strategic workforce planning that we identified. Specifically, we found that FPS developed its Plan consistent with the first four principles described in table 1 below. FPS’s efforts to develop the Plan and take the actions described in table 1 show a marked improvement from 2009 when we found that FPS did not have a human capital plan. However, in this review we found that FPS has not fully developed its human capital performance measures, which is the fifth key principle for strategic workforce planning.
In our 2003 report on key principles for strategic workforce planning, we found that federal agencies’ use of all five key principles can contribute to effective strategic workforce planning. FPS officials told us that they intend to further develop performance measures in the future. They also told us that they are in the process of implementing the Plan and continue to review and refine strategies described in the Plan to meet the agency’s needs as they change. What is strategic workforce planning? Strategic workforce planning, also called human capital planning, is a systematic process that focuses on developing long-term strategies for acquiring, developing, and retaining an organization’s workforce to meet its mission. Agencies may outline strategies—the programs, policies, and processes that agencies use to build and sustain their workforces—in a human capital plan or through other human capital planning efforts. FPS sought input from key stakeholders when it developed and implemented the Plan and its human capital strategies. As noted in our 2003 report, involving top management can help set the overall direction of the agency’s workforce planning, and soliciting employee input on workforce planning can help an agency better understand human capital needs and identify ways to improve human capital strategies. We found that FPS solicited input from its management and employees, NPPD, and external stakeholders. This input helped inform the Plan’s contents. FPS management and employees: FPS officials provided several examples of how FPS senior executives and employees provided input into the Plan. For example, the officials said that the senior executives set the strategic direction for FPS’s Plan and related efforts. Furthermore, the senior executives meet regularly to discuss broad human capital issues, such as actions the agency can take to ensure that its workforce can address future needs.
In addition, FPS officials said that they administered surveys, held working groups, and conducted interviews with their employees to identify specific human capital issues, which helped shape the contents of the Plan. For example, FPS established working groups to help develop employee performance work plans. As discussed below, these performance work plans identify critical core competencies and associated performance standards for each position. These working groups consisted of regional directors, area commanders, and other regional staff. Furthermore, the Plan states that FPS intends to continue to involve employees and obtain their feedback as the agency implements the Plan. NPPD: NPPD is responsible for providing human capital services (e.g., recruiting, hiring) on behalf of FPS. FPS obtained input from NPPD when developing and implementing the Plan and its related efforts. For example, FPS and NPPD officials explained that NPPD officials participated in various working groups to develop strategies identified in the Plan. External stakeholders: While FPS officials told us they did not directly solicit input from any external stakeholder on the Plan, officials said they solicit feedback from these stakeholders on FPS’s services, which they used to inform the contents of the Plan. FPS interacts with a number of external stakeholders. For example, FPS is responsible for protecting all GSA-held or leased facilities, making GSA a key customer and important stakeholder. According to FPS officials, FPS interacts with GSA to ensure a coordinated effort for the protection of federal facilities. According to officials, FPS also works closely with entities such as the U.S. Marshals Service and Administrative Office of the U.S. Courts to provide coordinated protection at U.S. courthouses and the Social Security Administration to understand the threat environment and additional protection measures that can mitigate incidents. 
FPS officials told us that they solicit stakeholder feedback through continuous discussions and annual surveys on the services FPS provides. FPS has identified and standardized skills and competencies its staff need to carry out its activities and continues to work on developing this area. According to our 2003 report, determining an agency’s critical skills and competencies is essential to ensure that employees have the necessary skill sets to meet the agency’s needs. FPS has identified needed skills and competencies in the documents described below. FPS continues to finalize some of these documents. Performance work plans: FPS officials told us that in fiscal year 2015, they standardized performance work plans for the majority of its mission critical positions, such as inspectors and area commanders. These plans identify critical core competencies and associated performance standards for each position. For example, the core competencies for an inspector include skills in customer service (e.g., working with the GSA to understand its needs), representing the agency, and teamwork and cooperation, as well as technical proficiency. FPS uses the competencies in these work plans to systematically assess employees’ performance. Officials told us that they have efforts in progress to complete the performance work plans for other mission-critical and support positions. Career and professional development guide: FPS plans to complete a career and professional development guide, which it expects to finalize in 2016, describing position-specific competencies, skills, and tasks. FPS officials explained that the guide aims to help FPS direct and track employee training—including required annual training—and professional development to improve employee performance. 
Position descriptions: FPS developed position descriptions for its employees to clarify the role of specific positions, by listing the major job duties, skills, and other requirements (e.g., security clearance) needed for the position. For example, according to the position description for the criminal investigator position, the duties of a senior-level investigator include conducting complex investigations that require extensive coordination and planning. According to the Plan, FPS intends to update the position descriptions. FPS officials told us that they regularly work with NPPD to update the position descriptions so that they reflect changes to position responsibilities and requirements. How did the Federal Protective Service (FPS) organize the human capital strategies described in its Strategic Human Capital Plan? FPS’s Strategic Human Capital Plan (the Plan) organized its human capital strategies into five broad categories, which are the same categories described in the Office of Personnel Management’s Human Capital Assessment and Accountability Framework. Each category contains several strategies, some of which are described below. FPS’s Plan describes five categories, each of which contains several human capital strategies (i.e., programs, policies, and processes) (see sidebar). According to the Plan, the strategies will help FPS build and sustain a workforce that can carry out its mission. For example, under the talent management category and related recruiting and hiring strategies, the Plan states that FPS intended to hire 109 employees in fiscal year 2015. The Plan also states that in addition to conducting recruitment fairs, FPS intends to make its recruiting strategy more cost-effective by leveraging internal and external partnerships to attract talent.
Under the training strategy, the Plan describes a training program, which identifies courses that aim to equip FPS staff to assess, mitigate, and respond to current and emerging threats to federal facilities. In addition, FPS officials provided examples of how they have tailored their strategies to address identified gaps and needs. For instance, FPS adjusted its training program based on the results of a preliminary assessment that identified gaps in training, according to FPS officials. Specifically, FPS developed leadership, physical security, contracting officer representative, and security technology courses to fill identified gaps in employee training. FPS officials told us that once they finalize and implement the performance work plans and career and professional development guide, they plan to determine whether they have agency- wide gaps in skills and competencies and further refine the training program to address these gaps. Furthermore, as discussed in greater detail later in this report, FPS considered adapting some of its human capital planning decisions based on gaps and needs identified from the staffing model and other management tools. As our 2003 report found, developing strategies tailored to address gaps, human capital needs, and critical skills and competencies that need attention helps create a road map for an agency to move from the current to the future workforce needed to achieve program goals. The Plan identified some educational and administrative actions to build the capability needed to support its human capital strategies. We noted in our 2003 report that such actions can help ensure that the strategies are effectively, consistently, and fairly implemented. For example, the Plan states that FPS intends to educate employees on new human capital strategies. 
The Plan also states that FPS intends to develop tools and issue guidelines to help managers administer various strategies, such as a toolkit for managers with tips and guidance to help them retain staff. Furthermore, FPS and regional officials also use administrative authorities that can help them carry out hiring strategies identified in the Plan. For example, Veterans Recruitment Appointment (VRA) authority allows federal agencies, such as FPS, to make excepted appointments of eligible veterans to specified positions without competition. According to FPS officials, these hiring authorities help ensure that FPS can leverage various candidate pools to recruit and retain qualified personnel. An FPS regional director explained that the VRA hiring authority allowed him to fill the positions he needed in his region. NPPD’s planned reorganization aims to improve the administration of human capital efforts that support FPS. Specifically, NPPD and FPS officials explained that as a part of its reorganization, NPPD intends to place its human capital staff in FPS’s headquarters office. According to these officials, collocating NPPD human capital and FPS staff aims to improve the administration of recruiting and hiring because it will allow NPPD to more effectively and quickly meet FPS’s human capital needs and priorities in these areas. FPS has taken initial steps to develop performance measures for some, but not all, strategies discussed in the Plan. Specifically, the Plan identified performance measures for strategies that fall under one of the five broad categories—talent management—that FPS used to organize its strategies. See figure 3. However, FPS did not identify measures for strategies that fall under the other four broad categories, such as leadership and knowledge management and building a results-oriented performance culture.
FPS officials told us that they did not identify performance measures beyond those related to talent management in part because they were waiting for NPPD to finalize a human capital plan that would be applicable to FPS and that would contain measures for the other categories. They also noted that, due to resource constraints, they focused more on implementing strategies described in the Plan than on developing additional performance measures. Furthermore, FPS did not identify targets for the performance measures identified in the Plan. For example, the Plan identifies “quality and effectiveness of training” and “attrition rates” as measures, but FPS has not identified associated targets for them. For instance, FPS did not identify a desired target for the “attrition rate” measure (e.g., reduce new hires’ attrition rate by 3 percentage points over fiscal years 2017 through 2020). In our prior work, we have found that successful performance measures contain targets, which can help agency managers evaluate progress by comparing actual results to projected performance. In addition, the Plan does not explicitly show how the performance measures and associated strategies link to FPS’s human capital goals. For example, as shown in figure 3, the Plan does not clearly link the “attrition rate” performance measure that is associated with FPS’s retention strategy to one or more of FPS’s five human capital goals. We have previously found that explicitly linking performance measures to goals and clearly communicating the linkage also helps make performance measures successful because the linkages can help agencies determine whether they are achieving their human capital and agency goals. FPS has taken some initial steps to develop targets and linkages but, as was the case with developing additional performance measures, did not complete these steps in anticipation that NPPD would finalize its own human capital plan.
For example, FPS began to collect data to help identify appropriate targets and continues to work on this effort, according to officials. Additionally, when developing the Plan, FPS developed a draft document that shows the link between the identified performance measures and the agency’s human capital goals. For example, FPS linked its “attrition rate” performance measure to the agency’s third human capital goal, which is to provide FPS with the tools, mechanisms, and processes to improve workforce effectiveness, agility, and retention. After the completion of our audit work in February 2016, NPPD and FPS officials provided us with NPPD’s strategic human capital plan and complementary operational plan, which NPPD finalized in December 2015 and January 2016, respectively. At this time, NPPD’s plans do not include performance measures specific to FPS. An NPPD official who played a key role in developing this plan said that, in the future, an FPS operational plan that is aligned with NPPD’s human capital plan and specifically reflects FPS’s human capital needs and strategies will be developed. Further, NPPD and FPS officials told us that they will work together to develop human capital performance measures relevant to FPS. However, the officials’ plans are not clear because they have not yet established time frames for addressing the issues we identified on performance measures. According to our key workforce-planning principles, agencies should establish performance measures to evaluate an agency’s progress toward reaching human capital goals and the contribution of human capital activities toward achieving agency goals. Establishing performance measures before an agency starts to implement its strategies can help agency officials evaluate the human capital plan. 
If FPS and NPPD do not develop performance measures, including targets and linkages to goals, in a timely manner, neither agency can accurately assess FPS’s progress toward its human capital goals, its agency goal of sustaining a valued, skilled, and agile workforce, or the contribution of its strategies toward achieving these goals. Consequently, neither NPPD nor FPS will know the extent to which the Plan and related strategies are helping fulfill its mission of protecting federal facilities and their occupants. Furthermore, it will be difficult for stakeholders—such as Congress and the public—to hold FPS accountable for achieving its goals. FPS issued its latest staffing model in August 2013, which identified the number and composition of FTEs the agency needs to meet its mission, based on various data inputs, assumptions, and analyses. We compared the design of this model to four key practices we identified for the design of staffing models and found that FPS’s model reflects three of these four key practices (see table 2). Specifically, we found that FPS designed its staffing model to include (1) work activities performed by FPS employees and the frequency and number of hours it takes to perform them; (2) risk factors that affect the agency’s operational activities, such as the security level and quantities of facilities; and (3) input from key stakeholders. We found that while FPS officials took some steps to ensure the quality of data used in the model, they did not document a process for doing so. A staffing model that reflects all four key practices can enable FPS officials to make informed decisions on workforce planning with reliable estimates. Work activities, frequency, and time required to conduct work activities: We found that FPS’s staffing model includes data commonly used in workforce analyses, such as data on work activities, and the frequency and number of hours to perform them.
Incorporating these types of data into the staffing model helps estimate the number of staff needed to carry out an agency’s activities, according to a key practice we identified. SEDI officials reviewed documentation (e.g., relevant laws and regulations, FPS policies) to identify all FPS mission and mission support activities (referred to as an activities taxonomy)—and the frequency with which the identified activities are performed. SEDI officials identified about 200 total activities and associated tasks. As discussed in detail below, SEDI officials consulted with key stakeholders to estimate the required time to perform all mission-related work activities. SEDI officials used information provided by these stakeholders because time constraints precluded it from conducting real-time studies, according to an FPS official. SEDI officials also calculated average productive labor hours to populate the staffing model (1,548 hours for non-supervisory physical security inspectors and 1,987 hours for criminal investigators) based on assumptions about staff’s annual leave, sick leave, training requirements, travel (for training), and time devoted to other tasks (e.g., collateral duties). For instance, the productive labor hours used in the model assumes that staff on average use 50 percent of their sick leave each year. To estimate the number of staff FPS needed in fiscal year 2013 (1,870 FTEs), SEDI officials used the data discussed above to calculate the estimated total number of FTEs required to perform each activity. Figure 4 provides an example of the steps taken to calculate the FTEs needed for one activity—conducting security assessments at a level 4 facility. SEDI officials then aggregated the FTEs for each activity to identify the total estimated FTEs that FPS needs to carry out its mission. Risk factors: FPS officials incorporated operational risk factors in its model, including the different security levels of federal facilities. 
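The per-activity arithmetic illustrated in figure 4 can be sketched in a few lines. The 509 level 4 assessments and the 1,548 productive hours per non-supervisory inspector are figures from the model described above; the 80 hours per assessment is an assumption for illustration only, since the report does not state the level 4 estimate.

```python
def ftes_for_activity(annual_count, hours_per_occurrence, productive_hours):
    """FTEs needed for one activity: total annual workload hours
    divided by the productive labor hours available per FTE."""
    return annual_count * hours_per_occurrence / productive_hours

# From the model: 509 level 4 facility security assessments planned in
# fiscal year 2013 and 1,548 productive hours per non-supervisory
# inspector. The 80 hours per assessment is assumed for illustration.
level4_ftes = ftes_for_activity(509, 80, 1_548)
print(round(level4_ftes, 1))  # 26.3

# The model sums such per-activity estimates across all ~200 activities
# to arrive at the agency-wide total (1,870 FTEs in fiscal year 2013).
```

Under these assumptions, this one activity alone would account for roughly 26 of the estimated FTEs.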
We previously found that commonly used industry practices for staffing models specific to law enforcement and physical security include identifying operational risk factors, such as the security level of facilities and posts to be secured or protected and identifying tasks and time it takes to conduct those activities. A federal facility’s security risk level determines the frequency with which FPS must complete a facility’s security assessment. The model includes annual targets for completing facility security assessments by facility security level. For example, FPS officials estimated that it would complete 509 facility security assessments for security-level 4 federal facilities in fiscal year 2013 and included these data in the model. In identifying key practices for the design of staffing models, we found that accounting for these operational risk factors helps determine the number of staff and positions needed to mitigate potential threats to federal facilities. Key stakeholders: In designing the model, SEDI officials consulted with key stakeholders and subject-matter experts, including FPS headquarters officials, and some regional directors and managers. According to an official, SEDI relied on the subject matter experts to estimate the number of hours it should take to perform FPS operational activities. A SEDI official also told us that they used these experts because FPS staff perform unique activities and therefore, no benchmarks exist for how long it takes to perform many of the work activities, such as facility security assessments. FPS officials also told us that they involved NPPD human capital officials to help identify assumptions, such as leave estimates, that were used to calculate productive labor hours. We have previously found that involving stakeholders and subject matter experts when designing a staffing model can help an agency ensure that the model reflects operating conditions and meets user needs. 
Data quality: FPS officials took steps to ensure the quality of the data used in the model. We have defined data quality as the use of relevant data from reliable internal and/or external sources based on the identified information requirements. To help ensure data quality, FPS officials told us that SEDI officials questioned subject matter experts to obtain work activity hour estimates for performing some work activities and to understand what the estimates included and then revised them as needed to improve precision. For example, if subject matter experts included travel time as part of the estimated time to perform a facility security assessment, then SEDI officials excluded the travel time from the original estimate and made it a separate work activity with estimated time for completing it. Additionally, FPS officials told us that they compared some work activity hour estimates from the staffing model to the actual number of hours it takes FPS staff to perform those activities from their Activity-Based Costing Model to identify differences and make corrections, when needed, to reflect actual conditions. Further, FPS officials told us that they regularly reviewed and provided feedback on SEDI’s taxonomy and other data collection efforts to identify all FPS work activities and estimated work hours as well as the underlying assumptions used to develop some estimates, such as assumptions related to productive labor hours. A SEDI official also told us that FPS Operations and some regional officials reviewed the estimated hours required to complete some work activities in the staffing model. Although FPS officials took some steps to ensure the quality of data provided by subject matter experts, they did not document the agency’s process for ensuring data quality at the time FPS developed the model, and we could not assess the reliability of data used in the model.
We found that some FPS staff questioned the quality and reasonableness of the data in the model, particularly on work hour estimates to complete some activities. Specifically, selected FPS regional staff we spoke with told us that some work hour estimates did not reflect their experience or actual operating conditions. For example, all nine area commanders we spoke with stated that the estimated time to complete a risk assessment of federal facilities with a facility security level 3 (about 60 hours) was low. The area commanders said that it takes about 80 to 120 hours on average because, similar to level 4 facilities, they need to interview multiple tenants for the risk assessments. FPS headquarters officials told us that they had estimated a range of 60 to 80 hours for conducting assessments of federal facilities with a facility security level 3, depending on the number of federal agencies and clients in the federal facilities, but they used the 60-hour estimate in the model as the nationwide average time to conduct those assessments. FPS officials told us that when they update the model, they plan to validate work hour estimates they obtained from subject matter experts and use data from some new technologies, such as the Modified Infrastructure Survey Tool (MIST), to better reflect actual operating conditions. FPS headquarters officials also told us that the training hours estimates in the model represent training requirements at the time of developing the model and that when they update the model, they plan to change the training hours used in the model to reflect changes in training requirements. We found that FPS uses the staffing model in conjunction with other management tools, professional judgment, and institutional knowledge to help inform human capital planning and budget requirements, as described below.
Human capital planning decisions: An FPS official told us that FPS uses the staffing model in conjunction with other management tools, such as the Activity-Based Costing Model, to help make staffing and human capital planning decisions. In particular, using the staffing and Activity-Based Costing models, FPS found that inspectors spent less time than was predicted in some activities. For example, FPS officials said that FPS found that inspectors spent less time than predicted by the staffing model on overseeing countermeasures services at agencies. An official said that FPS used this information to evaluate and consider making changes to inspectors’ workloads and staffing levels. Budget requirements: In 2014 and 2015—in response to international security events (e.g., shootings at the Canadian Parliament and in Paris)—the Secretary of DHS instructed FPS to enhance its presence and security at federal facilities for short periods of time. An FPS official told us that FPS used the model to understand the impact of the additional facility security responsibilities on its staff’s daily facility- protection workload. According to the official, analyses from the staffing model, other management tools, and conversations with regional office staff, showed that FPS needed additional staff resources to maintain its law enforcement staff’s daily workload while at the same time providing enhanced security operations. As a result of this analysis, in July 2015, DHS notified federal agencies that it would increase its basic security fee from $0.74 to $0.78 and its oversight fee from 6 percent to 8 percent in fiscal year 2017. According to a DHS memorandum sent to agencies using FPS services, the fee increases, combined with internal efficiencies, will allow the agency to sustain essential security operations and maintain the agency’s capacity to rapidly surge personnel during increasingly more common periods of heightened vulnerability in fiscal year 2017. 
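The announced fee increases can be quantified for a hypothetical tenant. The sketch below uses the rates from the DHS notification described above; the square footage and building-specific cost figures are invented for illustration.

```python
def fees(square_feet, specific_costs, basic_rate, oversight_rate):
    """Basic per-square-foot fee plus percentage-based oversight fee."""
    return square_feet * basic_rate + specific_costs * oversight_rate

# Hypothetical tenant: 250,000 sq ft and $800,000 in building-specific
# security costs.
current = fees(250_000, 800_000, 0.74, 0.06)   # pre-2017 rates
proposed = fees(250_000, 800_000, 0.78, 0.08)  # fiscal year 2017 rates
print(current, proposed, proposed - current)   # 233000.0 259000.0 26000.0
```

For this hypothetical tenant, the rate changes would raise annual fees by about $26,000, or roughly 11 percent.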
While FPS continues to use the model for the purposes described above, it does not reflect some changes in FPS’s operating conditions that have occurred since the model’s last update in August 2013. For example, this staffing model does not reflect FPS’s plan to perform about 600 more facility security assessments for level 3 and 4 facilities in fiscal year 2014 than it did in fiscal year 2013. We calculated that if FPS updated the August 2013 model to reflect these additional facility security assessments, FPS would have needed about 37 more FTEs in fiscal year 2014, each of which would have completed an average of about 17 of the additional assessments. FPS officials acknowledged that the number of facility security assessments it plans to complete can change and that other operating conditions, such as the number of federal facilities FPS is responsible for protecting, can change regularly. FPS’s operating conditions can also change when security or agency needs change. For example, some GSA facilities are becoming more technologically advanced. To address security needs at those facilities, FPS officials told us that in conjunction with the NPPD’s Office of Cybersecurity and Communications, they plan to execute more of their protection responsibilities as they relate to the nexus of cybersecurity and physical security. Furthermore, NPPD’s planned reorganization may result in changes to FPS’s activities. Separately, FPS’s operations may change for an extended period of time in response to unexpected events, such as when threat levels change. As noted above, FPS officials said that while they would like to update the model to reflect changes in operations, they have not yet done so because of limited staff resources. 
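Our estimate of about 37 additional FTEs follows the same workload arithmetic used in the staffing model. The sketch below reproduces the quoted figures approximately, assuming roughly 95 hours per level 3 or 4 assessment; that per-assessment figure is not stated in the report and is chosen only to make the illustration consistent with the roughly 37 FTEs and roughly 17 assessments per FTE cited above.

```python
PRODUCTIVE_HOURS = 1_548    # per inspector FTE, from the August 2013 model
EXTRA_ASSESSMENTS = 600     # additional level 3 and 4 assessments in FY2014
HOURS_PER_ASSESSMENT = 95   # assumed for illustration; not in the report

extra_ftes = EXTRA_ASSESSMENTS * HOURS_PER_ASSESSMENT / PRODUCTIVE_HOURS
assessments_per_fte = PRODUCTIVE_HOURS / HOURS_PER_ASSESSMENT

print(round(extra_ftes))           # 37
print(round(assessments_per_fte))  # 16
```

The point of the check is that even modest changes in workload targets translate into dozens of FTEs once multiplied through the model's hour estimates.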
However, at the completion of our audit work in February 2016, FPS officials told us that given the planned NPPD reorganization, NPPD has not yet determined whether FPS will continue to have responsibility for updating the staffing model, whether this responsibility will shift to NPPD, or whether it will become a shared responsibility. Nonetheless, FPS and NPPD have no specific time frames for updating the model. Standards for internal control in the federal government and associated guidance state that managers need timely analytical information to help make management decisions. Furthermore, we have found in our prior work that completing staffing models and regularly updating them in a timely manner can help support agencies’ activities and decision making. Without a clear plan and time frames for updating the staffing model to reflect regular and unexpected changes in operating conditions, NPPD and FPS will have limited assurance of the accuracy of the model’s estimates of the number and composition of staff FPS needs to protect federal facilities, which in turn undermines confidence in their decisions regarding the FPS workforce. Although FPS took steps to ensure data quality when it developed the August 2013 model, FPS does not have a documented process for ensuring data quality when it updates the model to account for changes in operating conditions in the future. As such, it is not clear whether the model will use quality data that reflect current operating conditions. Standards for internal control in the federal government and guidance we have developed on assessing the reliability of computer-processed data state that agencies should use a process to help ensure data quality. Also, the internal control standards emphasize the importance of control activities such as procedures for achieving an entity’s objectives.
Documented processes on data quality—such as guidance on how to collect data, validate assumptions underlying the data, and perform sensitivity analyses to assess the assumptions—can help ensure that data used in the model are reasonably free from error and bias and provide greater assurance to decision makers that they are using reliable and sound information that is produced from the model. In June 2010, in making preliminary observations about FPS’s workforce-planning efforts, we emphasized the importance of taking steps to ensure the quality of data used. Because FPS officials did not document a process for assuring data quality during the development of the model, and because FPS lacks relevant guidance, FPS may not be able to ensure that future updates to its staffing model will provide accurate estimates of staffing needs, putting FPS at risk of not fully understanding whether it has the staff it needs to perform its mission. To carry out its mission of protecting federal facilities and their occupants against potential terrorist attacks and other violent acts, FPS must ensure that it has the right people with the right skills in the right positions, at the right time. Over the years, however, we have identified several workforce-related challenges facing FPS, such as the absence of a strategy to manage FPS’s current and future workforce needs. The completion of FPS’s first strategic human capital plan and staffing model, therefore, represents significant progress. Moreover, FPS’s development of both the Plan and model largely aligns with recognized key workforce-planning principles and staffing model practices. While FPS has taken a number of positive steps to strategically manage its workforce, we found that FPS does not have assurance that its efforts will achieve its stated goals. 
FPS has not fully developed human capital performance measures, and while both NPPD and FPS plan on taking additional action in this area, future progress is uncertain because NPPD and FPS have not established a time frame for developing additional measures. Until FPS and NPPD develop performance measures with targets that clearly align with FPS’s stated human capital goals, it will be difficult to determine whether FPS is on track to meet its goals and mission or needs to make adjustments. Furthermore, FPS’s current staffing model has not been updated since August 2013. Until FPS develops a plan and timeline for updating the model regularly and for unexpected changes in operating conditions that last for an extended period of time, FPS will have limited assurance of the model’s estimates of the number of staff it needs to protect federal facilities. Finally, because FPS did not document a process for ensuring data quality when it developed the model, it is not clear whether future updates to the model will accurately reflect changes in operating conditions. Without documented guidance that describes the process FPS will use to ensure data quality, FPS may not be able to ensure that its staffing model will provide accurate estimates of staffing needs. As FPS’s parent organization, NPPD has a critical role to play in managing and overseeing FPS’s human capital efforts. Accordingly, NPPD and FPS need to work together to ensure that they have the staff they need to perform their facility protection mission. 
To help FPS enhance its strategic human capital planning efforts, we recommend that the Secretary of Homeland Security direct the Under Secretary of NPPD to work with the Director of FPS to take the following three actions:
1. identify time frames for developing human capital performance measures with targets that are explicitly aligned to FPS’s stated human capital goals;
2. establish a plan and time frames for updating FPS’s staffing model regularly and for unexpected changes in operating conditions; and
3. develop and document guidance on the process FPS will use to ensure the quality of its staffing model data, such as guidance on how to collect data, validate assumptions, and perform sensitivity analyses to assess the assumptions.
We provided a draft of this report to DHS for comment. DHS concurred with our recommendations and outlined steps it plans to take to address them. DHS’s written comments are reproduced in appendix III. DHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of the Department of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found at the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of our report were to evaluate the Federal Protective Service’s (FPS) human capital planning efforts, including its Strategic Human Capital Plan (the Plan) and staffing model. 
Specifically, we examined (1) whether FPS’s Plan and related human capital planning efforts align with key strategic workforce-planning principles and (2) how FPS designed and uses its staffing model to help ensure that it has the workforce it needs to meet its mission. To address both objectives, we reviewed relevant laws and regulations, documents from FPS and the National Protection and Programs Directorate (NPPD), and our prior work related to workforce planning and human capital management. We also interviewed or obtained information from officials at FPS, NPPD, and the General Services Administration (GSA). To determine regional staff’s involvement in developing the Plan and staffing model, we obtained information from and interviewed FPS regional directors and nine randomly selected area commanders responsible for facility security in 3 of FPS’s 11 regions. We judgmentally selected the 3 regions—Regions 7 (Greater Southwest Region), 10 (Northwest/Arctic Region), and 11 (National Capital Region)—to obtain variation in the number of FPS-protected facilities and full-time equivalent (FTE) employees; the number of facilities per FTE; geographic size (in terms of square miles); number of square miles per facility in the region; geographic location (i.e., east, central, and west locations); and whether DHS’s Homeland Security Systems Engineering and Development Institute (SEDI) visited the region when it developed the staffing model. Because we judgmentally selected the FPS regions, our results are not generalizable to all of FPS. We also interviewed an official from the Federal Law Enforcement Officers Association, which is a nonprofit professional association representing federal law enforcement officers, and the International Association of Chiefs of Police to obtain their perspectives on workforce planning and staffing models. 
To examine whether FPS’s Plan and related human capital planning efforts align with key strategic workforce-planning principles, we reviewed and assessed FPS’s fiscal year 2015 Plan and related efforts against five key strategic workforce-planning principles. The five key principles include:
1. involving top management, employees, and other stakeholders in developing, communicating, and implementing a strategic workforce plan;
2. determining critical skills and competencies needed for employees;
3. developing strategies tailored to address gaps and needs;
4. building the organizational capability needed to support human capital strategies; and
5. developing performance measures to evaluate progress toward reaching human capital or agency goals.
We obtained these principles from our 2003 report on key principles for effective strategic workforce planning. We compared these principles with guidelines in the Office of Personnel Management’s (OPM) Human Capital Assessment and Accountability Framework (HCAAF) that apply across the federal government and determined that the principles we developed are generally consistent with OPM’s guidelines. FPS officials also told us that they based their Plan on the HCAAF guidelines. The five key strategic workforce-planning principles can enhance the effectiveness of an agency’s strategic workforce planning and can help ensure that its strategic workforce-planning process appropriately addresses an agency’s human capital challenges, goals, and mission. We also conducted interviews with FPS officials to obtain information on whether FPS’s Plan and related human capital planning efforts addressed key strategic workforce-planning principles. We interviewed an NPPD official and selected FPS regional staff, as mentioned above, to understand their involvement in FPS’s human capital planning efforts. We did not assess how FPS tailored each of its strategies to address human capital needs and gaps and critical skills and competencies that need attention. 
Rather, we asked FPS officials to provide examples of how their strategies addressed human capital needs and gaps. We also did not assess the effectiveness of the Plan because FPS is still in the process of implementing it. After the conclusion of our audit work in February 2016, we received NPPD’s Human Capital Strategic Plan and reviewed it against FPS’s Plan to the extent that time allowed. To assess the design and use of FPS’s staffing model, we reviewed FPS’s August 2013 model (the latest available) and relevant FPS documents, and interviewed FPS officials to better understand the process they followed to design the model and how they collected data used in the model. We evaluated FPS’s design of the model using standards for internal control in the federal government, our 2009 guidance on assessing the reliability of computer-processed data, and key practices we identified on the design of staffing models. We identified the key practices from our previous reports that discussed staffing models, discussions with a physical security industry association, and staff within our agency with workforce-planning expertise. We initially identified 11 key practices, but used four key practices to evaluate how FPS designed its staffing model. Three of the 11 practices were not yet applicable because FPS had not yet assigned staff to manage the staffing model, and we consolidated 8 of them because they had similar characteristics. For instance, we consolidated 3 practices on addressing data issues into one because all of them related to data quality. The four key practices call for:
1. incorporating work activities, their frequency, and the time required to conduct them;
2. incorporating risk factors;
3. involving key stakeholders; and
4. ensuring data quality to provide assurance that staffing estimates produced from the model are reliable. 
These four key practices help provide reasonable assurance that the design of the model will provide estimates to help management make staffing and other decisions consistent with an agency’s mission. As part of our review of FPS’s design of the staffing model, we also assessed the reliability of data FPS used in the model by reviewing available documentation, interviewing agency officials knowledgeable about the data, and examining data entries in the model for obvious errors in accuracy and completeness. FPS officials told us that some data discrepancies we found in our assessment did not significantly impact the estimated number and composition of staff needed to meet FPS’s mission. Given the large volume of data in the model, we did not verify this. We could not determine the reliability of data used in the model as FPS did not provide us with documentation on steps taken to ensure data quality, which is a key practice discussed more fully in the report. However, to examine the quality of selected data inputs (e.g., estimated time for completing certain work activities) in the model, we conducted semi-structured interviews with randomly selected area commanders in three regions, as mentioned above, to gauge the data’s reasonableness. We cannot conclude that all the data inputs in the model are reasonable, as we judgmentally selected some data inputs to verify. We also did not talk to subject matter experts to determine how they identified some data inputs, such as the number of hours the agency needed to complete work activities. Finally, to examine how FPS uses its staffing model, we reviewed FPS’s staffing analysis documents to understand FPS’s staffing levels and interviewed FPS headquarters officials. We evaluated FPS’s use of the staffing model using standards for internal control in the federal government and associated guidance. We did not verify whether the staffing model identified the optimal workforce FPS needs to effectively carry out its mission. 
We also did not review the size and composition of FPS’s workforce of Protective Security Officers (i.e., contract guards) because FPS did not include them in its staffing model. We conducted this performance audit from May 2015 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Federal Protective Service (FPS) classifies its activities into five categories—primary activities, secondary activities, enabling activities, support activities, and supplementary activities. See below for a description of these categories and the activities FPS identified. Some activities have a number of associated tasks.
Primary activities: Activities essential to the performance of FPS’s mission of protecting federal facilities, their occupants, and visitors. These include law enforcement response; protective investigations; Facility Security Assessments (FSA); Protective Security Officer services; technical countermeasure services; law enforcement policing and patrol; critical incident and special security operations; and National Infrastructure Protection Plan (NIPP) activities.
Secondary activities: Activities performed as a result of primary activities. The primary activity creates a need to perform the secondary activity. Secondary activities can be directly linked to the primary activities that they inform or enhance.
Enabling activities: Activities that help to sustain operations by providing a foundation of required capabilities. Enabling activities generally support a relatively broad set of primary and secondary activities.
Supplementary activities: Activities that represent FPS services that extend beyond its core mission. 
In addition to the contact named above, Amelia Shachoy (Assistant Director), Roshni Davé, Joseph Franzwa, Geoffrey Hamilton, Delwen Jones, Jennifer Kim, Steven Lozano, Sara-Ann Moessbauer, Janice Morrison, Joshua Ormond, Malika Rice, and Rebecca Shea made key contributions to this report.
The federal security workforce plays a crucial role in meeting the growing challenges of protecting federal facilities. FPS, within the Department of Homeland Security (DHS), worked with NPPD to develop a staffing model and a Plan in 2013 and 2015 to help FPS manage its workforce. A 2015 Senate Appropriations Committee report included a provision for GAO to review the Plan. The committee also asked GAO to evaluate the staffing model. GAO examined (1) FPS's Plan and related human capital planning efforts and (2) how FPS designed and uses its staffing model. GAO assessed FPS's Plan and model to determine if they aligned with key workforce-planning principles and practices for designing staffing models. GAO identified these principles and practices from prior work and other sources. GAO also interviewed NPPD and FPS officials in headquarters and three regions selected to obtain regional variation such as in the number of FPS staff. The Federal Protective Service (FPS)—which protects about 9,500 federal facilities—developed a Strategic Human Capital Plan (Plan) and engaged in related efforts that generally align with most key principles GAO identified for strategic workforce planning. Specifically, FPS solicited input from key stakeholders, such as its employees and the National Protection and Programs Directorate (NPPD)—FPS's parent organization responsible for managing and overseeing FPS's human capital efforts; determined critical skills and competencies; developed human capital strategies (i.e., programs, policies, and processes) tailored to address identified gaps and needs in its workforce; and identified actions that build the organizational capability to support the strategies. However, FPS has not fully developed performance measures to evaluate progress toward goals, which is also a key principle for strategic workforce planning. 
For example, FPS has not identified performance measures for all of the Plan's strategies, has not included targets for the identified performance measures (e.g., a desired target for the “attrition rate” measure), and has not linked the measures to FPS's human capital goals. GAO's work on measuring program performance has found that targets and linkages are among the attributes of successful performance measures. FPS and NPPD officials said they plan on developing measures with targets and linkages but have not yet established time frames for completing these tasks. Without performance measures that have targets and linkages, it will be difficult for NPPD and FPS to assess whether the Plan and related efforts are helping achieve FPS's human capital goals and its facility protection mission. FPS designed its staffing model—which identifies the federal workforce the agency needs to meet its mission—consistent with most key practices GAO identified for the design of staffing models, and FPS uses the model to help make management decisions. Specifically, FPS's model includes: work activities and the time required to perform them; facility risk levels, which determine the frequency with which FPS must complete facility security assessments; and input from key stakeholders, including NPPD and some regional officials. FPS officials said they took steps, such as reviewing work hour estimates, to ensure the quality of data used in the model—another key practice. FPS currently uses the model to help make human capital planning and other management decisions, but NPPD and FPS have not identified time frames for updating the model since its last update in August 2013. Furthermore, FPS cannot assure data quality in future updates to the model because it has no documented process for ensuring data quality. Without time frames for updating the model and guidance on ensuring data quality, NPPD and FPS may not have accurate estimates of staffing needs to make management decisions. 
To improve FPS's human capital planning, GAO recommends that the Secretary of DHS direct NPPD and FPS to identify time frames for developing performance measures with targets that are explicitly aligned to FPS's goals, establish a plan and time frames for updating its staffing model, and develop and document guidance for ensuring the quality of staffing model data. DHS concurred with GAO's recommendations and outlined steps it plans to take to address them.
GSA administers the federal government’s SmartPay® purchase card program, which has been in existence since the late 1980s. The purchase card program was created as a way for agencies to streamline federal acquisition processes by providing a low-cost, efficient vehicle for obtaining goods and services directly from vendors. The purchase card can be used for simplified acquisitions, including micropurchases, as well as to place orders and make payments on contract activities. The FAR designated the purchase card as the preferred method of making micropurchases. In addition, part 13 of the FAR, “Simplified Acquisition Procedures,” establishes criteria for using purchase cards to place orders and make payments. Figure 1 shows the dramatic increase in purchase card use since the inception of the SmartPay® program. As shown in figure 1, during the 10-year period from fiscal year 1996 through 2006, acquisitions made using purchase cards increased almost fivefold—from $3 billion in fiscal year 1996 to $17.7 billion in fiscal year 2006. Figure 2 provides further information on the number of purchase cardholder accounts. As shown, the number of purchase cardholder accounts peaked in 2000 at more than 670,000, but since then the number of purchase cardholder accounts has steadily decreased to around 300,000. As the contract administrator of the program, GSA contracts with five different commercial banks in order to provide purchase cards to federal employees. The five banks with purchase card contracts are (1) Bank of America, (2) Citibank, (3) Mellon Bank, (4) JPMorgan Chase, and (5) U.S. Bank. GSA also has created several tools, such as the Schedules Program, so that cardholders can take advantage of favorable pricing for goods and services. Oversight of the purchase card program is also the responsibility of OMB. 
OMB provides overall direction for governmentwide procurement policies, regulations, and procedures to promote economy, efficiency, and effectiveness in the acquisition processes. Specifically, in August 2005, OMB issued Appendix B to Circular No. A-123, Improving the Management of Government Charge Card Programs, that established minimum requirements and suggested best practices for government charge card programs. From July 1, 2005, through June 30, 2006, GSA reported that federal agencies purchased over $17 billion of goods and services using government purchase cards. Our analysis of transaction data provided by the five banks found that micropurchases represented 97 percent of purchase card transactions and accounted for almost 57 percent of the dollars expended. Using purchase cards for acquisitions and payments over the micropurchase limit of $2,500 represented about 3 percent of purchase transactions and accounted for more than 44 percent of the dollars spent from July 1, 2005, through June 30, 2006. Internal control weaknesses in agency purchase card programs exposed the federal government to fraudulent, improper, and abusive purchases and loss of assets. Our statistical testing of two key transaction-level controls over purchase card transactions over $50 from July 1, 2005, through June 30, 2006, found that both controls were ineffective. In aggregate, we estimated that 41 percent of purchase card transactions were not properly authorized or purchased goods or services were not properly received by an independent party (independent receipt and acceptance). We also estimated that 48 percent of purchases over the micropurchase threshold were either not properly authorized or independently received. Further, we found that agencies could not provide evidence that they had possession of, or could otherwise account for, 458 of 1,058 accountable and pilferable items. 
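The split reported above—micropurchases making up 97 percent of transactions but only about 57 percent of dollars—is a straightforward count-versus-dollar partition at the $2,500 threshold. A minimal sketch with invented transaction amounts:

```python
MICROPURCHASE_LIMIT = 2_500  # dollars; threshold from the FAR, per the report

def split_by_threshold(amounts, limit=MICROPURCHASE_LIMIT):
    """Return (share of transaction count, share of dollars) at or below the limit."""
    micro = [a for a in amounts if a <= limit]
    count_share = len(micro) / len(amounts)
    dollar_share = sum(micro) / sum(amounts)
    return count_share, dollar_share

# Hypothetical transactions: many small purchases, a few large ones.
amounts = [120, 450, 80, 2_400, 60, 95, 310, 15_000, 9_500]
count_share, dollar_share = split_by_threshold(amounts)
```

As in the reported data, a small number of over-threshold purchases can dominate the dollar total even while micropurchases dominate the transaction count.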
According to Standards for Internal Control in the Federal Government, internal control activities help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives and should occur at all levels and functions of an agency. The controls include a wide range of activities, such as approvals, authorizations, verifications, reconciliations, performance reviews, and the production of records and documentation. For this audit, we tested those control activities that we considered to be key in creating a system that prevents and detects fraudulent, improper, and abusive purchase card activity. To this end, we tested whether (1) cardholders were properly authorized to make their purchases and (2) goods and services were independently received and accepted. As shown in table 1, we estimated that the overall failure rate for the attributes we tested was 41 percent, with failure rates of 15 percent for authorization and 34 percent for receipt and acceptance. Lack of proper authorization. As shown in table 1, 15 percent of all transactions failed proper authorization. According to Standards for Internal Control in the Federal Government, transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority, as this is the principal means of assuring that only valid transactions to exchange, transfer, use, or commit resources and other events are initiated or entered into. To test authorization, we accepted as reasonable evidence various types of documentation, such as purchase requests or requisitions from a responsible official, e-mails, and other documents that identify an official government need, including blanket authorizations for routine purchases with subsequent review by an approving official. 
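Failure rates like those in table 1 are estimates from a statistical sample, so each carries sampling uncertainty. One standard way to express that uncertainty for a sample proportion is a Wilson score interval; the sample size below is hypothetical and does not reproduce GAO's actual sample design:

```python
import math

def wilson_interval(failures, n, z=1.96):
    """95% Wilson score confidence interval for a sample proportion."""
    p = failures / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: 41 failures observed in a sample of 100 transactions.
low, high = wilson_interval(41, 100)
```

With this illustrative sample size, a 41 percent observed failure rate would carry a confidence interval spanning roughly 32 to 51 percent; a larger sample narrows the interval.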
The lack of proper authorization occurred because (1) the cardholder failed to maintain sufficient documentation, (2) the agency’s policy did not require authorization, or (3) the agency lacked the internal controls and management oversight to identify purchases that were not authorized— increasing the risk that agency cardholders will misuse the purchase card. Failure to require cardholders to obtain appropriate authorization and lack of management oversight increase the risk that fraudulent, improper, and other abusive activity will occur without detection. Lack of independent receipt and acceptance. As depicted in table 1, our statistical sampling of executive agency purchase card transactions also found that 34 percent of transactions failed independent receipt and acceptance, that is, goods or services ordered and charged to a government purchase card account were not received by someone other than the cardholder. According to Standards for Internal Control in the Federal Government, the key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. Segregating duties entails separating the responsibilities for authorizing transactions, processing and recording them, reviewing the transactions, and handling related assets. The standards further state that no one individual should control all key aspects of a transaction or event. As evidence of independent receipt and acceptance, we accepted any signature or initials of someone other than the cardholder on the sales invoice, packing slip, bill of lading, or any other shipping or receiving document. We found that lack of documented, independent receipt extended to all types of purchases, including pilferable items such as laptop computers. Independent receipt and acceptance helps provide assurance that purchased items are only acquired for legitimate government need and not for personal use. 
Although we did not test the same number of attributes as in previous audits of specific agencies’ purchase card programs, for those attributes we tested, the estimated governmentwide failure rates shown in this report are lower than the failure rates we have previously reported for certain individual agencies. Table 2 provides failure rates from our prior work related to proper approval and independent receipt and acceptance for certain individual agencies. As shown, estimated failure rates for independent receipt and acceptance from previous audits were as high as 87 percent for one Army location (as reported in 2002) and, most recently, 63 percent for DHS (as reported in 2006). In contrast, we are estimating a 34 percent failure rate for this audit. Because prior audits have been restricted to individual agencies, we cannot state conclusively that the lower failure rate is attributable to improvements in internal controls governmentwide. However, some agencies with large purchase card programs, such as DOD, have implemented improved internal controls in response to our previous recommendations. Further, in 2005, OMB also issued Appendix B to Circular No. A-123 prescribing purchase card program guidance and requirements. These changes are positive steps in improving internal controls over the purchase card program. While only 3 percent of governmentwide purchase card transactions from July 1, 2005, through June 30, 2006, were purchases above the micropurchase threshold of $2,500, these transactions accounted for 44 percent of the dollars spent during that period. Because of the large dollar amount associated with these transactions, and additional requirements related to authorization than are required for micropurchases, we drew a separate statistical sample to test controls over these larger purchases. Specifically, we tested (1) proper purchase authorization and (2) independent receipt and acceptance. 
As part of our test of proper purchase authorization, we looked for evidence that adequate competition was obtained. If competition was not obtained, we asked for supporting documentation showing that competition was not required, for example, that the purchase was acquired from sole-source vendors. We estimated that 48 percent of the purchase card transactions over the micropurchase threshold failed our attribute tests. As shown in table 3, for 35 percent of purchases over the micropurchase threshold, cardholders failed to obtain proper authorization. Additionally, in 30 percent of the transactions, cardholders failed to provide sufficient evidence of independent receipt of the goods or services. Lack of proper authorization for purchases over the micropurchase limit. As table 3 indicates, 35 percent of purchases over the micropurchase limit were not properly authorized. To test for proper authorization, we looked for evidence of prior approval, such as a contract or other requisition document. For purchases above the micropurchase threshold, we also required evidence that the cardholder either solicited competition or provided reasonable evidence for deviation from this requirement, such as sole source justification. Of the 34 transactions that failed proper authorization, 10 transactions lacked evidence of competition. For example, one Army cardholder purchased computer equipment totaling over $12,000 without obtaining and documenting price quotes from three vendors as required by the FAR. The purchase included computers costing over $4,000 each, expensive cameras that cost $1,000 each, and software and other accessories—items that are supplied by a large number of vendors. In another example of failed competition, one cardholder at DHS purchased three personal computers totaling over $8,000. 
The requesting official provided the purchase cardholder with the computers’ specifications and a request that the item be purchased from the requesting official’s preferred vendor. We found that the cardholder did not apply due diligence by obtaining competitive quotes from additional vendors. Instead, the cardholder asked the requesting official to provide two “higher priced” quotes from additional vendors in order to justify obtaining the computers from the requesting official’s preferred source. In doing so, the cardholder circumvented the rules and obtained the items without competitive sourcing as required by the FAR. Lack of independent receipt and acceptance. As shown in table 3, we projected that 30 percent of the purchases above the micropurchase threshold did not have documented evidence that goods or services ordered and charged to a government purchase card account were received by someone other than the cardholder. Our testing of a nonrepresentative selection of accountable and pilferable property acquired with government purchase cards found that agencies failed to account for 458 of the 1,058 accountable and pilferable property items we tested. The total value of the items was over $2.7 million, and the purchase amount of the missing items was over $1.8 million. We used a nonrepresentative selection methodology for testing accountable property because purchase card data did not always contain adequate detail to enable us to isolate property transactions for statistical testing. Because we were not able to take a statistical sample of these transactions, we were not able to project inventory failure rates for accountable and pilferable property. Similarly, because the scope of our work was restricted to purchase card acquisitions, we did not audit agencies’ controls over accountable property acquired using other procurement methods. 
However, the extent of the missing property we are reporting on may not be restricted to items acquired with the government purchase cards, but may reflect control weaknesses in agencies’ management of accountable property governmentwide. The lost or stolen items included computer servers, laptop computers, iPods, and digital cameras. Our prior reports have shown that weak controls over accountable property purchased with government purchase cards increase the risk that items will not be reported and accounted for in property management systems. We acknowledge agency officials’ position that the purchase card program was designed to facilitate acquisition of goods and services, including property, and not specifically to maintain accountability over property. However, the sheer number of accountable property purchases made “over the counter” or directly from a vendor increases the risk that the accountable or pilferable property would not be reported to property managers for inclusion in the property tracking system. Unrecorded assets decrease the likelihood of detecting lost or stolen government property. In addition, if these items were used to store sensitive data, this information could be lost, stolen, or both without the knowledge of the government. Failure to properly account for pilferable and accountable property also increases the risk that agencies will purchase property they already own but cannot locate—further wasting tax dollars. Although each agency establishes its own threshold for recording and tracking accountable property, additional scrutiny is necessary for sensitive items (such as computers and related equipment) and items that are easily pilfered (such as cameras, iPods, and personal digital assistants (PDAs)). Consequently, for this audit, we selected $350 as the threshold for our accountable property test.
Standards for Internal Control in the Federal Government provides that an agency must establish physical control to secure and safeguard vulnerable assets. Examples include security for, and limited access to, assets such as cash, securities, inventories, and equipment, which might be vulnerable to risk of loss or unauthorized use. Failure to maintain accountability over property, including highly pilferable items, increases the risk of unauthorized use and lost and stolen property. Our accountable asset work consisted of identifying accountable and pilferable property items associated with transactions from both the statistical sample and data-mining transactions, requesting serial numbers from the agency and vendors, and obtaining evidence—such as a photograph provided by the agency—that the property was recorded, could be located, or both. In some instances, we obtained the photographs ourselves. We then evaluated each photograph to determine whether the photograph represented the accountable or pilferable item we selected for testing. Property items failed our physical property inventory tests for various reasons, including the following: the agency could not locate the item upon request and reported the item as missing, the agency failed to provide photographs, or the agency provided photographs of items where the serial numbers did not match the items purchased. In many instances, we found that agencies failed to provide evidence that the property was independently received or entered into the agency property book. Weak controls over accountable and pilferable property increase the risk that property will be lost or stolen and also increase the chance that the agency will purchase more of the same item because it is not aware that the item has already been purchased.
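The inventory test described above is, at its core, a reconciliation of purchased serial numbers against the agency property book. A minimal sketch of that reconciliation, using hypothetical serial numbers and item descriptions rather than any agency's actual records, might look like:

```python
def missing_property(purchased: dict, property_book: set) -> dict:
    """Return purchased accountable items whose serial numbers were never
    recorded in the agency's property system. `purchased` maps serial
    number to item description; `property_book` holds recorded serials."""
    return {sn: desc for sn, desc in purchased.items() if sn not in property_book}

# Hypothetical serial numbers and items, for illustration only.
purchased = {"SN100": "laptop computer", "SN101": "digital camera", "SN102": "iPod"}
property_book = {"SN100"}  # only the laptop was ever recorded
unaccounted = missing_property(purchased, property_book)
print(unaccounted)  # the camera and iPod cannot be traced to the property book
```

Items flagged this way would still require the kind of follow-up described in the report, such as photographs or physical inspection, since a recorded serial number alone does not prove the item is on hand.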
The following descriptions further illustrate transactions that failed our property tests: The Army could not properly account for 16 server configurations containing 256 items that it purchased for over $1.5 million. Despite multiple inquiries, the Army provided photographs of only 1 configuration out of 16, but did not provide serial numbers for that configuration to show that the photograph represented the items acquired as part of the transaction we selected for testing. Further, when we asked for inventory records as an acceptable alternative, the Army could not provide us evidence showing that it had possession of the 16 server configurations. A Navy cardholder purchased general office supplies totaling over $900. As part of this purchase, the cardholder bought a Sony digital camera costing $400 and an iPod for $200. In supporting documentation provided, the Navy stated that the cardholder, approving official, and requester had no recollection of requesting or receiving the iPod. To find out whether these pilferable items could have been converted for personal use and effectively stolen, we asked the Navy to provide a photograph of the camera and iPod, including their serial numbers. However, the Navy informed us that the items were not reported on a property tracking system and therefore could not be located. We found numerous instances of fraud, waste, and abuse related to the purchase card program at dozens of agencies across the government. Internal control weaknesses in agency purchase card programs directly increase the risk of fraudulent, improper, and abusive transactions. For instance, the lack of controls over proper authorization increases an agency’s risk that cardholders will improperly use the purchase card. As discussed in appendix II, our work was not designed to identify all instances of fraudulent, improper, and abusive government purchase card activity or estimate their full extent.
Therefore, we did not determine and make no representations regarding the overall extent of fraudulent, improper, and abusive transactions governmentwide. The case studies identified in the tables that follow represent some of the examples that we found during our audit and investigation of the governmentwide purchase card program. We found numerous examples of fraudulent and potentially fraudulent purchase card activities. For the purpose of this report, we define fraudulent transactions as those where a fraud case had been adjudicated or was undisputed or a purchase card account had been compromised. Potentially fraudulent transactions are those transactions where there is a high probability of fraud, but where sufficient evidence did not exist for us to determine that fraud had indeed occurred. As shown in table 4, these transactions included (1) acquisitions by cardholders that were unauthorized and intended for personal use and (2) purchases appropriately charged to the purchase card but involving potentially fraudulent activity that went undetected because of the lack of integration among the processes related to the purchase, such as travel claims or missing property. In a few instances, agencies have taken actions on the fraudulent and potentially fraudulent transactions we identified. For example, some agency officials properly followed policies and procedures and filed disputes with the bank against fraudulent purchases that appeared on the card, and subsequently obtained refunds. However, in the most egregious circumstances, such as repeated fraudulent activities by the cardholders, sometimes over several years, the agencies did not take actions until months after the fraudulent activity occurred, or after we selected the transactions and requested documentation from the agencies for the suspicious transactions. Table 4 illustrates instances where we found fraud, or indications of fraud, from our data mining and investigative work. 
The following text further describes three of the fraudulent cases from table 4: Case 1 involves a cardholder who embezzled over $642,000 from the Forest Service’s national fire suppression budget from October 10, 2000, through September 28, 2006. This cardholder, a purchasing agent and agency purchase card program coordinator, wrote approximately 180 checks to a live-in boyfriend with whom the cardholder shared a bank account. Proceeds from the checks were used for personal expenditures, such as gambling, car and mortgage payments, dinners, and retail purchases. Although the activities occurred repeatedly over a 6-year period, the embezzled funds were undetected by the agency until USDA’s Office of Inspector General received a tip from a whistleblower in 2006. In June 2007, the cardholder pled guilty to one count of embezzlement and one count of tax fraud. As part of the plea agreement, the cardholder agreed to pay restitution of $642,000. Further, in November 2007, the cardholder was sentenced to 21 months imprisonment followed by 36 months supervised release. Case 2 involves a potential theft of government property. A Navy cardholder purchased 19 pilferable items totaling $2,200 from CompUSA without proper authorization or subsequent review of the purchase transaction. After extensive searches, the Navy provided evidence that only 1 of the 19 items listed on the invoice—an HP LaserJet printer purchased for $150—was found. Other items that were lost or stolen included five iPods; a PDA; iPod travel chargers, adapters, flash drives, and leather accessories; and two 17-inch LCD monitors—all highly pilferable property that can be easily diverted for personal use. According to officials from the Navy, at the time of the purchase, the command did not have a requirement for tracking highly pilferable items. Additionally, all members involved in the transaction had since transferred and the agency did not have the capability to track where the items might have gone.
Navy officials also informed us that the command issued a new policy requiring that pilferable items be tracked. Case 4 involves a USPS postmaster who fraudulently used the government purchase card for personal gain. Specifically, from April 2004, through October 2006, the cardholder made more than 15 unauthorized charges from various online dating services totaling more than $1,100. These were the only purchases made by this cardholder during our audit period, yet the cardholder’s approving official did not detect any of the fraudulent credit card activity. According to USPS officials, this person was also under an internal administrative investigation for viewing pornography on a government computer. Based on the administrative review, the cardholder was removed from his position in November 2006 after working out an agreement with USPS in which he was authorized to remain on sick leave until his retirement date in May 2007. In April the USPS Office of Inspector General issued a demand letter and recovered the fraudulent Internet dating service charges. Our data mining identified numerous examples of improper and abusive transactions. Improper transactions are those purchases that although intended for government use, are not permitted by law, regulation, or government/agency policy. Examples we found included (1) purchases that were prohibited or otherwise not authorized by federal law, regulation, or government/agency policy and (2) split purchases made to circumvent the cardholder single-purchase limit or to avoid the need to obtain competition on purchases over the $2,500 micropurchase threshold. Abusive purchases are those where the conduct of a government organization, program, activity, or function fell short of societal expectations of prudent behavior. We found examples of abusive purchases where the cardholder (1) purchased goods or services at an excessive cost (e.g., gold plated) or (2) purchased an item for which government need was questionable. 
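Split purchases of the kind defined above can be surfaced through simple data mining: grouping transactions by cardholder, vendor, and date, then flagging groups whose individual charges each fall at or under the $2,500 micropurchase threshold while the combined total exceeds it. The sketch below illustrates the idea with hypothetical transactions; it shows one plausible screening rule and is not a description of any agency's actual data-mining system.

```python
from collections import defaultdict

MICROPURCHASE_THRESHOLD = 2500  # dollars, the threshold cited in the report

def flag_potential_splits(transactions):
    """Flag cardholder/vendor/day groups where every charge is at or under
    the micropurchase threshold but the combined total exceeds it, a
    pattern consistent with splitting to avoid competition requirements."""
    groups = defaultdict(list)
    for t in transactions:
        groups[(t["cardholder"], t["vendor"], t["date"])].append(t["amount"])
    return [key for key, amounts in groups.items()
            if len(amounts) > 1
            and sum(amounts) > MICROPURCHASE_THRESHOLD
            and all(a <= MICROPURCHASE_THRESHOLD for a in amounts)]

# Hypothetical transactions: two same-day charges to one vendor total $4,300.
txns = [
    {"cardholder": "A", "vendor": "V1", "date": "2006-09-28", "amount": 2400},
    {"cardholder": "A", "vendor": "V1", "date": "2006-09-28", "amount": 1900},
    {"cardholder": "B", "vendor": "V2", "date": "2006-09-28", "amount": 800},
]
print(flag_potential_splits(txns))  # [('A', 'V1', '2006-09-28')]
```

A flagged group is only an indicator; as with the cases in this report, each would need review of the underlying documentation before being characterized as improper.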
Table 5 identifies examples of improper and abusive purchases. The following text further describes four of the cases in table 5: Case 2 relates to a cardholder who is a 20-year veteran at FAS, a unit within USDA. At the end of fiscal year 2006, the cardholder purchased two vehicles—a Toyota Land Cruiser and Toyota Sienna—on two separate days for two separate USDA offices overseas. Although the vehicles appeared to have been shipped overseas for a legitimate government need, our investigative work found that these purchases were made in violation of USDA purchase card policies and with the implicit agreement of FAS officials, as follows: According to written communications at FAS, the requester for one of the cars had a “large chunk of money that needed to be used before the end of the fiscal year (2006).” The requester asked that the vehicle be purchased in the United States, and then shipped overseas because it was not possible to finalize the purchase during fiscal year 2006 if the agency was to purchase the vehicle in the country where the office was located. The cardholder stated that he wrote three checks (two at $25,000 each and a third at $7,811) to purchase the Land Cruiser because the checks have a $25,000 limit printed on them. The convenience check fee on the three checks was over $1,000. During our investigation, the cardholder informed his supervisor that he intentionally violated agency policy, which requires that vehicles be acquired through the GSA unless a waiver is obtained. The cardholder stated that he disagreed with USDA policy requiring GSA involvement in car acquisition because it was too cumbersome and that USDA needed to issue new policies. We reviewed supporting documentation showing that the vehicles were shipped overseas to the units that purchased them, but we did not perform work to determine whether the year-end purchase was necessary.
Agency management did not take action when they were made aware of the cardholder’s significant violation of agency policy. In case 3, four DOD cardholders purchased over $77,000 in clothing and accessories at high-end clothing and other sporting goods stores, including over $45,000 at high-end retailers such as Brooks Brothers. The Brooks Brothers invoices showed that the cardholders paid about $2,300 per person for a number of servicemembers for tailor-made suits and accessories—$7,000 of which was purchased a week before Christmas. According to the purchase card holder, DOD purchased these items to provide servicemembers working at American embassies with civilian attire. While the Department of Defense Financial Management Regulation authorizes a “civilian clothing allowance” when servicemembers are directed to dress in civilian clothing when performing official duty, the purchase card transactions made by these individuals were far greater than the maximum allowable initial civilian clothing allowance of $860 per person. Case 7 relates to the $13,500 that USPS spent on food at the National Postal Forum in Orlando, Florida, in 2006. For this occasion, USPS paid for 81 dinners averaging over $160 per person for customers of the Postal Customer Council at an upscale steak restaurant. Further, USPS paid for over 200 appetizers and over $3,000 of alcohol, including more than 40 bottles of wine costing more than $50 each and brand-name liquor such as Courvoisier, Belvedere, and Johnnie Walker Gold. In case 9, a NASA cardholder purchased two 60GB iPods for official data storage. During the course of our audit, we found that the iPods were used for personal purposes, such as storing personal photos, songs, and video clips. Further, we question the federal government’s need to purchase iPods for data storage when other data storage devices without audio and video capabilities were available at lower cost.
The purchase card continues to be an effective tool that helps agencies reduce transaction costs for small purchases and provides flexibility in making acquisitions. While the overall failure rates associated with governmentwide purchase card transactions have improved in comparison to previous failure rates at specific agencies, breakdowns in internal controls over the use of purchase cards leave the government highly vulnerable to fraud, waste, and abuse. Problems continue to exist in the areas of authorization of transactions, receipt and acceptance, and accountability of property bought with purchase cards. This audit demonstrates that continued vigilance over purchase card use is necessary if agencies are to realize the full potential of the benefits provided by purchase cards. We are making the following 13 recommendations to improve internal control over the government purchase card program and to strengthen monitoring and oversight of purchase cards as part of an overall effort to reduce instances of fraudulent, improper, and abusive purchase card activity.

We recommend that the Director of OMB:

Issue a memorandum reminding agencies that internal controls over purchase card activities, as detailed in Appendix B of OMB Circular No. A-123, extend to the use of convenience checks.

Issue a memorandum to agency heads requesting the following:

Cardholders, approving officials, or both reimburse the government for any unauthorized or erroneous purchase card transactions that were not disputed.

When an official directs a cardholder to purchase a personal item for that official, and management later determines that the purchase was improper, the official who requested the item should reimburse the government for the cost of the improper item.
Consistent with the purchase card program’s goal of streamlining the acquisition process, we recommend that the Administrator of GSA, in consultation with the Department of the Treasury’s Financial Management Service:

Provide agencies guidance on how cardholders can document independent receipt and acceptance of items obtained with a purchase card. The guidelines should encourage agencies to identify a de minimis amount, types of purchases that do not require documenting independent receipt and acceptance, or both, and indicate that the approving official or supervisor took the necessary steps to ensure that items purchased were actually received.

Provide agencies guidance regarding what should be considered sensitive and pilferable property. Because purchase cards are frequently used to obtain sensitive and pilferable property, remind agencies that computers, palm pilots, digital cameras, fax machines, printers and copiers, iPods, and so forth are sensitive and pilferable property that can easily be converted to personal use.

Instruct agencies to remind government travelers that when they receive government-paid-for meals at conferences or other events, they must reduce the per diem claimed on their travel vouchers by the specified amount that GSA allocates for the provided meal.

Provide written guidance or reminders to agencies:

That cardholders need to obtain prior approval or subsequent review of purchase activity for purchase transactions that are under the micropurchase threshold.

That property accountability controls need to be maintained for pilferable property, including those items obtained with a purchase card.

That cardholders need to notify the property accountability officer in a timely manner of pilferable property obtained with the purchase card.

That property accountability officers need to promptly record, in agency property systems, sensitive and pilferable property that is obtained with a purchase card.
That, consistent with the guidance on third-party drafts in the Department of the Treasury’s Treasury Financial Manual, volume 5, chapter 4-3000, convenience checks issued on purchase card accounts should be minimized, and that convenience checks are only to be used when (1) a vendor does not accept the purchase card, (2) no other vendor that can provide the goods or services can reasonably be located, and (3) it is not practical to pay for the item using the traditional procurement method.

That convenience check privileges of cardholders who improperly use convenience checks be canceled.

We received written comments on a draft of this report from the Acting Controller of OMB (see app. III) and the Administrator of GSA (see app. IV). In response to a draft of our report, OMB agreed with all three recommendations. OMB agreed that the efficiencies of the purchase card program are not fully realized unless federal agencies implement strong and effective controls to prevent purchase card waste, fraud, and abuse. To that end, OMB noted that it had proactively designated government charge card management as a major focus area under Appendix B of Circular No. A-123, Improving the Management of Government Charge Card Programs. With respect to the recommendations contained in this report, OMB is proposing to issue further guidance reminding agencies that Appendix B extends to convenience checks as well as government charge cards, and that agency personnel have financial responsibility with regard to unauthorized and erroneous purchase card transactions. While GSA wholly or partially concurred with four recommendations, GSA generally disagreed with the majority of our recommendations. Specifically, GSA stated that it was not within the scope of its authority to issue guidance to agencies with respect to asset accountability and receipt and acceptance of items purchased with government purchase cards, as these are not strictly purchase card issues.
Further, GSA stated that there are more effective ways to deal with purchase card misuse or abuse than issuing “redundant” policy reminders or guidance. It also took exception to our testing methodology. We agree with GSA that the problems we identified with property accountability and receipt and acceptance go beyond the bounds of strictly purchase card issues. However, our work over the last several years has consistently shown substantial problems with property accountability and independent receipt and acceptance of goods and services, problems that arose because of the flexibility provided by the purchase card program. We do not believe that our recommendations related to policy guidance and reminders to strengthen internal controls are redundant—our previous recommendations in this area had been targeted at specific agencies we audited. With respect to governmentwide purchase card issues, GSA’s role as the purchase card program manager puts it in a unique position to identify challenges to agency internal control systems and assist agencies with improving their internal controls governmentwide. We are encouraged by OMB’s support for aggressive and effective controls over purchase cards, and believe that GSA can seek OMB support to overcome the perceived lack of authority. We believe that GSA has a number of tools already at its disposal, such as online training and annual conferences, where GSA could easily remind cardholders and approving officials to pay particular attention to governmentwide issues, including asset accountability and independent receipt and acceptance of goods and services identified in this report. We also reiterate support for our testing methodology, which included systematic testing of key internal controls through statistical sampling. The following contains more detailed information on GSA’s comments, along with our response. GSA concurred with 3 of 10 recommendations. 
Specifically, GSA concurred with 2 recommendations to improve controls over convenience checks and 1 recommendation related to approval of purchases below the micropurchase threshold. GSA agreed to provide written guidance to agencies that convenience check use should be minimized, and that improper use of convenience checks would result in cancellation of convenience check privileges. As part of its concurrence, GSA noted that it is not practical to strictly prohibit the use of convenience checks given the unique nature of some suppliers or services acquired by agencies and vendor refusal to accept purchase cards. It was not our intent to completely eliminate the use of convenience checks. As such, we clarified our recommendation to require only that the cardholder make a “reasonable”—not absolute—effort to locate other vendors that can provide the same goods and services and that accept the purchase card prior to using a convenience check. GSA’s requested revision is consistent with our intent, and we have made the corresponding change to our recommendations. With respect to the third recommendation related to approval of micropurchases, GSA agreed that cardholders need to obtain prior approval or subsequent review of purchase card activity for purchase transactions that are under the micropurchase threshold. However, GSA believed that OMB needed to take the lead and incorporate this change in its Circular No. A-123. GSA offered to help OMB revise Circular No. A-123 in this regard. GSA stated that it partially concurred with our recommendation to remind travelers to reduce the per diem claims on their travel vouchers when meals are provided by the government. However, based on its response, it appears that GSA substantially agrees with our recommendation, and that the GSA Office of Governmentwide Policy will issue this guidance. In actuality, GSA concurred with our recommendation but disagreed that this was a purchase card issue.
Further, GSA questioned whether the requirement to deduct per diem applies to continental breakfasts, stating that continental breakfasts did not constitute “full breakfasts.” Thus, GSA stated that it needs to convene stakeholders in the GSA travel policy community to consider whether the requirement for deducting per diem should be applied to continental breakfasts. We disagree with this assessment. If the costs of the continental breakfasts were in fact not significant, we would not have reported on this finding; however, the basis of our recommendation rests primarily on the fact that GSA itself paid for continental breakfasts costing $23 per person, which was greater than the portion of government per diem established by GSA for breakfast in any city in the United States. GSA then proceeded to reimburse the same employees the breakfast portion of per diem—in effect paying twice for breakfasts. We disagree with GSA that this is an appropriate treatment of continental breakfasts, as it implies that it is appropriate for taxpayers to pay twice for a government traveler’s meal. Consequently, we reiterate the need for GSA to promote prudent management of taxpayers’ money, and our support for requiring travelers to reduce their per diem if they took advantage of the continental breakfasts provided. GSA disagreed with all of our recommendations related to receipt and acceptance and controls over accountable and pilferable property. GSA stated that these issues were not within the purview of the GSA SmartPay® program or the scope of GSA SmartPay® contracts. Further, GSA stated that other approaches would be more effective at addressing purchase card abuse and misuse than issuing “redundant” policy guidance and reminders.
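The per diem adjustment at issue is straightforward arithmetic: each government-provided meal reduces the traveler's meals-and-incidentals (M&IE) claim by the amount GSA allocates to that meal for the locality. A minimal sketch, using hypothetical allocation values rather than actual GSA rates:

```python
def adjusted_meal_per_diem(daily_mie: float, provided: dict, allocation: dict) -> float:
    """Reduce a traveler's daily M&IE claim by the allocated amount
    for each government-provided meal."""
    deduction = sum(allocation[meal] for meal, given in provided.items() if given)
    return max(0.0, daily_mie - deduction)

# Hypothetical breakdown for an assumed $64 M&IE locality rate.
allocation = {"breakfast": 12.0, "lunch": 18.0, "dinner": 31.0}
claim = adjusted_meal_per_diem(64.0, {"breakfast": True, "dinner": True}, allocation)
print(claim)  # 21.0: the traveler may claim only the lunch and incidental portion
```

Under this rule, a traveler who accepted a government-paid continental breakfast and still claimed the full breakfast portion of per diem would be reimbursed twice for the same meal, which is the outcome our recommendation is intended to prevent.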
With respect to receipt and acceptance, GSA stated that it did not have the authority to encourage agencies to identify a de minimis amount, types of items that do not require receipt and acceptance, or both, or to determine how approving officials should document receipt and acceptance. With respect to accountable property, GSA did not believe that it should provide reminders to agencies that computers and similar items are sensitive and pilferable property that can easily be converted to personal use. GSA argued that what constitutes sensitive and pilferable property is defined by agencies and is not within its purview. GSA also believes that it does not have authority to remind cardholders to maintain accountability of, and notify property managers when, pilferable property is acquired with purchase cards. Finally, GSA does not believe that it can issue reminders to property managers to record, in a timely manner, pilferable property acquired with purchase cards in their property management systems. GSA suggested we modify these recommendations accordingly. With respect to receipt and acceptance, we agree that GSA alone should not issue guidance concerning agencies’ internal controls over purchase cards and related payment process. We reiterate that we did not ask GSA to take actions in isolation—instead, we recommended that GSA work with the Department of the Treasury’s Financial Management Service to provide guidance on improving internal controls while at the same time streamlining the acquisition process. After all, streamlining the acquisition process is a key objective of the purchase card program. We believe this could be achieved, in part, by requiring independent receipt and acceptance only for items above a de minimis amount. Further, governmentwide guidance in this area would not be redundant—the fact that no current guidance exists demonstrates the need for consistent policy governmentwide that all agencies can follow. 
Consistent guidance is crucial to engendering taxpayers’ confidence in the purchase card program—as we stated above, our previous audits and our current work showed that ineffective receipt and acceptance of goods and services acquired with the purchase card is a widespread, governmentwide problem. Furthermore, OMB indicated that it was extremely concerned about purchase card abuse and supported our recommendations designed to improve internal controls over the program. We believe that GSA can adopt a proactive approach and coordinate with OMB to obtain its support to overcome the perceived obstacles. In our opinion, the purchase card program will continue to expose the federal government—and the taxpayers—to fraud, waste, and abuse, unless GSA helps facilitate a governmentwide solution. Similarly, GSA argued that it did not have the authority to take the recommended actions with respect to property accountability. As with independent receipt and acceptance, our work continues to demonstrate that accountability for property acquired with purchase cards is ineffective across many agencies. For example, the purchase card program provides cardholders the ability to acquire sensitive and pilferable items directly from vendors. This process results in cardholders bypassing the normal property receipt and acceptance procedures, which increases the risk that the item will not be recorded in an agency’s list of accountable property. GSA needs to recognize this risk (and other inherent risks) created by purchase card use and proactively work with agencies to improve the accountability of property acquired with government purchase cards. We also believe that our recommendations fully take into account the extent of GSA’s authority—to that end, our recommendation called for GSA to provide agencies guidance and reminders to improve internal controls over asset accountability. 
Even though GSA already issued guidance related to the proper use of the purchase card program through online training, refresher courses, and annual conferences, GSA should go a step further and address control weaknesses related to property accountability and receipt and acceptance. GSA’s position contrasted sharply with OMB, which, in its comments on our report, expressed support for aggressive and effective controls over purchase cards. We believe that GSA can take advantage of the diverse tools already at its disposal, such as online training and annual conferences, with which GSA could easily remind cardholders and approving officials to pay particular attention to governmentwide issues, including asset accountability and independent receipt and acceptance of goods and services identified in this report. Overall, our recommendations are focused on GSA taking a proactive approach to improve the success of the purchase card program. Last year, the federal government spent nearly $18 billion using purchase cards. While the purchase card program has achieved significant savings, a program of this magnitude needs to focus on both preventive and detective controls to prevent fraud, waste, and abuse. In its response, GSA also pointed out that the new SmartPay® 2 contract should provide better management tools to agencies. However, the changes GSA identified in SmartPay® 2 were mostly related to data mining for fraud, waste, and abuse after a potentially fraudulent or improper transaction had taken place, but did not address the issues we raised in this report. As our previous work indicated, while detection can help reduce fraud, waste, and abuse, preventive controls are a more effective and less costly means to minimize fraud, waste, and abuse. The recommendations we made, to which GSA took exception, were meant to improve these up-front controls. GSA also took exception to our methodology, arguing that we improperly failed items as part of our control testing. 
GSA argued that some unauthorized purchases were still appropriate purchases. We believe that this argument is flawed. Standards for Internal Control in the Federal Government states that transactions should be authorized and executed only by persons acting within the scope of their authority. In other words, authorization is the principal means of assuring that only valid transactions are initiated or entered into and, consequently, without authorization, adequate assurance does not exist that the items purchased were for authorized purposes only. Our statistical sampling was designed to test authorization control, and the results we reported reflected items that did not pass this attribute. Such attribute testing is a widely accepted and statistically valid methodology for internal control evaluations. GSA also stated that our report did not adequately address the areas of personal responsibility and managerial oversight. We disagree. We recommended that OMB require agencies to hold cardholders financially responsible for improper and wasteful purchases, and OMB agreed to implement our recommendations; we believe that this would contribute to holding cardholders accountable to management for their actions. Further, our past reports on purchase card management have always focused on managerial oversight. However, it is not feasible within the scope of a governmentwide audit to test managerial oversight at every government agency. Consequently, we focused on providing GSA, the manager of the governmentwide purchase card program, with recommendations that could contribute to improving management oversight at the agencies. Finally, GSA disagreed with our characterization that travelers who did not reduce the per diem claimed on their travel voucher when dinners were provided may be engaging in potentially fraudulent activities. 
Because we are unable to establish that these travelers acted with the requisite knowledge and willfulness necessary to establish either a false statement under 18 U.S.C. §1001 or a false claim, we have characterized such activities as potentially fraudulent. GSA’s and OMB’s comments are reprinted in appendixes III and IV. As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute it until 30 days from its date. At that time, we will send copies of this report to the Director of OMB and the Administrator of GSA. We will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Purchase Cards: Control Weaknesses Leave DHS Highly Vulnerable to Fraudulent, Improper, and Abusive Activity. GAO-06-1117. Washington, D.C.: September 28, 2006.
Purchase Cards: Control Weaknesses Leave DHS Highly Vulnerable to Fraudulent, Improper, and Abusive Activity. GAO-06-957T. Washington, D.C.: July 19, 2006.
Lawrence Berkeley National Laboratory: Further Improvements Needed to Strengthen Controls Over the Purchase Card Program. GAO-04-987R. Washington, D.C.: August 6, 2004.
Lawrence Livermore National Laboratory: Further Improvements Needed to Strengthen Controls Over the Purchase Card Program. GAO-04-986R. Washington, D.C.: August 6, 2004.
Pacific Northwest National Laboratory: Enhancements Needed to Strengthen Controls Over the Purchase Card Program. GAO-04-988R. Washington, D.C.: August 6, 2004.
Sandia National Laboratories: Further Improvements Needed to Strengthen Controls Over the Purchase Card Program. GAO-04-989R. Washington, D.C.: August 6, 2004.
VHA Purchase Cards: Internal Controls Over the Purchase Card Program Need Improvement. GAO-04-737. Washington, D.C.: June 7, 2004.
Purchase Cards: Increased Management Oversight and Control Could Save Hundreds of Millions of Dollars. GAO-04-717T. Washington, D.C.: April 28, 2004.
Purchase Cards: Steps Taken to Improve DOD Program Management, but Actions Needed to Address Misuse. GAO-04-156. Washington, D.C.: December 2, 2003.
Forest Service Purchase Cards: Internal Control Weaknesses Resulted in Instances of Improper, Wasteful, and Questionable Purchases. GAO-03-786. Washington, D.C.: August 11, 2003.
HUD Purchase Cards: Poor Internal Controls Resulted in Improper and Questionable Purchases. GAO-03-489. Washington, D.C.: April 11, 2003.
FAA Purchase Cards: Weak Controls Resulted in Instances of Improper and Wasteful Purchases and Missing Assets. GAO-03-405. Washington, D.C.: March 21, 2003.
Purchase Cards: Control Weaknesses Leave the Air Force Vulnerable to Fraud, Waste, and Abuse. GAO-03-292. Washington, D.C.: December 20, 2002.
Purchase Cards: Navy Is Vulnerable to Fraud and Abuse but Is Taking Action to Resolve Control Weaknesses. GAO-02-1041. Washington, D.C.: September 27, 2002.
Purchase Cards: Control Weaknesses Leave Army Vulnerable to Fraud, Waste, and Abuse. GAO-02-732. Washington, D.C.: June 27, 2002.
Government Purchase Cards: Control Weaknesses Expose Agencies to Fraud and Abuse. GAO-02-676T. Washington, D.C.: May 1, 2002.
Purchase Cards: Control Weaknesses Leave Two Navy Units Vulnerable to Fraud and Abuse. GAO-02-32. Washington, D.C.: November 30, 2001.

We performed a forensic audit of executive agencies’ purchase card activity for the 15 months ending September 30, 2006. 
Specifically, we (1) determined the effectiveness of internal controls intended to minimize fraudulent, improper, and abusive transactions by testing two internal control attributes related to transactions taken from two statistical samples and (2) identified specific examples of potentially fraudulent, improper, and abusive transactions through data mining and investigations. We obtained the databases containing agency purchase and other government charge card transactions for the 12-month period ending June 30, 2006, from Bank of America, Citibank, JP Morgan Chase, Mellon Bank, and U.S. Bank. The databases contained purchase, travel, and fleet card transactions. Using information provided by the banks, we queried the databases to identify transactions specifically related to purchase cards. We performed other procedures—including reconciliation to purchase card data that the General Services Administration (GSA) published—to confirm that the data were sufficiently reliable for the purposes of our report. Our statistical sampling work covered purchase card activity at executive agencies. We define executive agencies as federal agencies that are required to follow the Federal Acquisition Regulation (FAR), including executive departments, independent establishments, and wholly owned federal government corporations as defined by the United States Code. We excluded transactions from the legislative and judicial branches, entities under treaty with the United States, and federal agencies with specific authority over their own purchase card programs. To assess compliance with key internal controls, we extracted and tested two statistical (probability) samples of 96 transactions each. The first sample consisted of transactions exceeding $50 taken from a population of over 16 million purchase card transactions totaling almost $14 billion. 
We also selected a second sample from the population of over 600,000 transactions totaling nearly $6 billion that exceeded the $2,500 micropurchase threshold. We selected this second sample because of additional acquisition requirements associated with purchases over the micropurchase threshold and the high dollar amount associated with these transactions. Specifically, while only 3 percent of governmentwide purchase card transactions from July 1, 2005, through June 30, 2006, were over the micropurchase threshold, they accounted for 44 percent of the total dollars spent during that period. With our probability sample, each transaction in the population had a nonzero probability of being included, and that probability could be computed for any transaction. Each sample element was subsequently weighted in the analysis to account statistically for all the transactions in the population, including those that were not selected. Because we followed a probability procedure based on random selection, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 10 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates from the samples of executive agency purchase card activity have sampling errors (confidence interval widths) of plus or minus 10 percentage points or less. 
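The precision statement above follows standard attribute-sampling arithmetic. As a rough illustrative sketch only (a simple unweighted proportion with a normal approximation, not GAO's actual weighted estimator; the function name and the failure count in the example are hypothetical):

```python
import math

def failure_rate_ci(sample_failures, sample_size, z=1.96):
    """Estimate a population failure rate from an attribute sample,
    with a 95 percent confidence interval (normal approximation).

    Illustrative only: the report weights each sampled transaction to
    account statistically for the full population, which a simple
    proportion like this does not capture.
    """
    p = sample_failures / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical example: 46 of 96 sampled transactions fail a control attribute.
rate, lo, hi = failure_rate_ci(46, 96)
```

With 46 failures in a sample of 96, the half-width works out to just under 10 percentage points, consistent with the report's statement that all percentage estimates carry sampling errors of plus or minus 10 percentage points or less.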
Our audit of key internal controls focused on whether agencies provided adequate documentation to substantiate that (1) purchase card transactions were properly authorized and (2) goods and services acquired with purchase cards were independently received and accepted. As part of our tests of internal controls, we reviewed applicable federal laws and regulations related to the FAR and purchase card uses. We also identified and applied the internal control principles contained in Standards for Internal Control in the Federal Government, Audit Guide: Auditing and Investigating the Internal Control of Government Purchase Card Programs, and agencies’ purchase card policies and procedures. Furthermore, for purchases exceeding the micropurchase threshold of $2,500, we tested FAR requirements that the cardholder use required vendors and promote competition by soliciting bids—or justify the departure from this requirement in writing. To determine whether a transaction was properly authorized, we reviewed documentation to ascertain if an individual other than the cardholder was involved in the approval of the purchase. To determine that proper authorization existed, we used reasonable evidence for authorization of micropurchases from $50 to $2,500, such as purchase requests from responsible officials, requisitions, e-mails, and other documents that identify an official government need, including blanket authorizations for routine purchases with subsequent approval. For purchase card transactions exceeding the micropurchase threshold of $2,500, we required prior purchase authorization, such as a contract, a requisition, or other approval document. Additionally, we looked for evidence that the cardholder used required vendors (as required by the Javits-Wagner-O’Day Act (JWOD)) and solicited quotes to promote competition (or provided evidence justifying departure from this requirement, such as an annotation justifying the use of a sole source). 
To determine whether goods or services were independently received and accepted, we reviewed supporting documentation provided by the agency. For each transaction, we compared the quantity, price, and item descriptions on the vendor invoice and shipping receipt to the purchase requisition to verify that the items received and paid for were actually the items ordered. We also determined whether evidence existed that a person other than the cardholder was involved in the receipt of the goods or services purchased. We concluded that independent receipt and acceptance existed if the vendor invoice, shipping documents, and receipt materially matched the transaction data, and if the signature or initial of someone other than the cardholder was on the sales invoice, packing slip, bill of lading, or any other shipping or receiving document indicating receipt. For statistical sample and data-mining transactions containing accountable or highly pilferable property, we performed an inventory to determine whether executive agencies maintained accountability over the physical property items obtained with government purchase cards. Because each agency had its own threshold for accountable property, we were not able to test accountable property against each agency’s threshold for this governmentwide audit. Consequently, we defined accountable property as any property item exceeding a $350 threshold and containing a serial number. We defined highly pilferable items as items that can be easily converted to personal use, such as cameras, laptops, cell phones, and iPods. We selected highly pilferable property at any price if it was easily converted to personal use. The purchase card data provided by the banks did not always contain adequate details to enable us to isolate property transactions for statistical testing. Because we were not able to take a statistical sample of these transactions, we were not able to project failure rates for accountable and pilferable property. 
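The working definitions above (accountable property as any item over a $350 threshold with a serial number; highly pilferable items at any price) can be sketched as a simple classifier. This is a hedged sketch only; the field names and the keyword list are illustrative assumptions, not the banks' actual transaction schema:

```python
def classify_property(item, accountable_threshold=350.0,
                      pilferable_keywords=("camera", "laptop", "cell phone", "ipod")):
    """Tag a purchased item for inventory testing using the report's
    working definitions: accountable property exceeds the $350 threshold
    and carries a serial number; highly pilferable items (at any price)
    are easily converted to personal use.
    """
    tags = []
    if item["price"] > accountable_threshold and item.get("serial_number"):
        tags.append("accountable")
    if any(k in item["description"].lower() for k in pilferable_keywords):
        tags.append("pilferable")
    return tags

# A $900 server with a serial number is accountable; a $250 iPod is
# pilferable regardless of price.
server = {"price": 900.0, "serial_number": "SN-1", "description": "Rack server"}
ipod = {"price": 250.0, "serial_number": None, "description": "Apple iPod"}
```

In practice, each agency sets its own accountable-property threshold; the report adopted the single $350-plus-serial-number rule only because a governmentwide audit could not test against every agency's policy.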
Consequently, our tests of property accountability were performed on a nonrepresentative selection of property that we identified when a transaction selected for statistical sampling or data mining contained accountable and pilferable property. For these property items, we identified serial numbers from supporting documentation provided by the agency and, in some cases, by contacting the vendors themselves. To minimize travel costs associated with conducting a physical inventory governmentwide, we requested that each agency provide photographs of the property items, which we compared against the serial numbers originally provided. When we were unable to obtain serial numbers from supporting documentation or from the vendors, we gave the agency the benefit of the doubt and accepted the serial numbers shown in agency-provided photographs as long as the product(s) and quantity matched. In some isolated instances, we performed the physical inventory ourselves. To identify examples of fraudulent, improper, and abusive purchase card activity, we data mined purchase card transactions from July 1, 2005, through September 30, 2006. This period contained an additional 3 months of data subsequent to the period included in our statistical samples. For data-mining purposes, we also included transactions from federal agencies that had been granted specific authority over their own purchase card programs, such as the U.S. Postal Service. In general, we analyzed purchase card data for merchant category codes and vendor names that were more likely to offer goods, services, or both that are on executive agencies’ restricted/prohibited lists, personal in nature, or of questionable government need. We identified split purchases by extracting multiple purchase transactions made by the same cardholder at the same vendor on the same day. For year-end purchases, we identified transactions from purchase card accounts where year-end activity is high compared to the rest of the year. 
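The split-purchase criterion described above (multiple purchase transactions by the same cardholder at the same vendor on the same day) lends itself to a simple grouping check. A minimal sketch with hypothetical field names, adding one refinement beyond the report's stated extraction rule: flagging only groups whose combined amount exceeds the micropurchase threshold that each individual charge stays under:

```python
from collections import defaultdict

def flag_split_purchases(transactions, threshold=2500.0):
    """Group transactions by (cardholder, vendor, date) and flag groups
    whose combined amount exceeds the micropurchase threshold even
    though no single charge does -- the pattern of a split purchase.
    """
    groups = defaultdict(list)
    for t in transactions:
        groups[(t["cardholder"], t["vendor"], t["date"])].append(t["amount"])
    flagged = []
    for key, amounts in groups.items():
        if (len(amounts) > 1
                and all(a <= threshold for a in amounts)
                and sum(amounts) > threshold):
            flagged.append(key)
    return flagged
```

A flagged group is only a lead for review, not proof of wrongdoing; as with the report's data mining generally, supporting documentation and investigative work determine whether a transaction was actually improper.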
With respect to convenience checks, we used various criteria, including identifying instances where convenience checks were written to cash or to payees not normally associated with procurement needs and where a large number of convenience checks were written to a single payee. We analyzed the banks’ databases for detailed transaction data, whenever available, for accountable property and highly pilferable items. We then requested and reviewed supporting documentation for over 550 transactions among the thousands we identified. We conducted investigative work, which included additional inquiries and data analysis, when applicable. While we identified fraudulent, improper, and abusive transactions, our work was not designed to identify, and we cannot determine, the extent of fraudulent, improper, or abusive transactions occurring in the population of governmentwide purchase card transactions. We assessed the reliability of the data provided by (1) performing various tests of required data elements, (2) reviewing financial statements of the five banks for information about the data and systems that produced them, and (3) interviewing bank officials knowledgeable about the data. In addition, we verified that totals from the databases agreed with the total purchase card activity provided by GSA and published on its Web site, in totality and for selected agencies. We determined that the data were sufficiently reliable for the purposes of our report. We conducted this performance audit from September 2006 through February 2008, in accordance with U.S. generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
We performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. Gregory D. Kutz, (202) 512-6722 or [email protected]. In addition to the contact above, Tuyet-Quan Thai, Assistant Director; James Ashley; Beverly Burke; Bruce Causseaux; Sunny Chang; Dennis Fauber; Danielle Free; Jessica Gray; Ryan Guthrie; Ken Hill; Ryan Holden; Aaron Holling; John Kelly; Delores Lee; Barbara Lewis; Andrew McIntosh; Richard McLean; Aaron Piazza; John Ryan; Barry Shillito; Chevalier Strong; Scott Wrightson; Tina Wu; and Michael Zola made key contributions to this report.
Over the past several years, GAO has issued numerous reports and testimonies on internal control breakdowns in certain individual agencies' purchase card programs. In light of these findings, GAO was asked to analyze purchase card transactions governmentwide to (1) determine whether internal control weaknesses existed in the government purchase card program and (2) if so, identify examples of fraudulent, improper, and abusive activity. GAO used statistical sampling to systematically test internal controls and data mining procedures to identify fraudulent, improper, and abusive activity. GAO's work was not designed to determine the overall extent of fraudulent, improper, or abusive transactions. Internal control weaknesses in agency purchase card programs exposed the federal government to fraud, waste, abuse, and loss of assets. When testing internal controls, GAO asked agencies to provide documentation on selected transactions to prove that the purchase of goods or services had been properly authorized and that when the good or service was delivered, an individual other than the cardholder received and signed for it. Using a statistical sample of purchase card transactions from July 1, 2005, through June 30, 2006, GAO estimated that nearly 41 percent of the transactions failed to meet either of these basic internal control standards. Using a second sample of transactions over $2,500, GAO found a similar failure rate--agencies could not demonstrate that 48 percent of these large purchases met the standard of proper authorization, independent receipt and acceptance, or both. Breakdowns in internal controls, including authorization and independent receipt and acceptance, resulted in numerous examples of fraudulent, improper, and abusive purchase card use. These examples included instances where cardholders used purchase cards to subscribe to Internet dating services, buy video iPods for personal use, and pay for lavish dinners that included top-shelf liquor. 
GAO identified numerous case studies, including one in which a cardholder used the purchase card program to embezzle over $642,000 over a period of 6 years from the Department of Agriculture's Forest Service firefighting fund. This cardholder was sentenced to 21 months in prison and ordered to pay full restitution. In addition, agencies were unable to locate 458 of the 1,058 accountable and pilferable items, totaling over $2.7 million, that GAO selected for testing. These missing items, which GAO considered to be lost or stolen, totaled over $1.8 million and included computer servers, laptop computers, iPods, and digital cameras. For example, the Department of the Army could not adequately account for 256 items making up 16 server configurations, each configuration costing nearly $100,000.
OPIC was established by the Foreign Assistance Act of 1969 (P.L. 91-175, Dec. 30, 1969) to pursue the U.S. foreign policy of mobilizing and facilitating the participation of U.S. private capital and skills in the economic and social advancement of developing countries. In carrying out this responsibility, OPIC took over the investment guarantee and promotion functions of the U.S. Agency for International Development. In the early 1970s, the U.S. approach to foreign assistance began to shift from one of providing government aid for infrastructure building and large capital projects to providing assistance to meet basic human needs. OPIC’s role was to support market-oriented private investment in various sectors. More recently, the World Bank has estimated that $200 billion would be needed annually over the next 10 years to meet the infrastructure needs of developing countries. Obtaining this level of private investment will be a major challenge given the economic and political characteristics of emerging markets and the unique risks inherent in each project. Project financing is emerging as an important component in infrastructure development. OPIC’s programs are designed to promote overseas investment and assume some of the associated risks for investors. Specifically, OPIC offers direct loans and loan guarantees to U.S.-sponsored joint ventures abroad, supports private investment funds that provide equity for projects abroad, and provides political risk insurance to U.S. investors. The political risk insurance covers investors for up to 20 years against losses due to currency inconvertibility, political violence, and expropriation. OPIC collects premiums and fees from the private sector for insurance and financing services. OPIC finance and insurance activities are backed by the full faith and credit of the U.S. 
government and are limited to a total exposure of $23 billion in fiscal year 1997. OPIC services are available in some 140 developing countries, although OPIC does not operate in some countries, largely for U.S. foreign policy reasons. Projects eligible for OPIC assistance include new investments, privatizations, and expansions or modernization of existing plants. The sectors OPIC supports include power, financial services, telecommunications, and oil and gas. To obtain OPIC support, investors must meet specific criteria, including U.S. ownership requirements. Over the years, Congress has placed various requirements on OPIC’s authority to support U.S. investment. For example, in carrying out its activities, OPIC is to administer its entire portfolio (financing, insurance, and reinsurance operations) on a self-sustaining basis and in a manner that ensures that the projects it supports are economically and financially sound; refuse support for any investment in countries that are not taking steps to adopt and implement internationally recognized worker rights; and decline participation in investments that are likely to significantly reduce U.S. domestic employment levels or pose an unreasonable or major environmental, health, or safety hazard. A changing global environment has reduced the perception of risk for the investors we spoke with in emerging markets. Economic growth and liberalization have created investment opportunities in sectors that were previously dominated by government-owned companies or were simply off limits to foreign investors. Many countries, for example, have privatized their power and telecommunication sectors and enacted laws that permit foreign ownership, resulting in dramatic increases in foreign investment. More recently, private providers of project finance and political risk insurance are increasingly available to assist investors. 
However, according to many of the firms we surveyed, markets still exist where they are unable to obtain private finance or insurance services. As a consequence, they seek public support. Public support includes direct loans, loan guarantees, and political risk insurance from OPIC and the U.S. Eximbank; foreign agencies that provide such services (often called export credit agencies); or multilateral financial institutions, such as the World Bank. The privatization of public enterprises, legal and regulatory reforms, and a more stable political and economic environment in developing countries, among other changes, have led to an increase in total private capital flows. As shown in figure 1, private capital flows to finance infrastructure projects and other private investments overseas have increased from $26 billion in 1986 to $246 billion in 1996. During the 1990s, private sector finance increased dramatically, especially to Asian and Latin American developing countries, despite setbacks associated with the Mexican peso crisis. Private flows going to infrastructure reflect these overall increases, particularly in commercial lending devoted to project finance. According to a 1996 International Finance Corporation report, these private infrastructure investments would not have seemed possible 10 years ago. Today, more and more countries are introducing competition and private participation in infrastructure ownership and management. The 34 power and telecommunications companies that we surveyed indicated that their investment decisions have been significantly influenced by the recent developments in emerging markets. In general, 30 of the companies stated that changes in the legal and regulatory environment in emerging markets have led them to seek investments in countries where they had not invested in the past. At the same time, the U.S. power market matured, and U.S. power companies began seeking investment opportunities in emerging markets. 
The rise in overseas private investment has been accompanied by increases in investment support by public providers of finance and insurance as well as increases in private insurance coverage in some markets. Three countries—Japan, the United States, and Germany—are the largest public providers of political risk insurance. (See app. II, which identifies features of the services provided by the major public providers of political risk insurance.) Lloyd’s of London, the American Insurance Group, and Exporters Insurance Corporation—three major private insurers—have recently increased their insurance coverage. Globally, public providers have increased investor coverage. According to the Berne Union, new investments insured by its members rose annually between 1991 and 1996, going from $7.1 billion to $15.2 billion. As of the end of 1996, the cumulative amount of investment covered by Berne Union members was $43.4 billion. According to data collected directly from the major public providers of political risk insurance, Japan led all public providers with $13.9 billion in cumulative exposure. OPIC was second with $13.4 billion in exposure, and the German public provider was third with $7.8 billion in exposure. These public insurers have traditionally dominated the public risk insurance market. Although the major public providers generally offer investment services in the same countries, each of the major providers’ business tends to concentrate in different markets. OPIC, for example, concentrates in Latin America, the Japanese in Asia, and the Germans in Asia/Pacific and Central and Eastern Europe. (See app. III for available information on the regional concentration of major public insurance providers.) Investors are also assisted by other Berne Union members, including the Multilateral Investment Guarantee Agency, a multilateral institution affiliated with the World Bank Group, with about $3.9 billion in exposure reported in 1997. 
The level of coverage of privately provided political risk insurance has increased considerably over the past 2 years, according to the private insurers we spoke with. Although the volume of coverage provided by private insurers is difficult to determine, a political risk insurance expert estimated that several billion dollars of private political insurance coverage was provided in 1996. According to the American Insurance Group, one of the largest private providers of political risk insurance, it increased the length of its coverage from a maximum of 3 years to a cap of 7 years in 1996. Additionally, ACE, Inc., a private insurance provider, recently entered into a reinsurance contract with the Multilateral Investment Guarantee Agency, providing up to 15 years of risk coverage on the same terms as that agency. However, according to officials of a large commercial bank and a private political risk insurer, in some risky markets private insurers are only willing to provide insurance when a public sector entity is involved in the project. A private insurer we spoke to said his company had not provided coverage in Russia and most of the other newly independent states of the former Soviet Union. Public and private sources also provide financing in developing countries. Public providers include OPIC; the International Finance Corporation, a multilateral institution affiliated with the World Bank Group; the U.S. Eximbank; and other bilateral credit agencies, such as the Japanese Export-Import bank. Private sector financing to developing countries is available through commercial banks and other private financial institutions. According to the World Bank, this source of financing has increased significantly during the 1990s, with about one-half of these resources directed toward project financing for infrastructure development. Investors’, private lenders’, and insurers’ perception of risk frames how projects are structured and financed. 
The risks assumed and the type of support sought by investors can differ by project and by sector. For example, based on the projects identified in our survey, more telecommunications projects were completed without public support and with investor self-insurance than were power projects. Power plants are costly and can take 10 years or longer to recoup the investment costs, according to an energy firm official we interviewed, making plant assets and income subject to long-term political risks. Telecommunications projects, on the other hand, may generate enough income to cover investment costs in just a few years. Investors we surveyed told us that over the past decade, several Latin American, East Asian, and East European countries have taken steps to create environments attractive to investors. Specifically, 22 of the 34 firms we spoke to were comfortable with assuming investment risks after they had been successful in a country for a period of time. For example, one telecommunications company that is developing cellular telephone operations in Hungary told us that the availability of OPIC political risk insurance was a critical factor in its initial decision to invest $200 million when privatization allowed the company to enter the market. After 2 years, however, the company reassessed the political and economic risks of this investment and decided to drop its OPIC insurance in favor of self-insurance. A company with 10 projects in Poland told us that it developed 9 cable projects with private investment after completing 1 successful project in Poland that was financed by OPIC 5 years ago when private financing was not available. In another example, a power company that has used OPIC in other high-risk markets has made acquisitions of privatized public utilities in Argentina and Chile without official support by obtaining financing from European financial markets and locally syndicated money. 
Officials of the International Finance Corporation confirmed that investors are increasingly likely to cancel International Finance Corporation loans as lower-priced private financing becomes more available in lower-risk markets. Despite these trends, some markets are still considered high risk by investors, lenders, and private insurance companies. Thus, obtaining commercial finance and insurance in these markets remains difficult, according to private firms we surveyed. Several of the power and telecommunications companies we surveyed concurred with the assessment that in several regions of the world, including Africa, Russia, the other newly independent states of the former Soviet Union, and Central America, the perception of risk remains high. Some companies told us that they are generally unable to raise the necessary financing for transactions in high-risk countries without public support. For example, four firms that we spoke to that invested in Russia or Ukraine said that private finance was unavailable for their projects. One telecommunications company with investments in Russia and Ukraine stated that without OPIC political risk insurance, it would have avoided these high-risk markets. A power company with a $150-million equity investment in El Salvador covered by OPIC political risk insurance told us that the availability of OPIC services was a key factor in the company’s decision to invest in the country. According to an official from this company, although it considers Guatemala to have great potential for the industry, private financial institutions and insurance companies still consider Guatemala to be high risk, and the company will not go forward with projects in Guatemala without OPIC or other public support. Additionally, private lenders and insurance companies we spoke with told us that they offer limited, if any, services in higher-risk markets such as the newly independent states of the former Soviet Union. 
Officials at the major international banks we visited noted that they are reluctant to lend in high-risk markets without some form of political risk insurance and that the private insurance companies often cannot provide the kind of insurance lenders need in these markets. In countries where OPIC services are not available due to U.S. foreign policy or operational reasons, such as Mexico, China, Pakistan, and Vietnam, we found that most of the U.S. investors we interviewed often seek other forms of public support to facilitate investment. As is the case in other emerging markets, investors’ decisions to invest in a project were predicated on their perceived risk. Our survey of U.S. investors showed that when U.S. firms believed they needed public investment support in a non-OPIC country, they sought investment support from the U.S. Eximbank or other foreign export credit agencies or multilateral financial institutions. Although such support facilitates the original investment, subsequent equipment and service procurements are often tied to the countries providing the support. Thus, if foreign export credit agencies provide the support, U.S. suppliers could be excluded. In some non-OPIC markets, such as Mexico, U.S. investors may not always seek public support. According to a telecommunications company official, several risk mitigation factors enabled the company to make a $1-billion investment in Mexico without political risk insurance or other official participation in the project. Mexico’s historical and geographical relationship to the United States, trends in Mexico’s economic performance, the potential for free trade, and the contractual commitment of high-level government officials and the Mexican Central Bank, along with the company’s confidence in its Mexican partner, all helped lower the company’s perception of risk. In contrast, a $644-million power project in Mexico is being undertaken by U.S. investors facilitated by a $477-million U.S. 
Eximbank loan, $28 million in U.S. Eximbank political risk insurance during construction, and a $75-million Inter-American Development Bank loan. In China, companies have entered into joint ventures with local companies that are affiliated with provincial governments, which lowers investor perception of risk. Depending on the size of the project, these companies were more likely to obtain a portion of their financing from multilateral institutions or foreign official sources. For example, one power company with several recent joint ventures in China financed smaller-sized projects (under $30 million) without public support. However, the same company is finalizing a $1.6-billion project and is obtaining support from the U.S. Eximbank and Hermes, Germany’s export credit agency. The opportunities presented by China’s large market potential may increase investors’ willingness to do business there despite the perceived risk. In other markets where OPIC is not available, the U.S. firms we surveyed have used the services of multilateral agencies or export credit agencies. One telecommunications company mitigated its risk in Pakistan by obtaining guarantees and political risk insurance from the International Finance Corporation and the Multilateral Investment Guarantee Agency. Because OPIC was not available in Vietnam, a U.S. power firm used the Asian Development Bank and Coface (the French export credit agency) to finance a $160-million power plant. U.S. investors’ use of investment support from sources other than OPIC may affect the source of procurements. Multilateral institutions generally do not tie their support to buying equipment from a particular country. However, some U.S. firms told us that they were unable to use U.S. suppliers when they obtained support from foreign export credit agencies. In testimony before Congress, an official of a large U.S.
company testified that her company utilized or planned to use German, Japanese, or French equipment for projects in China, Pakistan, and Vietnam because the company obtained investment support from German, Japanese, and French export credit agencies. Historically, OPIC has been self-sustaining, generating substantial revenues from its finance and insurance programs and its investments that together have been sufficient to cover actual losses. As of September 1996, OPIC had accumulated $2.7 billion in reserves. According to a February 1996 J.P. Morgan Securities, Inc., report, OPIC’s reserves are more than adequate to cover any losses that OPIC might experience, excluding an unprecedented disaster. OPIC’s risk management strategies, which include maintaining reserves, setting exposure limits, performing pre-approval reviews, and applying underwriting guidelines, help limit U.S. taxpayers’ exposure to undue risk and prevent project losses. In 1994, OPIC raised the maximum amount of insurance and finance coverage it offers on a given project, a step that increases the government’s exposure to loss but may not negatively affect the quality of OPIC’s portfolio. Notwithstanding OPIC’s track record, the private sector’s willingness to have greater involvement in some developing countries has created opportunities for OPIC to further reduce the risk associated with its portfolio through greater risk-sharing. Some possible options to explore include obtaining reinsurance from other providers, utilizing coinsurance, and insuring less than 90 percent of the value of each investment. Adoption of any of these options, however, should be carried out with due consideration of U.S. foreign policy objectives. Historically, OPIC has generated sufficient revenues from its insurance and finance programs to cover its operating costs and the losses associated with its portfolio.
From its inception through 1996, OPIC had about $500 million in insurance claims and recovered all but $11 million of this amount from the disposal of assets and recoveries from foreign governments. During the same period, OPIC received over $922 million in premiums from its insurance activities. OPIC’s insurance revenues, excluding interest from Treasury securities, have exceeded its gross claims payments in all but 3 fiscal years; gross payments here exclude recoveries that OPIC obtained after the claims were paid as well as liabilities incurred but not reported. According to J.P. Morgan Securities, Inc., OPIC’s finance program has operated at a small loss or close to breaking even. Although OPIC’s cash revenues from its finance program have exceeded all cash losses from loans or loan guarantees since 1984, when operating costs and loan loss provisions are included, OPIC’s finance program shows a net operating loss for each year since 1993. If income from Treasury securities were allocated for each of these years, the finance program would show a net income. Under OPIC’s finance program, its direct loans, which by statute are only available to small businesses, have experienced higher rates of delinquencies and loan losses than its loan guarantees. Between 1984 and 1996, OPIC’s average direct loan loss rate was 4.4 percent; the loss rate was at its highest, at 11.7 percent, in fiscal year 1984. In the same period, OPIC’s loan guarantee portfolio had an average loan loss rate of 0.56 percent, for a combined rate (direct loans and loan guarantees) of 0.93 percent on average outstandings. OPIC’s finance program has been subject to the Federal Credit Reform Act of 1990, which became effective in fiscal year 1992. The act requires that government agencies, including OPIC, estimate and budget for the total long-term costs of their credit programs on a present value basis.
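The present value calculation required by the Federal Credit Reform Act can be illustrated with a short sketch. The numbers below are purely illustrative (they are not OPIC data or OPIC's actual subsidy model): the subsidy cost of a direct loan is the amount disbursed minus the present value of expected repayments, net of assumed defaults, discounted at an assumed Treasury rate.

```python
# Hedged sketch of a credit-reform-style subsidy estimate for a direct loan.
# All figures are illustrative assumptions, not OPIC data.
disbursement = 10.0      # $ millions disbursed in year 0
treasury_rate = 0.06     # assumed Treasury discount rate
annual_payment = 1.3     # contractual repayment in each of years 1-10
default_haircut = 0.95   # assume 5 percent of each payment is lost to defaults

# Present value of expected cash inflows, discounted at the Treasury rate.
pv_inflows = sum(
    annual_payment * default_haircut / (1 + treasury_rate) ** year
    for year in range(1, 11)
)

# A positive subsidy cost means the program costs the government money
# on a present value basis, as the report describes for OPIC's finance program.
subsidy_cost = disbursement - pv_inflows
print(f"Estimated subsidy cost: ${subsidy_cost:.2f} million")  # prints $0.91 million
```

With these assumed inputs the loan shows a positive subsidy cost, which is how a program can report net operating losses under credit reform even when cash revenues have exceeded cash losses.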
Based on the required estimation of subsidy costs under credit reform, OPIC’s finance program will cost the government $72 million in fiscal year 1997; the program’s estimated subsidy costs totaled about $135 million between fiscal years 1992 and 1996. Historically, OPIC’s combined finance and insurance programs have been profitable and self-sustaining, even after accounting for credit reform and administrative costs. The J.P. Morgan Securities, Inc., report stated that OPIC’s finance program has operated at a small loss or close to breaking even and that much of OPIC’s profitability has come from interest earned on Treasury securities. This interest has accounted for over 60 percent of OPIC’s total revenue over the past 6 years. In fiscal year 1996, OPIC’s net income was $209 million, of which $166 million was interest on Treasury securities. From a governmentwide perspective, interest on Treasury securities held by OPIC represents transfers between two government agencies (that is, OPIC’s income from Treasury securities is a Treasury expense) that cancel each other out. From that perspective, OPIC’s net income from transactions with the private sector, that is, fees and premiums, amounted to about $43 million in fiscal year 1996. OPIC’s risk management strategy focuses on limiting OPIC’s maximum exposure to loss in any one country or sector. The purpose of risk diversification is to spread the risk of one transaction across a number of different transactions, thereby insulating OPIC from the risk of one “catastrophic event.” No single country accounts for more than 15 percent of OPIC’s portfolio, effectively protecting OPIC against the adverse consequences of catastrophic events in any one country. As shown in figure 2, OPIC’s portfolio is diversified across different regions of the world. Although OPIC seeks to diversify its portfolio, figure 2 shows that the countries of the Americas account for more than 40 percent of OPIC’s portfolio. This trend is explained by the fact that U.S.
firms choose to use OPIC support in these markets. In general, OPIC’s portfolio is consistent with U.S. foreign direct investment in emerging markets. Figure 3 displays OPIC’s portfolio diversification by investment sector. OPIC’s risk management strategy also includes pre-approval review and underwriting guidelines that take into account some of the same factors other private and multilateral insurers use in evaluating projects. For example, a risk analysis is performed as part of OPIC’s insurance approval process, and a credit analysis is included in the finance approval process. OPIC officials said they consider the same factors that any commercial bank or insurance company would concerning the economics of a project under consideration for financing or insurance. Additionally, as of September 30, 1996, OPIC had accumulated over $2.7 billion in reserves as part of its risk management strategy. These reserves were raised from fees or premiums paid by users of OPIC’s services and from the investment of these funds in Treasury securities. OPIC’s reserves as a percentage of its total outstanding exposure to claims have declined from 41 percent in fiscal year 1992 to 34 percent in fiscal year 1995, largely because of the rapid growth of OPIC’s portfolio since 1994. Despite this decline, J.P. Morgan Securities, Inc.’s, 1996 report on OPIC privatization concluded that these reserves are extremely large relative to exposure by private sector standards and compared to OPIC’s historical losses. Further, analysts at J.P. Morgan Securities, Inc., see the reserves as adequate to cover OPIC’s losses in all cases but an unprecedented disaster. In 1994, OPIC increased per project financing limits from $50 million to $200 million and insurance coverage from $100 million to $200 million per project.
Although larger transactions increase the government’s contingent liabilities, large loans are not necessarily more risky than small loans. For example, 13 of the 14 loans currently in technical default or in a non-performing status at the end of fiscal year 1996 were loans made to small businesses and ranged in value from $328,000 to $12.5 million. In addition, OPIC data show that its direct loans have historically experienced more problems than its loan guarantees, which are mostly for high-value loans to large companies. However, for insurance transactions, higher project limits may or may not raise the overall level of risk for the portfolio. On the one hand, OPIC could be subject to larger claims if a foreign government, for example, were to expropriate an insured project. On the other hand, if OPIC’s past experience with claims were to continue, the government’s potential liability may be small. Since 1971, OPIC has recovered over 98 percent of the claims it has paid. We caution that OPIC’s past experience may not reflect future performance because OPIC has new exposure to losses in the newly independent states of the former Soviet Union, where it has had no previous experience. Furthermore, some countries in the region are considered to be very risky by the private insurers and bankers we spoke with. The private sector’s willingness to have greater involvement in some emerging markets has created opportunities for OPIC to further reduce risks in its insurance program. OPIC could share the risk of losses with the private sector, which has shown an interest in emerging markets. For example, OPIC could lower the risks associated with its portfolio through reinsurance, coinsurance, and by decreasing project coverage or terms. However, OPIC’s efforts to support U.S. foreign policy objectives, which promote investment in risky markets, present challenges for OPIC when considering ways to reduce the risks associated with its insurance portfolio. 
Under the reinsurance scenario, OPIC could consider insuring part of its high- and medium-risk portfolio with private sector insurance companies at premium rates that are mutually acceptable. For example, OPIC could enter into a contract with a large private insurer that would pay a specified percentage of any claims to OPIC. Care must be taken to ensure that the private insurer is not providing support exclusively for the lower-risk transactions and that OPIC retains enough of the reinsured premiums to cover its administrative costs. According to OPIC officials, OPIC had used portfolio reinsurance by the private sector as a mechanism for managing risk and stimulating U.S. private sector interest in providing risk insurance until the mid-1980s. The Grace Commission concluded that given OPIC’s low claims experience, there was no justification for the U.S. government to pay reinsurance premiums that exceeded claims payments collected from the reinsurers. After the Grace Commission’s study of OPIC’s reinsurance practices, the Office of Management and Budget directed OPIC to stop this practice because it was not cost-effective. OPIC officials told us that OPIC is currently in discussions with the Office of Management and Budget about the feasibility of once again pursuing portfolio reinsurance. As noted earlier, private political risk insurance companies are showing greater interest in emerging markets. This trend presents OPIC with opportunities to negotiate fee or premium arrangements that it would not have been able to negotiate in the past. Another risk mitigation strategy that OPIC may use is providing more coinsurance. It could coinsure a project with other private or public insurers in order to share the associated risks and premiums. In this case, the coinsurer would provide insurance that might or might not be identical to the type provided by OPIC, permitting both parties together to provide a higher level and scope of coverage.
For example, OPIC could provide $100 million of coverage on a $200-million project, while a private entity or a number of entities could provide the other $100 million of coverage. An insurance industry official has publicly stated that OPIC could leverage its resources by inviting the private sector to provide 50 percent of the insurance required on a project. However, OPIC officials said that the private sector’s reluctance to take long-term risk in risky markets limits its opportunity to pursue coinsurance. OPIC has documented only 12 coinsured contracts with the private sector out of the 1,392 contracts it has written since 1988. A third risk mitigation strategy may be to reduce the coverage and terms of OPIC’s insurance program. OPIC currently offers standard 20-year insurance with 90 percent coverage of the value of the insured assets. One potential option would be for OPIC to insure less than 90 percent of the value of each investment. OPIC’s rationale for insuring 90 percent, rather than 100 percent, of the value of the assets is to ensure that the investor or project sponsor has an incentive to manage its assets prudently. Another option would be for OPIC to offer less than 20-year coverage. For example, rather than providing its current 20-year standard policy, OPIC could offer a standard 15-year term, as is the practice with other public insurers, and provide 20-year coverage only in certain cases. Lastly, OPIC could require that the insured hold OPIC coverage for a minimum of 3 years. These measures would lower the value of assets covered, the length of coverage, and potentially the cost of coverage. Regarding the risk-sharing option, OPIC officials said that reducing the coverage level below 90 percent would have an adverse impact on small businesses and might lead U.S. investors to seek insurance support from foreign or multilateral sources that provide 90-percent coverage.
They also noted that it might not be practical to make a project sponsor hold the coverage longer than he or she thinks is necessary or prevent him or her from seeking alternative sources of insurance. However, since a reduction in coverage is likely to come with a reduction in price, U.S. investors might continue to seek OPIC coverage. OPIC officials acknowledged that reinsurance, coinsurance, and greater risk sharing may be sound risk management options, but are not without trade-offs. For example, reinsurance may reduce OPIC’s income from premiums because OPIC would have to pay premiums to the reinsurer. Furthermore, OPIC takes on the credit risks of the reinsurer. The officials also stated that OPIC would need to maintain flexibility as to how and when to utilize these risk mitigation alternatives. The U.S. foreign policy objective of promoting private investment in developing countries encourages OPIC to take risks that the private sector may not take without public support. OPIC, the State Department, and other U.S. government officials consider OPIC to be a major tool for pursuing U.S. foreign policy goals. One major U.S. foreign policy goal is to assist Russia in its transition toward a free market economy. According to OPIC officials, by entering into OPIC’s bilateral agreement in 1992, Russia began to establish the conditions necessary for attracting private investment. Further, OPIC operates to promote development strategies that are consistent with internationally recognized worker rights. For example, OPIC ceased operations in the Republic of Korea in 1991, due to concerns over worker rights, including the arrest and imprisonment of labor union leaders. OPIC’s involvement in Russia was initially quite cautious, as it offered only coverage for expropriation and political violence. OPIC officials noted that as conditions improved in Russia, OPIC began offering coverage for currency inconvertibility risk. 
Since 1992, OPIC has accumulated a finance and insurance portfolio in Russia of $880 million and $1.6 billion, respectively. OPIC justifies its involvement in the high-risk markets of the former Soviet Union—currently 18 percent of its portfolio—by noting its central role in furthering the U.S. foreign policy objective of facilitating private investment in these markets. The private sector has tended to perceive the markets that OPIC operates in as risky, and private investors have often sought support from official sources when investing in these markets. According to OPIC officials, OPIC’s goal is to support deals that would not be made without its support, and OPIC as an agency of the U.S. government has access to risk mitigation tools, including advocacy and intervention to avert claims, that are not available to the private sector. This implies that OPIC would seek transactions that the private sector believes would be too risky without public support. If OPIC is to continue pursuing its mission, its portfolio will always be considered more risky than the portfolios of private sector insurers. OPIC’s authorizing legislation makes no provision for a phaseout process in the event the agency is closed. Any legislation shutting down OPIC should make clear whether OPIC’s portfolio should be moved to another agency or managed by a temporary organization until the portfolio expires. It could take as long as 20 years for OPIC’s portfolio to expire because many of OPIC’s insurance contracts run for 20 years, and OPIC had more than $5 billion in such contracts with 19-20 years remaining as of the end of fiscal year 1996. According to OPIC’s projections, about one-third of the portfolio would remain after 10 years. If the portfolio risk diminishes, Congress’ option to dispose of these assets is more viable. 
If Congress decides not to reauthorize OPIC, any shutdown legislation would need to address whether OPIC would continue to manage the portfolio during a phaseout period or whether the portfolio should be moved to another agency. If the portfolio is moved to another agency, Congress would need to decide if any OPIC employees would be moved with it to ensure an adequate and knowledgeable work force. According to Office of Management and Budget officials responsible for overseeing OPIC and related agencies, OPIC staff may be best suited to managing the portfolio because they are familiar with the portfolio. According to OPIC and private sector financial officials, OPIC’s portfolio could suffer losses if it is not properly managed, thereby increasing the cost of closing the agency. For example, a successor entity would need to monitor the construction of power and other projects, as well as political developments in host countries and the portfolio’s financial performance, to help prevent claims and/or defaults. Additionally, such an entity would need to perform OPIC’s administrative and legislatively mandated functions, including fee collection, repayment, environmental oversight, compliance with worker rights, and other monitoring to ensure that clients comply with their contractual agreements. According to OPIC officials, if finance projects encountered payment difficulties, an entity would also be needed to restructure the project and make collections where necessary. If a decision were made to move OPIC’s portfolio to another agency, the U.S. Eximbank would be the closest fit, according to Office of Management and Budget officials who are also responsible for overseeing the U.S. Eximbank. U.S. Eximbank officials also stated that their agency has many of the appropriate skills to do the job. The Eximbank officials cautioned, however, that their employees would not be familiar with the various monitoring requirements that OPIC carries out. 
They noted that OPIC is a foreign policy agency that provides development assistance, while the U.S. Eximbank is an export promotion agency whose emphasis is on expanding U.S. exports. The U.S. Eximbank’s lack of familiarity with OPIC’s monitoring requirements would be less of an issue if OPIC staff were transferred to the U.S. Eximbank. Officials from three other agencies with responsibilities for overseeing loans or insurance obligations, or for encouraging and tracking U.S. investment in key overseas markets, all said that their agencies lack the business skills and resources necessary to manage OPIC’s portfolio. These agencies include the Departments of Commerce, State, and the Treasury. Office of Management and Budget officials concurred that their agency also lacks these skills and resources. In addition, officials from the Agency for International Development, the agency from which OPIC was created, said that their agency would not be well suited to managing OPIC’s portfolio because the agency (1) does not provide political risk insurance, (2) provides mostly grants, and (3) lends primarily to public entities (OPIC lends to the private sector). Regardless of whether OPIC’s portfolio is turned over to another agency, certain Office of Personnel Management rules would affect OPIC employees’ entitlements as they are separated from government service. These entitlements may include (1) retirement or severance pay, (2) unemployment compensation, (3) the dollar equivalent of unused annual leave, and (4) settlement from any pending equal employment opportunity or other labor-related litigation. According to officials of the Office of Personnel Management, if OPIC’s portfolio is moved to another agency, Congress would have to decide if any OPIC employees are to be moved with the portfolio.
These officials said that reassignment of OPIC employees to another agency, under current Office of Personnel Management rules, would be temporary, lasting only until OPIC’s portfolio expires or the government disposes of the portfolio. If OPIC’s portfolio is moved to another agency, other issues might be considered for easing the transition. For example, a timetable could be established for transferring OPIC functions to the designated agency. In the absence of specific congressional direction, General Services Administration regulations governing the disposal of OPIC’s property, including the transfer of office furniture and equipment, would apply. In addition, OPIC said it has a commercial real estate lease that runs to June 30, 2007. A phaseout of OPIC would require ceasing new business as of a certain date and could take as long as 20 years. OPIC’s investment funds run for 10 years; its loans and guarantees, a maximum of 15 years; and its insurance policies, a maximum of 20 years. According to OPIC estimates, which assume a 10-percent annual drop in the declining remainder of the insurance portfolio due to both cancellations and policies ending at term, the agency’s potential exposure of $23 billion for all services would fall by 64 percent, to $8.2 billion, after 10 years. During the same period, OPIC estimates that its current staff of 200 would decrease by more than 70 percent to about 60 people as the portfolio diminishes. We compared OPIC’s assumptions concerning insurance cancellations and contracts ending at term to historical data and found these assumptions to be generally consistent with these data. According to OPIC, just under 10 percent of the original exposure would remain in the 20th year, with less than 8 percent of the staff needed to monitor it. The decline in OPIC’s portfolio is shown graphically in figure 4. The insurance portion of the portfolio is by far the largest, currently at just under $16 billion.
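The arithmetic behind the assumed 10-percent annual decline can be checked with a short sketch. This is a rough consistency check using the report's round dollar figures, not OPIC's actual projection model (which also accounts for loans and investment funds expiring on their own schedules).

```python
# Rough check of the phaseout projection: a constant 10-percent annual drop
# in the remaining insurance portfolio. Dollar figures are the report's
# rounded values; this is not OPIC's actual model.
insurance_start = 16.0       # $ billions, insurance portion at phaseout start

remaining_10yr = 0.9 ** 10   # fraction of exposure left after 10 years
remaining_20yr = 0.9 ** 20   # fraction left after 20 years

# About 35 percent remains after 10 years, broadly consistent with the
# reported 64-percent drop; roughly $1.9 billion of insurance remains in
# year 20, in the same ballpark as the report's "just over $2 billion."
print(f"Share remaining after 10 years: {remaining_10yr:.0%}")
print(f"Insurance remaining in year 20: ${insurance_start * remaining_20yr:.1f} billion")
```

The compounding explains why the portfolio, and the staff needed to monitor it, shrinks quickly in the early years of a phaseout and then tails off slowly.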
The insurance portion is about 3 times the value of the finance portion and almost 8 times that of the investment fund portion. In the 20th year, just the insurance portion would be left, having dropped by 86 percent to just over $2 billion (see fig. 4). Although the government may wish to divest OPIC’s portfolio before its expiration by selling it to the private sector, such a decision would need to account for the relative riskiness of OPIC’s portfolio and any discount a disposal would necessitate. According to a recent study, a privatization of OPIC’s current assets could only be accomplished at a discount. As OPIC’s portfolio matures during a phaseout, external factors may affect the riskiness of the portfolio, either negatively or positively, and thus any potential privatization discount. If existing economic and political trends continue in the markets where OPIC currently operates, OPIC’s portfolio may become less risky. With each year that passes, the length of the government’s obligation decreases, and both the insured and the government become more familiar with the risks and issues inherent in a given transaction. As stated earlier, OPIC’s clients tend to cancel their insurance coverage after a few years as they feel more comfortable with the political risks. On the other hand, OPIC’s portfolio may experience greater risk. In general, long-term transactions are riskier than similarly situated short-term loans, guarantees, or insurance transactions. Also, according to OPIC officials, cancellations are more likely to occur in the lower-risk segment of OPIC’s portfolio, thus making the portfolio riskier in the future than it is today. Either situation (less risk in the portfolio or greater risk) may occur. Regardless of the risk characteristics of the portfolio over time, OPIC’s portfolio will decrease. As the portfolio decreases, the amount of the discount will decrease for a given risk in the portfolio.
If the quality of the portfolio improves as a result of improvements in OPIC markets, then the rate of discount will likely be much lower or even disappear. If, on the other hand, the portfolio becomes more risky over time, the rate of discount is likely to increase. Since the condition of this portfolio a decade or more from now is unclear, the government has the option of revisiting its choice to sell the portfolio if the risk is reduced. OPIC provided written comments on a draft of this report. OPIC generally agreed with the information and analyses in the report. In commenting on the draft, OPIC provided additional information to further clarify its view of (1) the role of the private sector, (2) risk mitigation opportunities, and (3) phaseout issues. OPIC also orally provided technical corrections and updated information that were incorporated throughout the report where appropriate. OPIC’s comments are reprinted in appendix VI, along with our evaluation of them. To identify trends in private sector investment in developing markets and the public sector’s role in these markets, we focused on various characteristics. Specifically, we obtained and analyzed World Bank data on the extent and types of private capital flows going to finance infrastructure and the trend of these flows over time. To identify the recent developments in the volume and types of investment support provided by the public and private sectors for investments overseas, we obtained and compared information from (1) five large private providers of political risk insurance; (2) the largest public providers of investment support representing France, Germany, Japan, Canada, Italy, the United Kingdom, and the United States (see app. II); and (3) the Multilateral Investment Guarantee Agency. We also discussed with the Berne Union the nature of political risk insurance and the role and capability of the public and private sectors.
We obtained total insurance exposure data directly from the Group of Seven (G-7) insurance providers. Regarding financing, we obtained information from major financial institutions that provide financing to U.S. investors, including the Chase Manhattan Bank and Citibank, and the International Finance Corporation. We also discussed the international finance environment with Standard & Poor’s Ratings Services and Moody’s Investors Service, two large financial rating agencies. An important component of our analysis of private sector investment was the identification of the kinds of investment services U.S. investors have utilized in various developing countries or economies in transition as well as countries in which OPIC is not open for business (for example, China and Mexico). To obtain this information, we surveyed a judgmental sample of 34 U.S. investors that had made major investments within the last 5 years in the power and telecommunications sectors. We selected the power and telecommunications sectors because they (1) are listed as the major sectors of growth in emerging markets and (2) represented two of the four largest sectors supported by OPIC. Since these sectors have considerably different resource requirements and risks, their inclusion allowed us to make several important distinctions regarding the investment environments in which they operate. To survey firms in the power and telecommunications sectors operating overseas, we (1) reviewed relevant literature including the Directory of American Firms Operating in Foreign Countries and U.S. Securities and Exchange Commission data, (2) contacted appropriate Department of Commerce officials, (3) reviewed OPIC’s annual reports that list overseas investors, and (4) asked the firms contacted to identify their major competitors. We attempted to contact the 54 firms identified and successfully interviewed 34.
We asked each firm to identify the projects it was involved in over the past 5 years, how these projects were structured, their views on the nature of the risks involved, and how it mitigated the risks. To determine OPIC’s risk management strategy and the steps that OPIC may take, if it is reauthorized, to further reduce portfolio risks while pursuing its objectives, we obtained and reviewed documents on OPIC’s risk assessment policies and financial reports that detailed the condition of OPIC’s portfolio. We also gathered and reviewed information on the risk assessment policies of two World Bank institutions (the Multilateral Investment Guarantee Agency and the International Finance Corporation), organizations that have programs comparable to OPIC’s insurance and finance programs. To support our analysis of these policies, we interviewed OPIC, Treasury, and State Department officials. Furthermore, we interviewed officers of private banks, investment institutions, and political risk insurance companies about steps that OPIC could pursue in reducing the risks associated with its portfolio. To determine the issues that would need to be addressed and the time it would take to phase out OPIC if it is not reauthorized, we reviewed laws and regulations and discussed applicable policies and practices with officials from the Office of Personnel Management, the General Services Administration, and the Office of Management and Budget. In addition, we reviewed our past work on the closure of the Resolution Trust Corporation and interviewed the Federal Deposit Insurance Corporation official responsible for managing the phaseout of the Resolution Trust Corporation. 
To determine how long it would take for OPIC’s obligations to expire, we obtained documents from OPIC on (1) its current financing and insurance obligations, (2) its insurance policy cancellation rates, and (3) its projections on the duration of its existing portfolio and the resources it would require to manage the portfolio. To assess the reasonableness of these projections, we reestimated OPIC’s analysis using a higher projected phaseout rate. With regard to which agency might be best suited to manage OPIC’s existing portfolio until the obligations expire, we interviewed officials from the Agency for International Development, the Commerce Department, the U.S. Eximbank, the National Economic Council, the Office of Management and Budget, OPIC, the State Department, and the Treasury Department. We also obtained Office of Personnel Management documents showing job classifications at OPIC and two other agencies—the Agency for International Development and the U.S. Eximbank. We conducted our review from January 1997 to July 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to appropriate congressional committees and the President and Chief Executive Officer of the Overseas Private Investment Corporation. We will also make copies available to other interested parties upon request. This review was done under the direction of JayEtta Z. Hecker, Associate Director. If you or your staff have any questions concerning this report, please contact Ms. Hecker at (202) 512-8984. Major contributors to this report are listed in appendix VII.
AES Corporation
Coastal Power Energy
CalEnergy Company, Inc.
CMS Energy Corporation
Constellation Power, Inc.
Dominion Resources, Inc.
Duke Energy International, Inc.
Enron International
GE Capital Corporation
GPU International, Inc.
Houston Industries Energy, Inc.
Edison Mission Energy
Ogden Energy Group, Inc.
TECO Power Services Corporation
El Paso Energy International
The Wing Group Ltd. Co.
Adelphia Communications International
African Communications Group, Inc.
Ameritech Corporation
Andrew Corporation
BellSouth Corporation
Comcast Corporation
Chase Enterprises
D & E Communications, Inc.
GTE Service Corporation
Hungarian Telephone & Cable Corporation
Lucent Technologies, Inc.
MCT of Russia, L.P.
Millicom International Cellular, S.A.
Motorola, Inc.
Radiomovil Digital Americas, Inc.
Telecel International, Inc.
SBC Communications Inc.
US WEST International Holdings, Inc.
France. Eligible investors: Legal entities registered in France. Country restrictions: No restrictions. Coverage limit: No limit. Term: 15 years. Risks covered: Expropriation, war, inconvertibility, breach of government commitments.
Germany. Eligible investors: Domestic German entities. Country restrictions: No restrictions. Coverage limit: No limit. Term: 15 years. Risks covered: Expropriation, war, inconvertibility, breach of government contracts.
Japan. Eligible investors: Persons and entities existing in Japan. Country restrictions: No restrictions. Coverage limit: $500 million per project. Term: 15 years. Risks covered: Expropriation, war, inconvertibility, bankruptcy after 2 years of operation.
Canada. Eligible investors: Persons or business beneficial to Canada. Country restrictions: No restrictions. Coverage limit: No limit. Term: 15 years. Risks covered: Expropriation, war, inconvertibility.
Italy. Eligible investors: Persons or entities domiciled in Italy. Country restrictions: Developing countries only. Coverage limit: No limit. Term: 15 years. Risks covered: Expropriation, war, inconvertibility, natural catastrophe.
United Kingdom. Eligible investors: Persons and entities carrying on business in United Kingdom. Country restrictions: No restrictions. Coverage limit: No limit. Term: 15 years extendable to 20. Risks covered: Expropriation, war, inconvertibility, breach of contract by host government.
United States. Eligible investors: U.S. citizens and entities and foreign entities 95% owned by U.S. interests. Country restrictions: Developing countries only. Coverage limit: $200 million per project. Term: 20 years. Risks covered: Expropriation, war, inconvertibility, breach of contract by host government.
OPIC data as of September 30, 1996. The following are GAO’s comments on OPIC’s letter dated August 6, 1997. 1. The points that OPIC highlights are their own interpretation of our analyses.
Several points discussed by OPIC, such as the health of its reserves, the filling of a commercial void, and the impact of its activities on U.S. employment, are not our specific conclusions. Rather, the report provides factual information and our analysis of the trends in private sector investment, the public sector’s role in emerging markets, OPIC’s portfolio and risk management strategy, and issues to be addressed if OPIC were not reauthorized. 2. Information in the report on OPIC’s risk management strategy is not restricted to a discussion of how OPIC limits exposure in any one country or sector. The report also includes a discussion of OPIC’s pre-approval review process and underwriting guidelines. Appendixes IV and V contain information on the application, approval, and monitoring processes for the insurance and finance programs. 3. Although the report notes that the larger finance projects tend to be less risky than smaller projects, we do not agree that the same is necessarily true for OPIC’s insurance projects. Financing involves commercial risks that well-capitalized and experienced private participants have greater influence in mitigating. However, political risk insurance only covers actions taken by governments—actions that are less within the control of the private sector. 4. The report discusses only the recent growth in privately provided political risk insurance. The extent to which the private market capacity for political risk insurance would be affected by changes in demand for property/casualty coverage is not certain. 5. We recognize that OPIC has in some cases pursued the risk mitigation options discussed in the report. However, we believe that the private sector’s current high level of interest in investing in emerging markets has created opportunities for OPIC to further reduce portfolio risk through greater use of the options presented. 6.
The report provides OPIC data that show 18 (now 12) cases since 1988 in which OPIC coinsured with the private sector. Although there may be other cases in which the private sector provided insurance to investors also insured by OPIC, this information is more anecdotal and these instances would not represent cases in which OPIC formally sought to coinsure with the private sector. 7. We revised the report to reflect that any loss that was covered by a drawdown in reserves (that are comprised of Treasury securities) would become a budgetary outlay. However, we do not agree that such an outlay should then be compared to the offsetting collections that OPIC receives. If it were necessary for OPIC to redeem Treasury securities, then it would need more cash to cover losses than it would be taking in. 8. The report states that under the Federal Credit Reform Act of 1990, agencies are to estimate and budget for long-term costs of their credit programs on a present value basis. Subsidy costs arise when the estimated program disbursements by the government exceed the estimated payments to the government on a present value basis. The subsidy cost data in our report are based on OPIC’s reported estimates. In order to show lower subsidy costs, the costs must be reestimated, with key factors such as the credit risk of the borrowing country showing improvement. OPIC identified $72 million in subsidy costs for fiscal year 1997 programs. With regard to OPIC’s statement about its interest earnings, only those earnings properly allocable to its credit program are relevant to the discussion of its credit subsidy estimates. Under credit reform requirements, interest earned on credit-related reserves is required to be included in estimating the subsidy cost. 9. We modified the report to include this information. 10. We modified the report to include this information. Tom Zingale
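The present-value comparison described in comment 8 can be sketched numerically. The cash flows and discount rate below are invented for illustration and are not OPIC's figures: the point is simply that a subsidy cost arises when discounted government disbursements exceed discounted payments back to the government.

```python
# Hypothetical illustration of a credit-subsidy estimate on a present
# value basis, per the Federal Credit Reform Act framework described
# above. The cash flows and discount rate are invented for illustration.

def present_value(cash_flows, rate):
    """Discount a list of year-indexed cash flows (year 1, 2, ...) to today."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

disbursements = [100.0, 0.0, 0.0, 0.0, 0.0]  # loan disbursed in year 1
collections = [8.0, 8.0, 8.0, 8.0, 78.0]     # interest, then partial repayment

rate = 0.05
subsidy_cost = present_value(disbursements, rate) - present_value(collections, rate)
# A positive result is a subsidy cost; zero or negative means the program
# is estimated to break even or better on a present value basis.
```

Reestimating the subsidy, as the comment notes, amounts to rerunning this computation with updated assumptions, such as an improved credit risk for the borrowing country, which raises expected collections and shrinks the result.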
Pursuant to a congressional request, GAO reviewed: (1) trends in private sector investment in developing markets and the role of the public sector in these markets; (2) the Overseas Private Investment Corporation's (OPIC) risk management strategy and the steps that OPIC may take, if it is reauthorized, to further reduce portfolio risks while pursuing U.S. foreign policy objectives; and (3) the issues to be addressed and the time it would take to phase out OPIC if it is not reauthorized. GAO noted that: (1) improvements in economic and political conditions in many developing countries have led to a reduction in investors' perception of risk and a dramatic increase in private investment in these markets since the late 1980s; (2) however, according to most of the 34 firms GAO surveyed, risky markets still exist where the private sector stated they are reluctant to invest or operate without public guarantees or insurance; (3) in high-risk markets, U.S. investors GAO spoke with have sought public finance or insurance from OPIC, the Export-Import Bank, or other public institutions; (4) in risky markets where OPIC services are not available, U.S. investors tended to use other public support; (5) if foreign export credit agencies provide the support, U.S. suppliers could be excluded; (6) OPIC has historically been self-sustaining by generating revenues from its insurance and finance programs to cover actual losses; (7) OPIC's risk mitigation strategy includes maintaining reserves, limiting its exposure in any one country, requiring pre-approval reviews, and establishing underwriting guidelines; (8) nonetheless, the private sector's willingness to have greater involvement in some emerging markets has created opportunities for OPIC to further reduce portfolio risks, while continuing to pursue U.S. 
foreign policy objectives; (9) possible ways for OPIC to minimize the risks associated with its insurance portfolio include making greater use of reinsurance from, or coinsurance with, other insurance providers, insuring less than 90 percent of the value of each investment, and offering insurance at less than a 20-year term; (10) while OPIC officials agree that these are good risk mitigation techniques, they cautioned that these strategies should be employed on a case-by-case basis so as to enable OPIC to continue to meet U.S. foreign policy objectives and the needs of its customers; (11) if Congress decides not to reauthorize OPIC, an orderly phaseout of the agency would require specific legislative action; (12) an important issue that would need to be addressed is who would manage the existing portfolio; (13) also, given that OPIC issues insurance policies with 20-year coverage, it could take up to 20 years for OPIC's existing obligations to expire; (14) the government has the option to sell OPIC's portfolio to the private sector before its expiration; (15) however, a recent study suggests that disposal of OPIC's assets could only be accomplished at a discounted price; and (16) if the risk of the remaining portfolio decreases over time, opportunities for asset disposal may arise.
North Korea is an isolated society with a centrally planned economy and a centrally controlled political system. The governing regime assumed power after World War II. Successive generations of a single family have ruled North Korea since its founding. According to the CIA World Factbook, under dictator Kim Jong Un, the grandson of regime founder Kim Il Sung, the regime currently controls all aspects of political life, including the legislative, judicial, and military structures. According to a Library of Congress country study, the North Korean leadership rewards members of the primary political party (the Korean Workers’ Party) and the military establishment with housing, food, education, and access to goods. Much of the population, however, lives in poverty, with limited education, travel restrictions, a poor health care system, no open religious institutions or spiritual teaching, and few basic human rights. North Korea exports commodities such as minerals, metallurgical products, textiles, and agricultural and fishery products. According to the CIA World Factbook, the North Korean economy is one of the world’s least open economies. The CIA World Factbook reported that as of 2012, its main export partners were China and South Korea. China is North Korea’s closest ally and accounts for almost two-thirds of its trade. North Korea has engaged in a number of acts that have threatened the security of the United States and other UN member states. Since 2006, North Korea has conducted a number of missile launches and detonated three nuclear explosive devices; torpedoed a South Korean naval vessel, the Cheonan, killing 46 crew members; and launched a disruptive cyberattack against a U.S. company, Sony Pictures Entertainment. In response to these actions, the United States and the UN imposed sanctions specific to North Korea from 2006 through 2015 (see fig. 1).
The United States has imposed sanctions on North Korea and North Korean persons under EOs and a number of laws and regulations. EOs are issued by the President and generally direct the executive branch to either carry out actions or clarify and further existing laws passed by Congress. Administrations have invoked authority provided by the International Emergency Economic Powers Act, as well as other authorities, to issue EOs specific to North Korea. The UN Security Council issued five UNSCRs imposing sanctions specific to North Korea during this time period. (See fig. 1.) U.S. EOs specific to North Korea and the Iran, North Korea, and Syria Nonproliferation Act (INKSNA) authorize the United States to impose sanctions targeting activities that include weapons of mass destruction proliferation, trade in arms and related materiel, and transferring luxury goods. Sanctions that can be imposed pursuant to the EOs and INKSNA include blocking property and banning U.S. government procurement. UNSCRs target similar activities, and under the UN Charter, all 193 UN member states are required to implement sanctions imposed by the UNSCRs, such as travel bans, on North Korean and other persons involved in these activities. U.S. EOs specific to North Korea and INKSNA authorize the United States to impose sanctions targeting activities that include involvement in North Korean WMD and conventional arms proliferation and transferring luxury goods to North Korea. The most recent EO targets a person’s status as opposed to a person’s conduct. The EO targets a person’s status by authorizing the imposition of sanctions on persons determined, for example, to be agencies, instrumentalities, or controlled entities of the government of North Korea or the Workers’ Party of Korea. Table 1 provides examples of the activities and statuses targeted by EOs and INKSNA. In addition, EO 13466 prohibits activities such as the registration of a vessel in North Korea by a U.S. person, and EO 13570 generally prohibits a U.S. person from importing goods, services, or technology from North Korea. Sanctions that can be imposed pursuant to the EOs and law listed above include blocking property and interests in property in the United States, and banning U.S. government procurement and assistance. The EOs listed in table 1 create a framework within which the executive branch can decide when to impose sanctions against specific persons within the categories established by the EOs, according to Treasury and State officials. Treasury officials informed us that the process of determining whether to impose sanctions on one or more persons is (1) the result of a process wholly under the executive branch, and (2) driven by policy directives that prioritize issues of concern for the agencies. Treasury officials also noted that while Treasury does not consider itself to have discretion on whether or not to implement an EO, there is discretion at the interagency level regarding what sanctions programs should be focused on for individual designations, and how resources should be allocated among all relevant programs. INKSNA requires the President to provide reports every 6 months to two congressional committees that identify every foreign person with respect to whom there is credible information indicating that the person, on or after the dates specified in the act, has transferred to, or acquired from, North Korea, Syria, or Iran certain items listed by multilateral export control regimes, or certain nonlisted items that could materially contribute to weapons of mass destruction systems or cruise or ballistic missile systems.
It does not require the President to sanction those persons, although it does require him or her to notify the congressional committees if he or she opts not to impose sanctions, including a written justification that supports the President’s decision not to exercise this authority. The President has delegated INKSNA authorities to the Secretary of State. State refers to section 73 of the Arms Export Control Act and section 11B of the Export Administration Act collectively as the Missile Sanctions laws. See 22 U.S.C. § 2797b and 50 U.S.C. App. § 2410b. Treasury cited a Macao bank’s (Banco Delta Asia SARL) facilitation of financial transactions conducted by North Korean–related accounts that related to money laundering and illicit activities, including trade in counterfeit U.S. currency, counterfeit cigarettes, and narcotics, as grounds for its action. Five UNSCRs target North Korean–related activities that include WMD proliferation, cash transfers, and trade in luxury goods to North Korea (see table 2). Under the UN Charter, all 193 UN member states are required to implement sanctions in the UNSCRs that include imposing an arms embargo, prohibiting travel, and freezing assets. State officials told us that UN sanctions can amplify U.S. development of bilateral sanctions specific to North Korea, and that the United States has imposed sanctions beyond those required by UNSCRs. According to State officials, the United States has implemented the sanctions within the UNSCRs, pursuant to authorities including the United Nations Participation Act of 1945. U.S. officials informed GAO that difficulty in obtaining information on North Korean persons has hindered the U.S. interagency process for imposing sanctions, and that a recent EO has provided them with greater flexibility to sanction persons based on their status as government or party officials rather than evidence of specific conduct.
EO 13687 allows State and Treasury to sanction persons because they are officials of the North Korean government or of the Workers’ Party of Korea, instead of based on specific conduct. State and Treasury impose sanctions following an interagency process that involves reviewing intelligence and other information to develop evidence needed to meet standards set by U.S. laws and EOs, vetting possible actions within the U.S. government, determining whether and when to sanction, and announcing sanctions decisions. Since 2006, the United States has imposed sanctions on 86 North Korean persons, including 13 North Korean government officials and entities, under EO 13687. Commerce is the U.S. government agency that controls exports by issuing licenses for shipping goods that are not prohibited to North Korea. Agency officials cited obtaining sufficient information about North Korean persons as their greatest challenge in making sanctions determinations. Most North Korea–specific sanctions authorities require a determination that a person engaged in a specific activity. Officials said that for sanctions to be effective, financial institutions need a minimum set of identifying information so that they can ensure they are blocking the right person. However, officials said that gathering information on the activities of North Korean persons and personal identifying information can be difficult because of the nature of North Korean society, whose citizens are tightly controlled by the government. Without sufficient information, the United States could mistakenly designate and therefore block the assets of the wrong person, particularly one with a common surname. State officials also cited obtaining sufficient information as a challenge to North Korean sanctions implementation, especially if the sanctions authority requires information indicating that the foreign person knowingly engaged in sanctionable activities.
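The point about identifying information can be made concrete with a minimal screening sketch. The list entry, field names, and matching rules below are invented for illustration; actual sanctions-screening systems used by financial institutions are far more sophisticated. The sketch shows why a name match alone is ambiguous for common surnames, while a corroborating identifier such as a date of birth or passport number narrows the match.

```python
# Minimal illustration of why designations need identifying fields beyond
# a name. The entry and matching logic are hypothetical, not the actual
# SDN list or any agency's screening rules.

SANCTIONS_LIST = [
    {"name": "KIM, Example", "dob": "1960-01-01", "passport": "PD123456"},
]

def screen(customer, entries):
    """Return name matches, flagging whether an identifier corroborates each."""
    matches = []
    for entry in entries:
        if customer["name"].lower() != entry["name"].lower():
            continue
        # A name hit alone is ambiguous for common surnames; check for at
        # least one corroborating identifier before trusting the match.
        corroborated = any(
            customer.get(field) == entry.get(field)
            for field in ("dob", "passport")
            if customer.get(field)
        )
        matches.append({"entry": entry, "corroborated": corroborated})
    return matches

# Same name with a matching date of birth: a corroborated hit.
hit = screen({"name": "KIM, Example", "dob": "1960-01-01"}, SANCTIONS_LIST)
# Same name, different date of birth: flagged but not corroborated.
near_miss = screen({"name": "KIM, Example", "dob": "1975-06-30"}, SANCTIONS_LIST)
```

Without the corroborating fields, a bank running the name-only check could block the assets of an unrelated person who happens to share a surname, which is the risk the officials describe above.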
Officials in both agencies also said that they face challenges in obtaining information that can be made public in the Federal Register.

Sony Cyberattacks
On November 24, 2014, Sony Pictures Entertainment experienced a cyberattack that disabled its information technology, destroyed data, and released internal e-mails. Sony also received e-mails threatening terrorist attacks on theaters scheduled to show a film, The Interview, which depicted the assassination of Kim Jong Un. The Federal Bureau of Investigation and the Director of National Intelligence attributed these cyberattacks to the North Korean government.

State and Treasury officials informed us that EO 13687, issued on January 2, 2015, gives them greater flexibility to impose sanctions despite the lack of complete information about persons’ activities. Treasury officials noted that sanctions under EO 13687 are status-based rather than conduct-based, which means that the EO allows agencies to sanction persons, for example, based on their status as North Korean government officials, rather than on their engagement in specific activities. EO 13687 allows Treasury to designate persons based solely on their status as officials, agencies, or controlled entities of the North Korean government, and to designate other persons acting on their behalf or providing them with material support. According to Treasury, EO 13687 represents a significant broadening of Treasury’s authority to increase financial pressure on the North Korean government and to further isolate North Korea from the international financial system. The White House issued the EO in response to North Korean cyberattacks on Sony Pictures Entertainment in November and December 2014. Treasury officials also noted that although the new authority allows them to target any North Korean government official, they continue to target activities prohibited under current sanctions, such as WMD proliferation.
Treasury and State officials informed us that they have established processes to determine when and if the United States should impose sanctions related to North Korea. The processes involve reviewing evidence to identify sanctions targets, ensuring that they have adequate evidence to sanction, and imposing and publicizing the sanctions. (See fig. 2.) For North Korea–specific sanctions that fall under Treasury’s jurisdiction, Treasury officials said they investigate and collaborate with other U.S. government agencies to identify specific targets. The Office of Foreign Assets Control investigates the target’s activities and communicates with Treasury and other agency officials about the potential target. Where appropriate, Treasury will notify foreign authorities of the activities of the targeted person and seek commitment to stop the activity. State’s Bureau of International Security and Nonproliferation’s Office of Counterproliferation Initiatives leads an interagency process to evaluate whether a person’s activities are potentially sanctionable under EO 13382, which targets proliferation of weapons of mass destruction. The Office of Missile, Biological and Chemical Nonproliferation, also under the Bureau of International Security and Nonproliferation, leads the process for INKSNA, EO 12938, and the Missile Sanctions laws. The process begins with four State-led interagency working groups responsible for coordinating nonproliferation efforts involving (1) chemical and biological weapons, (2) missile technology, (3) nuclear technology, and (4) advanced conventional weapons. Each working group is chaired by a State official and consists of representatives from several U.S. government departments and agencies such as the Departments of Defense, Commerce, Homeland Security, Treasury, and Energy; the Federal Bureau of Investigation; and various intelligence community agencies. 
State officials said that the working groups regularly evaluate reports concerning proliferation-related activities and determine an appropriate response to impede activities of concern. As part of this review process, these groups identify transactions that may be sanctionable under various nonproliferation sanction authorities, including those related to North Korea. According to State and other working group officials, the interagency review process relies on criteria defined in the laws and EOs when assessing a transaction for the potential application of those sanctions. State officials also said the groups do not pursue sanctions for a target if they determine available information does not provide a basis for applying sanctions or is not legally sufficient. Officials in each agency said that they follow an evidence-based process to gain inter- and intra-agency consensus on imposing sanctions. At Treasury, Office of Foreign Assets Control officials said that they create an evidentiary record that contains the information they have gathered on a targeted person to present sufficient evidence that the person has engaged in sanctionable activity. The record contains identifying information such as date of birth, place of birth, or passport information, or if the targeted person is a company, the identifying information might be an address or telephone number. After the Office of Foreign Assets Control has approved this document, it is further reviewed for legal sufficiency by the Department of Justice, Department of State, and other relevant agencies. At State, the Offices of Counterproliferation Initiatives and Missile, Biological and Chemical Nonproliferation draft a statement of facts that provides a summary of intelligence available on a targeted transaction. Concurrently, State drafts a policy memo that explains the legal justification for the case. 
State circulates these documents internally and obtains advice from appropriate agencies and, in the case of actions targeted under EO 13382, consults with Treasury’s Office of Foreign Assets Control. Officials from the Offices of Counterproliferation Initiatives and Missile, Biological and Chemical Nonproliferation also said they circulate a decision memorandum to relevant stakeholders for approval. Officials at State and Treasury also told us that their process includes steps for making and announcing final sanctions determinations. At Treasury, the Office of Foreign Assets Control makes the final determination. Officials then publicize the sanctions in the Federal Register. At State, once the stakeholders have cleared the memorandum, the Offices of Counterproliferation Initiatives and Missile, Biological and Chemical Nonproliferation forward it to the Secretary of State or his or her designee for a final sanctions determination. They then prepare a report on imposed sanctions for publication in the Federal Register. When State or Treasury makes a determination that results in blocked assets, Treasury places the sanctioned person on the Specially Designated Nationals and Blocked Persons (SDN) list indicating that the person’s assets are blocked. Pursuant to regulation, U.S. persons, including banks, are required to block any assets of such persons that are in their possession or that come within their possession. As a consequence of the blocking, U.S. persons are generally prohibited from engaging in activities with the property or interests in property of persons on the SDN list. U.S. citizens are generally prohibited from doing business with individuals and persons on the SDN list. Treasury officials noted that persons’ status on this list does not expire, but persons may apply to be taken off the list. However, no North Korean person has asked for his or her name to be removed.
Since 2006, the United States has imposed sanctions on 86 North Korean persons under five EOs, INKSNA, and Missile Sanctions laws (see table 3). The most frequently used EO during this time period was EO 13382, which, as noted above, is not specific to North Korea. Treasury imposed the most recent sanctions on North Korean persons in January 2015, in response to North Korea’s cyberattacks on Sony Pictures, placing 10 North Korean individuals on the SDN list and updating information about 3 persons on the list. State and Treasury have used EO 13382 most frequently—43 times in 10 years—to impose sanctions on North Korean persons that they found had engaged in activities related to WMD proliferation. For example, in March 2013, Treasury used EO 13382 to designate the following for sanctions:
- North Korea’s primary foreign exchange bank, which facilitated millions of dollars in transactions that benefited North Korean arms dealing;
- the chairman of the North Korean committee that oversees the production of North Korea’s ballistic missiles; and
- three North Korean government officials who were connected with North Korea’s nuclear and ballistic weapons production.
According to the Federal Register notice, the United States imposed sanctions on these persons because State determined that they “engaged, or attempted to engage, in activities or transactions that have materially contributed to, or pose a risk of materially contributing to, the proliferation of WMD or their means of delivery (including missiles capable of delivering such weapons), including any efforts to manufacture, acquire, possess, develop, transport, transfer or use such items, by any person or foreign country of proliferation concern.” Commerce’s Bureau of Industry and Security requires those exporters who wish to ship items to North Korea to obtain a license for dual-use items that are subject to the Export Administration Regulations.
Dual-use items are goods and technology that are designed for commercial use but could have military applications, such as computers and telecommunications equipment. In general, the Bureau of Industry and Security reviews applications for items requiring a license for export or reexport to North Korea and approves or denies applications on a case-by-case basis. According to the Bureau of Industry and Security, it will deny a license for luxury goods or any item that could contribute to North Korea’s nuclear-related, ballistic missile–related, or other WMD-related programs. Commerce officials informed us that they receive relatively few requests for licenses to export items to North Korea and in most of these cases Commerce issues a license because most of the applications are for humanitarian purposes. In 2014, the Bureau of Industry and Security approved licenses for items such as telecommunications equipment and medical devices, as well as water well–drilling equipment and volcanic seismic measuring instruments. Commerce does not require a license to export some items, such as food and medicine, to North Korea. Commerce officials informed us that, under the Export Administration Regulations, the Bureau of Industry and Security, in consultation with the Departments of Defense and State, will generally approve applications to export or reexport humanitarian items, such as blankets, basic footwear, and other items meeting subsistence needs that are intended for the benefit of the North Korean people. For example, it will approve items in support of UN humanitarian efforts, and agricultural commodities or medical devices that the Bureau of Industry and Security determines are not luxury goods. While UN sanctions have a broader reach than U.S. sanctions because all UN member states are obligated to implement and enforce them, the UN does not know the extent to which members are actually implementing its sanctions.
The UN process for imposing sanctions on North Korea or related persons relies on a Security Council committee and a UN panel of experts that investigates suspected violations of North Korea sanctions and recommends actions to the UN. The panel has found North Korean persons using illicit techniques to evade sanctions and trade in arms and related material and has designated 32 North Korean or related entities for sanctioning since 2006, including a North Korean company found to be shipping armaments from Cuba to North Korea. However, while the UN calls upon member states to submit reports describing the steps or measures they have taken to implement effectively specified sanctions provisions, fewer than half have done so. According to UN and U.S. officials, many member states lack the technical capacity to develop the reports and implement sanctions. Member state delegates to the UN Security Council and U.S. officials agree that the lack of reports from all member states is an impediment to UN sanctions implementation. Member state delegates to the UN Security Council informed us that the UN has established a process to determine when and if to impose sanctions on persons that have violated the provisions of UNSCRs. The process involves the Security Council committee established pursuant to Security Council Resolution 1718 that oversees UN sanctions on North Korea; the Panel of Experts, which reviews information on violations of North Korea sanctions sent by member states and conducts investigations based on requests from the committee; and member states whose role is to implement sanctions on North Korea as required by various UN Security Council resolutions. (See fig. 3.) The UN established the committee in 2006. It consists of 15 members, including the 5 permanent members of the United Nations Security Council and 10 nonpermanent members. 
The committee makes all decisions by consensus and is mandated to seek information from member states regarding their actions to implement the measures imposed by UNSCR 1718. It is also mandated to examine and take action on information regarding alleged sanctions violations, consider and decide upon requests for exemptions, determine additional items to be added to the list of sanctioned goods, designate individuals and entities for sanctions, promulgate guidelines to facilitate the implementation of sanctions measures, and report at least every 90 days to the UN Security Council on its work overseeing sanctions measures set out in United Nations Security Council resolution 1718 on North Korea. The Panel of Experts was established in 2009 as a technical body within the committee. Pursuant to UNSCR 1874, the panel is tasked with, among other things, gathering, examining, and analyzing information regarding incidents of noncompliance with United Nations Security Council sanctions on North Korea. The panel was originally created for a 1-year period, but the Security Council extended the panel’s mandate in subsequent resolutions. The panel acts under the committee’s direction to implement its mandate to gather, examine, and analyze information from member states, relevant UN bodies, and other interested parties regarding North Korea sanctions implementation. The panel does not have enforcement authority and relies on the cooperation of member states to provide information that helps it with its investigations. The panel consists of eight subject matter experts from UN member states, including representatives from the council’s 5 permanent members. The Secretary General appoints panel members, who currently are from China, France, Japan, Russia, South Africa, South Korea, the United Kingdom, and the United States. 
According to the UN, these subject matter experts specialize in technical areas such as WMD arms control and nonproliferation policy, customs and export controls, finance, missile technology, maritime transport, and nuclear issues. According to a representative of the committee, panel members are not intended to represent their countries, but to be independent in order to provide objective assessments. According to UN guidance, the panel reviews public information and conducts investigative work on incidents or events, and consults foreign governments and seeks information beyond what member states provide them. Representatives of the U.S. Mission to the United Nations (USUN) informed us that the United States and other countries provide the panel with information to help facilitate investigations. The UN Security Council encourages UN member states to respond promptly and thoroughly to the panel’s requests for information and to invite panel members to visit and investigate alleged violations of the sanctions regime, including inspection of items that might have been seized by national authorities. Following investigations of suspected sanctions violations, the panel submits investigative reports (incident reports) to the committee detailing its findings and recommendations on how to proceed, according to UN guidance. The panel treats its incident reports as confidential and provides access only to committee and Security Council members. According to a representative of the committee, the committee considers the violations and recommendations and makes sanctions designations based on the consensus of committee members. According to a representative of the committee, if the committee does not reach consensus, it can refer the case to the UN Security Council, pending member agreement. Ultimately, the UN Security Council determines whether or not recommended designations meet the criteria for sanctions, according to a representative of the committee.
If the decision is affirmative, it takes action by making sanctions designations mostly through new resolutions. This process has resulted in 32 designations since 2006. All but one of these designations were made through new resolutions, according to a USUN official. For example, the committee designated the Ocean Maritime Management Company for sanctions through the committee process in July 2014. The panel is generally required, with each extension of its mandate, to provide the committee with an interim and final report, including findings and recommendations. The panel’s final reports have identified North Korea’s use of evasive techniques to export weapons. The panel’s 2014 final report described North Korea’s attempt to illicitly transport arms and related materiel from Cuba to North Korea concealed underneath thousands of bags of sugar onboard the North Korean vessel Chong Chon Gang. North Korea’s use of evasive techniques in this case was blocked by actions taken by Panama, a UN member state. Panamanian authorities stopped and examined the Chong Chon Gang vessel as it passed through Panama’s jurisdiction. After uncovering items on the vessel that it believed to be arms and related materiel, Panama alerted the committee of the possible UN sanctions violation. According to representatives of the committee, Panama cooperated with the panel as it conducted its investigation. The panel concluded that the shipment was in violation of UN sanctions and that it constituted the largest amount of arms and related materiel interdicted to North Korea since the adoption of UNSCR 1718. The committee placed the shipping company that operated the Chong Chon Gang on its sanctioned entities list. The panel’s investigations have also uncovered evidence of North Korea’s efforts to evade sanctions by routing financial transactions in support of North Korea’s procurement of sanctioned goods through intermediaries, including those in China, Malaysia, Singapore, and Thailand. 
For instance, in its investigation of the Chong Chon Gang case, the panel found that the vessel operator, North Korea’s Ocean Maritime Management Company, Limited, used foreign intermediaries in Hong Kong, Thailand, and Singapore to conduct financial transactions on its behalf. The panel also identified that in most cases the investigated transactions were made in United States dollars from foreign-based banks and transferred through corresponding bank accounts in the United States. The panel’s 2015 final report indicated that North Korea has successfully bypassed banking organizations’ due diligence processes by initiating transactions through other entities on its behalf. The panel expressed concern in its report regarding the ability of banks in countries with less effective banking regulations or compliance institutions to detect and prevent illicit transfers involving North Korea. The panel’s reports also reveal the essential role played by member states in implementing UN sanctions and that some member states have not been as well informed as others in working with the panel regarding sanctions implementation. For example, the panel discovered that the Ugandan government had contracted with North Korea to provide police force training. Ugandan government officials purported that they did not realize that UN sanctions prohibited this type of activity, according to a USUN official. The UN recognized the essential role that member states play when it called upon member states to submit reports on measures or steps taken to implement effectively provisions of specified Security Council resolutions to the committee within 45 or 90 days, or upon request by the committee, of the UN’s adoption of North Korea sanctions measures. 
UNSCRs 1718, 1874, and 2094, adopted in 2006, 2009, and 2013 respectively, call upon member states to report on the concrete measures they have taken in order to effectively implement the specified provisions of the resolutions. For instance, a member state might report on how its national export control regulations address newly adopted UN sanctions on North Korea. The United States has complied with UN reporting provisions calling on member states to submit implementation reports; U.S. implementation reports can be viewed on the committee’s website, at http://www.un.org/sc/committees/1718/mstatesreports.shtml. Member states that have not submitted one or more reports include member states with major international transit points (such as the United Arab Emirates) or that have reportedly been used by North Korea as a foreign intermediary (such as Thailand). The panel has expressed concern in its 2015 final report that 8 years after the adoption of UNSCR 1718, in 2006, a consistently high proportion of member states in some regions have not reported at all on the status of their implementation. It has also reported that some member states have submitted reports that lack detailed information, or were late, impeding the panel’s ability to examine and analyze information about national implementation. The panel has also reported that member states should improve their reporting of incidents of noncompliance with sanctions resolutions and inspections of North Korean cargo. Appendix III provides information on the status of member state implementation report submissions. U.S. officials and representatives of the committee agree that the lack of detailed reports from all member states is an impediment to the UN’s effective implementation of its sanctions. Through reviewing these reports, the committee uncovers gaps in member state sanctions implementation, which helps the committee identify targets for outreach.
The panel notes that the lack of detailed information in implementation reports impedes its ability to examine and analyze information regarding member state implementation and its challenges. It also states that member state underreporting increases North Korea’s opportunities to continue its prohibited activities. The panel will not have the information it needs to completely understand North Korea’s evasive techniques if it does not have the full cooperation of member states. U.S. officials and representatives of the committee told us that many member states lack the technical capacity to enforce sanctions and prepare reports. For instance, representatives of the committee told us that some member states may have weak customs and border patrol systems or export control regulatory structures because of the high resource requirements of these programs. In addition, representatives of the committee stated that some member states may lack awareness of the full scope of North Korea sanctions or may not understand how to implement the sanctions. Moreover, some countries may not make the sanctions a high priority because they believe they are not directly affected by North Korea. In addition, member states that are geographically distant from North Korea or lack a diplomatic or trade relationship with it may not see the need to implement the sanctions, according to representatives of the committee. The UN has taken some steps to address this impediment. The committee and the panel provide limited assistance to member states upon request in preparing and submitting reports. For example, the committee has developed and issued a checklist template that helps member states indicate the measures, procedures, legislation, and regulations or policies that have been adopted to address various UNSCR measures relevant to member states’ national implementation reports. 
A committee member indicated that the committee developed a list of 25 to 30 member states where outreach would most likely have an impact on reporting outcomes. The panel reported in its 2015 final report that it sent 95 reminder letters to the member states that have not submitted implementation reports, emphasizing the importance of submitting reports and that the panel is available to provide assistance. Despite the steps the UN has taken to help member states adhere to reporting provisions, the panel’s 2015 report continues to identify the lack of member states’ reports as an impediment. The panel stated that it is incumbent on member states to implement the measures in the UN Security Council resolutions more robustly in order to counter North Korea’s continued violations, and that while the resolutions provide member states with tools to curb the prohibited activities of North Korea, they are effective only when implemented. State Department officials informed us that the United States has offered technical assistance to some member states for preventing proliferation and implementing sanctions. However, they were unable to determine the extent to which the United States has provided specific assistance aimed at ensuring that member states provide the UN with the implementation reports it needs to assess member state implementation of UN sanctions on North Korea. North Korea’s actions pose threats to the security of the United States and other UN members. Both the United States and the UN face impediments to implementing the sanctions they have imposed in response to these actions. While the United States has recently taken steps to provide more flexibility to impose sanctions, and thereby possibly impose more sanctions on North Korean persons, the United Nations is seeking to address the challenge posed by many UN member states not providing the UN with implementation information. According to U.S. 
officials, many member states require additional technical assistance to develop the implementation reports needed by the panel. The lack of implementation reports from member states impedes the panel’s ability to examine and analyze information about member state implementation of North Korea sanctions. GAO recommends the Secretary of State work with the UN Security Council to ensure that member states receive technical assistance to help prepare and submit reports on their implementation of UN sanctions on North Korea. We provided a draft of this report to the Departments of State, Treasury, and Commerce for comment. In its written comments, reproduced in Appendix IV, State concurred with our recommendation. Treasury and Commerce declined to provide written comments. State, Treasury, and Commerce provided technical comments, which were incorporated into the draft as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of State, Treasury, and Commerce, the U.S. Ambassador to the United Nations, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The United States and the United Nations (UN) Security Council have imposed a wide range of sanctions against North Korea and Iran as part of their broader efforts to prevent the proliferation of weapons of mass destruction. Table 4 compares the major activities targeted by U.S. and UN sanctions on those countries. 
Officials from the Department of State, the Department of the Treasury, and other sources identified the following factors that may influence the types of sanctions imposed by the United States and the UN on these countries. Different political systems. North Korea is an isolated society that is under the exclusive rule of a dictator who controls all aspects of the North Korean political system, including the legislative and judicial processes. Though Iran operates under a theocratic political system, with a religious leader serving as its chief of state, Iranian citizens participate in popular elections for president and members of its legislative branch. Different economic systems. North Korea has a centrally planned economy generally isolated from the rest of the world. It exports most of its basic commodities to China, its closest ally. Iran, as a major exporter of oil and petrochemical products, has several major trade partners, including China, India, Turkey, South Korea, and Japan. Different social environments. North Korea’s dictatorship tightly controls the activities of its citizens by restricting travel; prohibiting access to the Internet; and controlling all forms of media, communication, and political expression. In contrast, Iranian citizens travel abroad relatively freely, communicate with one another and the world through the Internet and social media, and can hold political protests and demonstrations. This report (1) identifies the activities that are targeted by U.S. and United Nations (UN) sanctions specific to North Korea, (2) describes how the United States implements its sanctions specific to North Korea and examines the challenges it faces in doing so, and (3) describes how the UN implements its sanctions specific to North Korea and examines the challenges it faces in doing so. In appendix I, we compare U.S. and UN North Korea–specific sanctions with those specific to Iran. (See app. I.) To address our first objective, we reviewed U.S. 
executive orders and laws and UN Security Council resolutions issued from 2006 to 2015 with sanctions related to North Korea. We also interviewed officials from the Department of State (State), the Department of the Treasury (Treasury), and the UN to confirm the universe of North Korea–specific sanctions. We also interviewed these officials to determine any other executive orders, laws, or resolutions not specific to North Korea that they have used to impose sanctions on North Korea during this time period. We then analyzed the executive orders, laws, and resolutions to identify the activities targeted by the sanctions. To address our second objective, we interviewed State and Treasury officials to determine the process that each agency follows to impose sanctions on North Korea and related persons. We also spoke with State, Treasury, and Commerce officials to identify the challenges that U.S. agencies face in implementing sanctions related to North Korea. We interviewed Department of Commerce (Commerce) officials to learn about how the U.S. government controls exports to North Korea. We analyzed documents and information from State and Treasury to determine the number of North Korean entities that have been sanctioned since 2006. To address our third objective, we reviewed UN documents and interviewed UN officials to determine the process that the UN uses to impose sanctions on North Korea and related entities. We reviewed United Nations Security Council resolutions relevant to North Korea, 1718 Committee guidelines and reports, and Panel of Experts guidelines and reports. We interviewed relevant officials at the U.S. State Department and traveled to New York to visit UN headquarters and interview officials from the U.S. Mission to the United Nations and members of the UN 1718 Committee. We interviewed two former members of the Panel of Experts to obtain their views on the UN process for making North Korea sanctions determinations.
We also reviewed the 1718 Committee’s sanctions list to determine the number of designations the UN has made on North Korean or related entities and the reasons for designating. For examples of how the Panel of Experts has investigated cases of sanctions violations and worked with member states through the investigation process, particularly related to the Chong Chon Gang case, we reviewed the panel’s final reports summarizing its investigation findings and interviewed members of the 1718 Committee involved in conducting the investigation. To determine the extent to which member states are submitting reports on their implementation of UN sanctions on North Korea, we examined the 1718 Committee’s record of member state implementation reports and interviewed 1718 Committee members. To identify the challenges the UN faces related to member state reporting and the efforts the UN has taken to help member states meet reporting provisions of the UN Security Council resolutions (UNSCR), we interviewed U.S. and UN officials, and reviewed 1718 Committee and Panel of Experts reports and documents. To examine the efforts the UN has taken to address the lack of member state reporting, we interviewed members of the UN 1718 Committee and reviewed documents outlining UN outreach efforts. To compare U.S. and UN sanctions specific to North Korea and Iran (see app. I), we reviewed U.S. executive orders, laws, and UN Security Council resolutions with sanctions specific to North Korea and Iran. We analyzed these documents to identify the activities targeted by the sanctions. On the basis of a comprehensive literature review, we developed a list of targeted activities frequently identified in relation to North Korea and Iran sanctions and grouped these activities into high-level categories.
To ensure data reliability in categorizing the targeted activities into high-level categories, we conducted a double-blind exercise whereby each member of our team reviewed the activities identified within the U.S. executive orders and laws and UN resolutions for each country and assigned each activity to a high-level category, such as financial transactions with targeted persons. We then compared the results, discussed any differences and reconciled our responses to reach consensus, and developed a matrix to compare the targeted activities for North Korea sanctions with those of Iran sanctions. We interviewed State and Treasury officials to discuss the differences in activities targeted by North Korea and Iran sanctions. To develop appendix III, on United Nations member state implementation report submissions, we examined the UN 1718 Committee’s website record of member state implementation reports. The record of member state implementation reports allowed us to determine the number of member states that have either reported or not reported. In addition to the contact named above, Pierre Toureille (Assistant Director), Leah DeWolf, Christina Bruff, Mason Thorpe Calhoun, Tina Cheng, Karen Deans, Justin Fisher, Toni Gillich, Michael Hoffman, and Grace Lui made key contributions to this report.
North Korea is a closely controlled society, and its regime has taken actions that threaten the United States and other United Nations member states. North Korean tests of nuclear weapons and ballistic missiles have prompted the United States and the UN to impose sanctions on North Korea. GAO was asked to review U.S. and UN sanctions on North Korea. This report (1) identifies the activities that are targeted by U.S. and UN sanctions specific to North Korea, (2) describes how the United States implements its sanctions specific to North Korea and examines the challenges it faces in doing so, and (3) describes how the UN implements its sanctions specific to North Korea and examines the challenges it faces in doing so. To answer these questions, GAO analyzed documents from the Departments of State, Treasury, and Commerce, and the UN. GAO also interviewed officials from the Departments of State, Treasury, and Commerce, and the UN. U.S. executive orders (EO) and the Iran, North Korea, and Syria Nonproliferation Act target activities for the imposition of sanctions that include North Korean (Democratic People's Republic of Korea) proliferation of weapons of mass destruction and transferring of luxury goods. The EOs and the act allow the United States to respond by imposing sanctions, such as blocking the assets of persons involved in these activities. United Nations (UN) Security Council resolutions target similar North Korean activities, and under the UN Charter, all 193 UN member states are required to implement sanctions on persons involved in them. U.S. officials informed GAO that obtaining information on North Korean persons has hindered the U.S. interagency process for imposing sanctions, and that EO 13687, announced in January 2015, provided them with greater flexibility to sanction persons based on their status as government officials rather than evidence of specific conduct. 
State and Treasury impose sanctions following an interagency process that involves: reviewing intelligence and other information to develop evidence needed to meet standards set by U.S. laws and EOs, vetting possible actions within the U.S. government, determining whether to sanction, and announcing sanctions decisions. Since 2006, the United States has imposed sanctions on 86 North Korean persons, including on 13 North Korean government persons under EO 13687. Although UN sanctions have a broader reach than U.S. sanctions, the UN lacks reports from many member states describing the steps or measures they have taken to implement specified sanctions provisions. The UN process for imposing sanctions relies on a UN Security Council committee and a UN panel of experts that investigates suspected sanctions violations and recommends actions to the UN. The Panel of Experts’ investigations have resulted in 32 designations of North Korean or related entities for sanctions since 2006, including a company found to be shipping armaments from Cuba in 2013. While the UN calls upon all member states to submit reports detailing plans for implementing specified sanctions provisions, fewer than half have done so because of a range of factors including a lack of technical capacity. The committee uses the reports to uncover gaps in sanctions implementation and identify member states that require additional outreach. The United States as a member state has submitted all of these reports. UN and U.S. officials agree that the lack of reports from all member states is an impediment to the UN’s implementation of its sanctions. (Source: GAO | GAO-15-485) GAO recommends the Secretary of State work with the UN Security Council to ensure that member states receive technical assistance to help prepare and submit reports on their implementation of UN sanctions on North Korea. The Department of State concurred with this recommendation.
Under the defined standard benefit in 2009, beneficiaries subject to full cost-sharing amounts paid out-of-pocket costs during the initial coverage period that included a deductible equal to the first $295 in drug costs, followed by 25 percent coinsurance for all drugs until total drug costs reached $2,700, with beneficiary out-of-pocket costs accounting for $896.25 of that total. (See fig. 1.) This initial coverage period is followed by a coverage gap—the so-called doughnut hole—in which these beneficiaries paid 100 percent of their drug costs. In 2009, the coverage gap lasted until total drug costs—including the costs accrued during the initial coverage period—reached $6,153.75, with beneficiary out-of-pocket drug costs accounting for $4,350 of that total. This point is referred to as the catastrophic coverage threshold. After reaching the catastrophic coverage threshold, beneficiaries taking a specialty tier-eligible drug paid 5 percent of total drug costs for each prescription for the remainder of the year. In addition to cost sharing for prescription drugs, many Part D plans also charge a monthly premium. In 2009, premiums across all Part D plans averaged about $31 per month, an increase of 24 percent from 2008. Beneficiaries are responsible for paying these premiums except in the case of LIS beneficiaries, whose premiums are subsidized by Medicare. We found that specialty tier-eligible drugs accounted for about 10 percent, or $5.6 billion, of the $54.4 billion in total prescription drug spending under Part D MA-PD and PDP plans in 2007. Prescriptions for LIS beneficiaries accounted for about 70 percent, or about $4.0 billion, of the $5.6 billion spent on specialty tier-eligible drugs under MA-PD and PDP plans that year. (See fig. 2.)
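The 2009 defined standard benefit described above is a piecewise calculation, and its dollar figures can be checked in a few lines of code. The sketch below is illustrative only (the function name and structure are ours, not from the report); it maps total drug costs to the out-of-pocket cost for a beneficiary subject to full cost sharing:

```python
def oop_2009_standard(total_drug_cost):
    """Out-of-pocket cost under the 2009 defined standard benefit for a
    beneficiary paying full cost sharing, given total (plan + beneficiary)
    drug costs. Dollar figures are taken from the text."""
    DEDUCTIBLE = 295.00        # beneficiary pays 100% of the first $295
    INITIAL_LIMIT = 2_700.00   # initial coverage period ends at $2,700 in total costs
    GAP_END = 6_153.75         # catastrophic coverage begins at $6,153.75 in total costs
    oop = min(total_drug_cost, DEDUCTIBLE)
    if total_drug_cost > DEDUCTIBLE:      # 25% coinsurance up to the initial limit
        oop += 0.25 * (min(total_drug_cost, INITIAL_LIMIT) - DEDUCTIBLE)
    if total_drug_cost > INITIAL_LIMIT:   # coverage gap: beneficiary pays 100%
        oop += min(total_drug_cost, GAP_END) - INITIAL_LIMIT
    if total_drug_cost > GAP_END:         # catastrophic coverage: 5% coinsurance
        oop += 0.05 * (total_drug_cost - GAP_END)
    return round(oop, 2)
```

Evaluating the function at the phase boundaries reproduces the report's figures: $896.25 out of pocket at $2,700 in total drug costs, and $4,350 at the $6,153.75 catastrophic coverage threshold.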
The fact that spending on specialty tier-eligible drugs in 2007 was largely accounted for by LIS beneficiaries is noteworthy because their cost sharing is largely paid by Medicare. While only 8 percent of Part D beneficiaries in MA-PD and PDP plans who filed claims but did not use any specialty tier-eligible drugs reached the catastrophic coverage threshold of the Part D benefit in 2007, 55 percent of beneficiaries who used at least one specialty tier-eligible drug reached the threshold. Specifically, among those beneficiaries who used at least one specialty tier-eligible drug in 2007, 31 percent of beneficiaries responsible for paying the full cost sharing required by their plans and 67 percent of beneficiaries whose costs were subsidized by Medicare through the LIS reached the catastrophic coverage threshold. Most (62 percent) of the $5.6 billion in total Part D spending on specialty tier-eligible drugs under MA-PD and PDP plans occurred after beneficiaries reached the catastrophic coverage phase of the Part D benefit. For most beneficiaries—those who are responsible for paying the full cost-sharing amounts required by their plans—who use a given specialty tier-eligible drug, different cost-sharing structures can be expected to result in varying out-of-pocket costs during the benefit’s initial coverage period. However, as long as beneficiaries reach the catastrophic coverage threshold in a calendar year—as 31 percent of beneficiaries who used at least one specialty tier-eligible drug and who were responsible for the full cost-sharing amounts did in 2007—their annual out-of-pocket costs for that drug are likely to be similar regardless of their plans’ cost-sharing structures.
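The similarity of annual out-of-pocket costs described above can be checked with a simplified month-by-month simulation. This sketch is illustrative, not the report's methodology: it ignores the deductible and mid-month phase transitions, charges each month entirely under the phase in effect at the start of that month, and assumes a hypothetical $1,100-per-month drug covered by two plans, one with a flat $25 copay and one with 33 percent coinsurance during the initial coverage period.

```python
def annual_oop(monthly_price, initial_cost_share, months=12,
               initial_limit=2_700.00, oop_threshold=4_350.00):
    """Simplified monthly accumulation of beneficiary out-of-pocket
    (OOP) costs across the three Part D phases (initial coverage,
    coverage gap, catastrophic). Each month is charged entirely under
    the phase in effect at the start of that month."""
    total_cost = 0.0  # total drug costs (plan + beneficiary)
    oop = 0.0         # beneficiary out-of-pocket costs
    for _ in range(months):
        if oop >= oop_threshold:           # catastrophic: 5% coinsurance
            payment = 0.05 * monthly_price
        elif total_cost >= initial_limit:  # coverage gap: beneficiary pays 100%
            payment = monthly_price
        else:                              # initial period: plan's cost sharing
            payment = initial_cost_share(monthly_price)
        total_cost += monthly_price
        oop += payment
    return oop

# Two plans with very different initial-period cost sharing for a
# $1,100-per-month drug end the year with similar OOP totals:
copay_plan = annual_oop(1_100, lambda price: 25.00)         # flat $25 copay
coins_plan = annual_oop(1_100, lambda price: 0.33 * price)  # 33% coinsurance
```

Under this sketch the copay plan ends the year at about $4,750 and the coinsurance plan at about $4,719: the plan that collected less during the initial period simply spends longer in the coverage gap before hitting the same $4,350 threshold, so the year-end totals converge.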
During the initial coverage period, the estimated out-of-pocket costs for these beneficiaries for a given specialty tier-eligible drug are likely to vary, because some Part D plans may place the drug on a tier with coinsurance while other plans may require a flat copayment for the drug. For example, estimated 2009 out-of-pocket costs during the initial coverage period, excluding any deductibles, for a drug with a monthly negotiated price of $1,100 would range from $25 per month for a plan with a flat $25 monthly copayment to $363 per month for a plan with a 33 percent coinsurance rate. However, even if beneficiaries pay different out-of-pocket costs during the initial coverage period, their out-of-pocket costs become similar due to the coverage gap and the fixed catastrophic coverage threshold ($4,350 in out-of-pocket costs in 2009). (See fig. 3.) There are several reasons for this. First, beneficiaries taking equally priced drugs will reach the coverage gap at the same time—even with different cost-sharing structures—because entry into the coverage gap is based on total drug costs paid by the beneficiary and the plan, rather than on out-of-pocket costs paid by the beneficiary. Since specialty tier-eligible drugs have high total drug costs, beneficiaries will typically reach the coverage gap within 3 months in the same calendar year. Second, during the coverage gap, beneficiaries typically pay 100 percent of their total drug costs until they reach the catastrophic coverage threshold. This threshold ($4,350 in out-of-pocket costs) includes costs paid by the beneficiary during the initial coverage period. Therefore, beneficiaries who paid higher out-of-pocket costs in the initial coverage period had less to pay in the coverage gap before they reached the threshold. Conversely, beneficiaries who paid lower out-of-pocket costs in the initial coverage period had more to pay in the coverage gap before they reached the same threshold of $4,350 in out-of-pocket costs.
Third, after reaching the threshold, beneficiaries’ out-of-pocket costs become similar because they typically pay 5 percent of the drug’s negotiated price for the remainder of the calendar year. For most beneficiaries—those who are responsible for paying the full cost-sharing amounts required by their plans—variations in negotiated drug prices affect out-of-pocket costs during the initial coverage phase if their plans require them to pay coinsurance. All 35 of our selected plans required beneficiaries to pay coinsurance in 2009 for at least some of the 20 specialty tier-eligible drugs in our sample. Additionally, negotiated drug prices will affect these beneficiaries’ out-of-pocket costs during the coverage gap and the catastrophic coverage phase because beneficiaries generally pay the entire negotiated price of a drug during the coverage gap and pay 5 percent of a drug’s negotiated price during the catastrophic coverage phase. As the following examples illustrate, there are variations in negotiated prices between drugs, across plans for the same drug, and from year to year. Variations between drugs: In 2009—across our sample of 35 plans—beneficiaries who took the cancer drug Gleevec for the entire year could have been expected to pay about $6,300 out of pocket because Gleevec had an average negotiated price of about $45,500 per year, while beneficiaries could have been expected to pay about $10,500 out of pocket over the entire year if they took the Gaucher disease drug Zavesca, which had an average negotiated price of about $130,000 per year. Variations across plans: In 2009, the negotiated price for the human immunodeficiency virus (HIV) drug Truvada varied from about $10,900 to about $11,400 per year across different plans with a 33 percent coinsurance rate, resulting in out-of-pocket costs that could be expected to range from about $4,600 to $4,850 for beneficiaries taking the drug over the entire year.
Variations over time: Since 2006, average negotiated prices for the specialty tier-eligible drugs in our sample have risen across our sample of plans; the increases averaged 36 percent over the 3-year period. These increases, in turn, led to higher estimated beneficiary out-of-pocket costs for these drugs in 2009 compared to 2006. For example, the average negotiated price for a 1-year supply of Gleevec across our sample of plans increased by 46 percent, from about $31,200 in 2006 to about $45,500 in 2009. Correspondingly, the average out-of-pocket cost for a beneficiary taking Gleevec for an entire year could have been expected to rise from about $4,900 in 2006 to more than $6,300 in 2009. The eight Part D plan sponsors we interviewed told us that they have little leverage in negotiating price concessions for most specialty tier-eligible drugs. Additionally, all seven of the plan sponsors we surveyed reported that they were unable to obtain price concessions from manufacturers on 8 of the 20 specialty tier-eligible drugs in our sample between 2006 and 2008. For most of the remaining 12 drugs in our sample, plan sponsors who were able to negotiate price concessions reported that they were only able to obtain price concessions that averaged 10 percent or less, when weighted by utilization, between 2006 and 2008. (See app. I for an excerpt of the price concession data presented in our January 2010 report.) The plan sponsors we interviewed cited three main reasons why they have typically had a limited ability to negotiate price concessions for specialty tier-eligible drugs. First, they stated that pharmaceutical manufacturers have little incentive to offer price concessions when a given drug has few competitors on the market, as is the case for drugs used to treat cancer. 
For Gleevec and Tarceva, two drugs in our sample that are used to treat certain types of cancer, plan sponsors reported that they were not able to negotiate any price concessions between 2006 and 2008. In contrast, plan sponsors told us that they were more often able to negotiate price concessions for drugs in classes where there are more competing drugs on the market—such as for drugs used to treat rheumatoid arthritis, multiple sclerosis, and anemia. The anemia drug Procrit was the only drug in our sample for which all of the plan sponsors we surveyed reported that they were able to obtain price concessions each year between 2006 and 2008. Second, plan sponsors told us that even when there are competing drugs, CMS may require plans to include all or most drugs in a therapeutic class on their formularies, and such requirements limit the leverage a plan sponsor has when negotiating price concessions. When negotiating price concessions with pharmaceutical manufacturers, the ability to exclude a drug from a plan’s formulary in favor of a therapeutic alternative is often a significant source of leverage available to a plan sponsor. However, many specialty tier-eligible drugs belong to one of the six classes of clinical concern for which CMS requires Part D plan sponsors to include all or substantially all drugs on their formularies, eliminating formulary exclusion as a source of negotiating leverage. We found that specialty tier-eligible drugs were more than twice as likely to be in one of the six classes of clinical concern compared with lower-cost drugs in 2009. Additionally, among the 8 drugs in our sample of 20 specialty tier-eligible drugs for which the plan sponsors we surveyed reported they were unable to obtain price concessions between 2006 and 2008, 4 drugs were in one of the six classes of clinical concern. Plan sponsors are also required to include at least two therapeutic alternatives from each of the other therapeutic classes on their formularies. 
Third, plan sponsors told us that they have limited ability to negotiate price concessions for certain specialty tier-eligible drugs because they account for a relatively limited share of total prescription drug utilization among Part D beneficiaries. For some drugs in our sample, such as Zavesca, a drug used to treat a rare enzyme disorder called Gaucher disease, the plan sponsors we surveyed had very few beneficiary claims between 2006 and 2008. None of the plan sponsors we surveyed reported price concessions for this drug during this period. Plan sponsors told us that utilization volume is usually a source of leverage when negotiating price concessions with manufacturers for Part D drugs. For some specialty tier-eligible drugs like Zavesca, however, the total number of individuals using the drug may be so limited that plans are not able to enroll a significant enough share of the total users to entice the manufacturer to offer a price concession. The Department of Health and Human Services (HHS) provided us with CMS’s written comments on a draft version of our January 2010 report. CMS agreed with portions of our findings and suggested additional information for us to include in our report. We also provided excerpts of the draft report to the eight plan sponsors who were interviewed for this study and they provided technical comments. We incorporated comments from CMS and the plan sponsors as appropriate in our January 2010 report. Mr. Chairman, this completes my prepared remarks. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information about this statement, please contact John E. Dicken at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Key contributors to this statement in addition to the contact listed above were Will Simerl, Assistant Director; Krister Friday; Karen Howard; Gay Hee Lee; and Alexis MacDonald. [Appendix table: for each sampled drug (including strength and dosage form), grouped by indication (inflammatory conditions such as rheumatoid arthritis, psoriasis, and Crohn's disease; human immunodeficiency virus (HIV); enzyme disorders such as Gaucher disease; and other drugs selected based on high utilization), the table lists the number of plan sponsors that obtained price concessions and the average price concessions, weighted by utilization, in dollars.] One of the seven plan sponsors we surveyed did not submit any data for one of the drugs; values listed for that drug are based on data submitted by six plan sponsors, rather than seven plan sponsors. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Centers for Medicare & Medicaid Services (CMS) allows Part D plans to utilize different tiers with different levels of cost sharing as a way of managing drug utilization and spending. One such tier, the specialty tier, is designed for high-cost drugs whose prices exceed a certain threshold set by CMS. Beneficiaries who use these drugs typically face higher out-of-pocket costs than beneficiaries who use only lower-cost drugs. This testimony is based on GAO's January 2010 report entitled Medicare Part D: Spending, Beneficiary Cost Sharing, and Cost-Containment Efforts for High-Cost Drugs Eligible for a Specialty Tier (GAO-10-242) in which GAO examined, among other things, (1) Part D spending on these drugs in 2007, the most recent year for which claims data were available; (2) how different cost-sharing structures could be expected to affect beneficiary out-of-pocket costs; (3) how negotiated drug prices could be expected to affect beneficiary out-of-pocket costs; and (4) information Part D plan sponsors reported on their ability to negotiate price concessions. For the second and third of these objectives, this testimony focuses on out-of-pocket costs for beneficiaries responsible for paying the full cost-sharing amounts required by their plans. GAO examined CMS data and interviewed officials from CMS and 8 of the 11 largest plan sponsors, based on enrollment in 2008. Seven of the 11 plan sponsors provided price concession data for a sample of 20 drugs for 2006 through 2008. High-cost drugs eligible for a specialty tier commonly include immunosuppressant drugs, those used to treat cancer, and antiviral drugs. Specialty tier-eligible drugs accounted for 10 percent, or $5.6 billion, of the $54.4 billion in total prescription drug spending under Medicare Part D plans in 2007. Medicare beneficiaries who received a low-income subsidy (LIS) accounted for most of the spending on specialty tier-eligible drugs-- $4.0 billion, or 70 percent of the total. 
Among all beneficiaries who used at least one specialty tier-eligible drug in 2007, 55 percent reached the catastrophic coverage threshold, after which Medicare pays at least 80 percent of all drug costs. In contrast, only 8 percent of all Part D beneficiaries who filed claims but did not use any specialty tier-eligible drugs reached this threshold in 2007. Most beneficiaries are responsible for paying the full cost-sharing amounts required by their plans. For such beneficiaries who use a given specialty tier-eligible drug, different cost-sharing structures result in varying out-of-pocket costs only until they reach the catastrophic coverage threshold, which 31 percent of these beneficiaries did in 2007. After that point, beneficiaries' annual out-of-pocket costs for a given drug are likely to be similar regardless of their plans' cost-sharing structures. Variations in negotiated drug prices can also affect out-of-pocket costs for beneficiaries who are responsible for paying the full cost-sharing amounts required by their plans. Variations in negotiated prices can occur between drugs, across plans for the same drug, and from year to year. For example, the average negotiated price for the cancer drug Gleevec across our sample of plans increased by 46 percent between 2006 and 2009, from about $31,200 per year to about $45,500 per year. Correspondingly, the average out-of-pocket cost for a beneficiary taking Gleevec for the entire year could have been expected to rise from about $4,900 in 2006 to more than $6,300 in 2009. Plan sponsors reported having little leverage to negotiate price concessions from manufacturers for most specialty tier-eligible drugs. One reason for this limited leverage was that many of these drugs have few competitors on the market. Plan sponsors reported that they were more often able to negotiate price concessions for drugs with more competitors on the market--such as for drugs used to treat rheumatoid arthritis. 
Two additional reasons cited for limited negotiating leverage were CMS requirements that plans include all or most drugs from certain therapeutic classes on their formularies, limiting sponsors' ability to exclude drugs from their formularies in favor of competing drugs; and that the relatively limited share of total prescription drug utilization among Part D beneficiaries for some specialty tier-eligible drugs was insufficient to entice manufacturers to offer price concessions. CMS provided GAO with comments on a draft of the January 2010 report. CMS agreed with portions of GAO's findings and suggested additional information for GAO to include in the report, which GAO incorporated as appropriate.
You are an expert at summarizing long articles. Proceed to summarize the following text: From its origin in 1956, the Disability Insurance (DI) program has provided compensation for the reduced earnings of individuals who, having worked long enough and recently enough to become insured, have lost their ability to work due to a severe, long-term disability. The program is administered by SSA and is funded through payroll deductions paid into a trust fund by employers and workers. In addition to cash assistance, DI beneficiaries receive Medicare coverage after they have received cash benefits for 24 months. In 2000, about 5 million disabled workers received DI cash benefits totaling about $50 billion, with average monthly cash benefits amounting to $787 per person. To qualify for benefits, an individual must have a medically determinable physical or mental impairment that (1) has lasted or is expected to last at least 1 year or result in death and (2) prevents an individual from engaging in substantial gainful activity. Individuals are considered to be engaged in substantial gainful activity if they have countable earnings at or above a certain dollar level. In addition to determining initial eligibility, the SGA standard also applies to the determination of continuing eligibility for benefits. Beyond a 9-month trial work period and an additional 3-month grace period during which beneficiaries are allowed to have any level of earnings without losing benefits, benefit payments are terminated once SSA determines that a beneficiary’s countable earnings exceed the SGA level. DI benefits are also terminated when a beneficiary (1) dies, (2) reaches age 65, upon which DI benefits are automatically converted to Social Security retirement benefits, or (3) medically improves, as determined by SSA through periodic continuing disability reviews. 
Under the Social Security Act, the Commissioner of Social Security has the authority to set the SGA level for individuals who have disabilities other than blindness. SSA has increased the SGA several times over the past decade, to $500 per month in 1990 and to $700 per month in July 1999. In December 2000, SSA finalized a rule calling for the annual indexing of the nonblind SGA level to the average wage index (AWI) and recently increased the level to $780 on the basis of this indexing. The SGA level for individuals who are blind is set by statute and indexed to the AWI. Currently, the SGA for blind individuals is $1,300 of countable earnings. Despite considerable disagreement and uncertainty among researchers, policy makers, and disability advocates over the employment effects of the SGA on DI beneficiaries, there is a theoretical basis for believing that the SGA acts as a work disincentive. That is, to maximize income, maintain health insurance coverage, or achieve a desirable labor-leisure tradeoff, beneficiaries may be inclined to limit their work effort to remain eligible for program benefits. This economic rationale is supported by anecdotal evidence from some beneficiaries who have reported that, although they would prefer to work or have greater earnings, they are fearful of doing so because of the severe financial consequences of exceeding the SGA—losing cash benefits and, eventually, Medicare benefits. In addition, some workers with disabilities whose current earnings are above the SGA level, making them ineligible for the DI program, may reduce their earnings to become eligible for DI benefits. Other researchers and policy makers believe that although the SGA level may serve as a work disincentive for some beneficiaries, this disincentive effect is likely to be very limited for several reasons. First, because severe long-term disability is a central criterion for DI eligibility, many DI beneficiaries may be unable to perform any substantial work.
Even if they are willing and able to work, beneficiaries may face employment barriers, such as high costs for supportive services and equipment or discrimination. In addition, we reported previously that many beneficiaries are unaware of DI program provisions affecting work, and several researchers we spoke with said that some beneficiaries may not even know how much they are allowed to earn. In terms of the SGA’s effect on those not currently on the DI rolls, disability advocates have stated that workers turn to the DI program only as a last resort and are not inclined to reduce income for the sole purpose of qualifying for benefits. Also, some studies indicate that the difficulty of qualifying for DI benefits—having to limit or cease work for at least 5 months before receiving benefits and undergoing a stringent review to certify one’s condition as severely disabled—may itself be a factor discouraging workers with disabilities from applying for these benefits. Few empirical studies have examined the effects of the SGA on the work patterns of disabled beneficiaries and nonbeneficiaries. Two studies conducted in the late 1970s by SSA researchers found that the SGA level does not have a substantial effect on the work behavior of beneficiaries. These studies examined past increases in the SGA level to assess whether these increases led to greater labor force participation on the part of DI beneficiaries. Neither study identified any clear change in beneficiary earnings as the SGA level increased. However, a study conducted by the Office of Inspector General (OIG) at the Department of Health and Human Services (HHS) found that some beneficiaries who had completed a trial work period subsequently reduced their earnings below the SGA level so they could continue to receive DI benefits. Out of the 100 cases sampled, 18 beneficiaries who were capable of working had quit work or reduced their earnings to maintain DI benefits.
In addition, an internal study conducted by SSA researchers examined how the earnings patterns of DI beneficiaries age 55 or older changed after they converted to retirement benefits at age 65. This study found that beneficiaries were more likely to return to work after converting to retirement benefits, which were subject to a more generous earnings limit. This evidence suggests that the SGA standard leads some beneficiaries to work less than they could. Despite the difficulties inherent in comparisons of different programs, studies of earnings limits in other programs may also provide some insights on the effect of the SGA. For example, studies of the retirement earnings test indicate that this limit probably caused some retirees to restrain their earnings in order to avoid having their benefits reduced. However, this “parking” effect appeared to be limited to only a relatively small proportion of the retiree population. For example, one study found that only about 2 percent of insured workers aged 65-69 had earnings at or near the retirement earnings limit. A study of the Supplemental Security Income (SSI) program’s 1619(b) provision also indicates that an earnings limit can result in beneficiaries limiting their work effort. As the 1619(b) earnings threshold was increased, some SSI beneficiaries increased their earnings in line with this threshold, which is consistent with the idea that beneficiaries restrain earnings in order to maintain program (in this case, Medicaid) eligibility. However, this “parking” behavior was limited to only those beneficiaries who had significant earnings—a group comprising about 2 percent of all adult, disabled SSI beneficiaries. Our analysis of SSA data indicates that the work patterns of most DI beneficiaries are unlikely to be affected by the SGA level. For example, from 1985 through 1997, on average, about 7.4 percent of DI beneficiaries who worked had annual earnings between 75 and 100 percent of the SGA level. 
These beneficiaries comprised only about 1 percent of the total DI caseload. This proportion of beneficiaries with earnings in this range of the SGA remained relatively small even though the number and proportion of DI beneficiaries who work rose dramatically during this period, increasing by almost 80 percent. Although almost one-fourth of working beneficiaries had earnings above the SGA level, most had very low earnings, well below the annualized SGA level. Even among those beneficiaries with earnings near the SGA level in a given year, most experience an eventual reduction in earnings in subsequent years. Nevertheless, some beneficiaries may change their work effort in response to the SGA level. For example, we found that about 13 percent of working beneficiaries who had earnings between 75 and 100 percent of the annualized SGA level in 1985 still had earnings near the SGA level in 1995, even though the SGA had increased from $300 to $500 a month during this period. In addition, about 7 percent of beneficiaries who did not have any earnings in the years immediately preceding their retirement earned income in the one or more years following retirement, when the SGA earnings limit no longer applied. However, while these findings are suggestive of a possible effect on work effort, our analysis could not definitively link beneficiary work patterns to the SGA level due in part to various limitations in SSA data, such as the lack of monthly earnings data. From 1985 through 1997, on average, about 7.4 percent of DI beneficiaries who worked –comprising about 1 percent of the total DI caseload – had annual earnings between 75 and 100 percent of the SGA level (see table 1). On an annual basis, the number of beneficiaries with incomes clustering at or just below the SGA level increased almost fourfold in absolute terms from 15,800 in 1985 to almost 60,000 in 1997. 
However, the annual percentage of working beneficiaries with earnings between 75 and 100 percent of the SGA level fluctuated from 8.5 percent in 1988 to 5.1 percent in 1990 to 8.9 percent in 1997. The proportion of beneficiaries with earnings at or just below the SGA level remained small even though the proportion of DI beneficiaries who worked rose dramatically, increasing by almost 80 percent between 1985 and 1997 (see table 2). The number of beneficiaries who worked increased from about 220,000 in 1985 to over 675,000 in 1997 and increased as a percent of all DI beneficiaries in every year, including during the 1990-91 recession. Throughout the period, most working DI beneficiaries had very low earnings. For example, in 1995, the median annual earnings of working beneficiaries were about $2,157 and the majority of working beneficiaries—about 58 percent—earned no more than 50 percent of the annualized SGA level. Although median earnings of working DI beneficiaries were about 15 percent higher in 1997 than they had been in 1985, they remained well below the annualized SGA level. While mean earnings for this group fluctuated between a high of $5,851 in 1985 and a low of $4,697 in 1993, figure 1 indicates that even with the 67 percent increase in the SGA level in 1990, the earnings distribution of DI beneficiaries did not change considerably from 1985 to 1997. We also examined beneficiaries who had earnings above the SGA level to see if, over time, they tended to reduce their earnings to an amount less than but close to the SGA level in order to maintain eligibility for DI benefits. We found that the majority of beneficiaries in 1985 who had earnings exceeding the SGA level eventually experienced a reduction to no earnings or to an amount less than 75 percent of the SGA (see table 3). By 1989, 48 percent of these individuals had no earnings and only 2 percent had earnings between 75 to 100 percent of the annualized SGA level.
This indicates that most beneficiaries who at some point have earnings above the SGA level do not subsequently engage in “parking” to remain on the DI rolls. Nevertheless, the large shift that we observed from earnings above the SGA to no or very low earnings does suggest decreasing ability or motivation to work. However, as late as 1997, about 32 percent of these beneficiaries had earnings exceeding the SGA level, indicating that some beneficiaries maintain their ability to achieve relatively substantial earnings. It is unclear why these individuals are able to consistently earn above the SGA level while retaining eligibility for DI benefits. Although beneficiaries in a trial work period or an extended period of eligibility may have earnings that exceed the SGA level, these work incentive periods are time-limited. Only beneficiaries who are blind are permitted, on a continuing basis, to earn above the SGA level that applies to nonblind individuals. However, we could not determine the status of individuals who had earnings exceeding the SGA level because SSA’s principal program data do not reliably identify whether a beneficiary is in a trial work period or extended period of eligibility and do not contain an indicator denoting whether a beneficiary is blind. Among beneficiaries who have earnings at or near, but not exceeding, the SGA level in a given year, most experience a reduction in earnings in subsequent years. For example, of beneficiaries in 1985 who earned between 75 to 100 percent of the annualized SGA level, 47 percent had no earnings by 1989, while the earnings of another 26 percent had fallen to between 1 and 74 percent of the annualized SGA level (see table 4). Nevertheless, about 11 percent of these beneficiaries still had earnings in 1989 between 75 to 100 percent of the annualized SGA level, suggesting that at least some beneficiaries may be attempting to stay close to the SGA without exceeding it. 
Even after the SGA level was increased in 1990, a small proportion of these beneficiaries continued to have earnings between 75 to 100 percent of the new annualized SGA level. For example, in 1995 about 13 percent of beneficiaries who had earnings between 75 to 100 percent of the annualized SGA level in 1985 still had earnings within this range of the higher annualized SGA level. Our review of the earnings of former DI beneficiaries who were converted to retirement benefits at age 65 also indicates that the work patterns of only a small proportion of beneficiaries are affected by the SGA. For example, we looked at DI beneficiaries who converted to retirement benefits at age 65 between 1987 and 1993. Of those in this group who had no earnings in the 3 years preceding retirement, about 7 percent did have earnings in 1 or more years following retirement (between ages 66-68) when the SGA earnings limit no longer applied. While small, the proportion of beneficiaries returning to work after retirement is greater than the proportion of older beneficiaries who return to work while still on the DI rolls. For example, we found that of beneficiaries who had no earnings at ages 55-57, about 3 percent had earnings at ages 58-60. These data suggest that, at least for a limited number of beneficiaries, the SGA may serve as a disincentive to work. For each analysis, the absence of key data elements made it difficult for us to determine the effects of the SGA level. For example, because SSA collects annual rather than monthly earnings data, we could not observe earnings relative to the SGA level on a monthly basis. However, many workers with disabilities may engage in only intermittent work throughout the year. The annual earnings data did not allow us to observe those individuals who only work several months out of the year and, in order to ensure receipt of benefits, “park” at the SGA level in those months.
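The earnings bands used throughout this analysis (no earnings, 1 to 74 percent of the annualized SGA level, 75 to 100 percent, and above the SGA level) can be sketched as a small classifier. This assumes the annualized SGA level is simply 12 times the monthly level; the monthly values cited ($300 before 1990, $500 from 1990) come from the text, but the function name and band labels are illustrative:

```python
def sga_band(annual_earnings, monthly_sga):
    """Classify a beneficiary's annual earnings relative to the
    annualized SGA level (12 x the monthly SGA), mirroring the
    earnings bands used in the analysis above."""
    annualized_sga = 12 * monthly_sga
    if annual_earnings <= 0:
        return "no earnings"
    if annual_earnings < 0.75 * annualized_sga:
        return "1-74% of SGA"
    if annual_earnings <= annualized_sga:
        return "75-100% of SGA"
    return "above SGA"
```

For example, against the 1995 annualized SGA of $6,000 ($500 a month), the $2,157 median for working beneficiaries falls in the 1-74 percent band; against the 1985 annualized SGA of $3,600 ($300 a month), hypothetical earnings of $3,000 would fall in the 75-100 percent band at issue in the parking analysis.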
Another data limitation is the difficulty in identifying whether a DI beneficiary is in a trial work period. Without reliable information on the trial work period status of beneficiaries, we could not determine the full range of work incentives and disincentives potentially affecting the earnings of DI beneficiaries. In addition, neither the CWHS nor SSA’s principal administrative file for the DI program (the Master Beneficiary Record) contains data that identify whether a beneficiary is blind. Such a distinction is important to analyses relating to the SGA because blind beneficiaries are subject to a higher SGA limit than nonblind beneficiaries are. Distinguishing blind and nonblind beneficiaries may help explain why a substantial proportion of beneficiaries continue to earn above the nonblind SGA level while retaining DI eligibility. Data and methodological limitations make it difficult to ascertain the effect of the SGA on DI program entry and exit rates. After 1990, the rate of program entry initially increased and then gradually declined. Although some researchers and policy makers believe that an increase in the SGA could encourage more people who are capable of working to enter the rolls, our analysis indicates that most new entrants were either not able or not inclined to increase their earnings or work at all. However, because of data limitations and the wide range of other possible factors affecting program entry, the link between the increase in the SGA level and these trends in entry is unclear. The analysis of program exits indicated that although the number of beneficiaries exiting the program rose over the 7 years after the 1990 increase in the SGA level, the annual rate of exit generally declined.
While beneficiary deaths and conversions to retirement benefits accounted for most program exits, the percentage of exits caused by medical improvement or a return to work increased gradually, from 1.9 percent in 1985 to 9.2 percent in 1996, and then rose sharply to 19.9 percent in 1997. However, the aggregation of medical improvement and return-to-work data prevents us from obtaining a full understanding of the link between the SGA and DI program exit behavior. Our analysis showed that the rate of program entry varied between 1990 and 1997, reaching a high of 19.3 percent in 1991 and then gradually declining, except for a slight upward movement in 1996, to a low of 10.3 percent in 1997 (see figure 2). In 1990, there was a discernible jump in the rate of program entry, which continued into 1991. The 1990 and 1991 rates were higher than the rates in any of the pre-1990 years we analyzed. The 1990 increase in the SGA level could have encouraged additional program entry to the extent that individuals with disabilities whose earnings were between the pre-1990 SGA level and the 1990 SGA level could then qualify for benefits. Also, some individuals could have reduced their earnings in order to qualify for DI benefits and then increased their earnings once they became eligible. However, the data we examined indicate that most DI beneficiaries who entered the program between 1990 and 1995 were either not able or not inclined to increase their earnings or work at all after receiving benefits. Relatively few of these new DI beneficiaries—between 2 and 5 percent—increased their earnings above the SGA level within the first 3 years after their initial year in the program, and most new beneficiaries had no earnings during these first several years on the rolls. There are a number of factors other than the increase in the SGA level that likely affected the post-1990 DI program entry rates.
For example, given that entry rates began to increase in 1988, prior to the 1990 SGA increase, the growth in program entry in 1990 and 1991 may simply represent a continuation of this earlier trend. In our prior work, we described several program factors, such as changes in the criteria for evaluating mental impairment disabilities, that appear to have contributed to this trend. In addition, a general labor force response to the 1990-91 recession might also explain the increase in entry. The recession could have resulted in layoffs of individuals with disabilities, as well as other workers. In response, some of these individuals might have sought entry to the DI program, rather than continuing a job search, even though they were previously able to work and earn above the SGA level. From the data, we cannot differentiate the reason for entry by a beneficiary, and so have no way of determining whether the increase in entry was related to the increase in the SGA level or some other factor. Likewise, the ensuing economic expansion may have helped to ensure continuing work and significant earnings for some disabled workers, thereby reducing the number of workers seeking and receiving DI benefits. In addition, advances in medicine and medical care, along with advances in and increased use of assistive devices and equipment (for example, adapted computers/keyboards), may have allowed some disabled workers to remain gainfully employed. Our analysis of DI program exits indicated that the yearly rate of exit generally declined over the 1990 to 1997 period even though the number of beneficiaries exiting the program was increasing (see figure 2). Program exit is largely driven by beneficiaries’ death or their conversion to retirement benefits, which together account for about 95 percent of aggregate program exits between 1985 and 1997 (see table 5). 
While medical improvement or return to work gradually increased from 2 to 9 percent of all exits between 1985 and 1996, there was a dramatic increase in the percentage of DI beneficiaries exiting the program in 1997 for these reasons. It is unclear what effect, if any, the SGA may have had on these program exits because, although the data indicate whether the beneficiary reached retirement age or died, they do not indicate whether the beneficiary returned to work or whether a continuing disability review determined that they had medically improved. The large increase in the percentage of beneficiaries returning to work or medically improving for 1997 may be related, in part, to an increase in the number of continuing disability reviews that occurred during 1997. However, a strong economy that drew more DI beneficiaries into the labor force or other factors also may have played a role. Our analysis of DI beneficiary earnings from the mid-1980s to the mid- 1990s suggests that the SGA level may act as a work disincentive for only a small proportion of DI beneficiaries. This is generally consistent with studies of the SGA and of earnings limits in related programs, which indicate that such limits, at most, affect a relatively small proportion of beneficiaries. However, the limitations in the available data mean that our findings should be accepted with caution. The lack of data on monthly earnings; on beneficiaries who are blind or are in a trial work period; and on beneficiaries who return to work, to name only a few areas, all hampered our efforts to arrive at more definitive conclusions. In particular, the lack of data identifying whether a beneficiary is blind precluded us from analyzing the effect of different SGA levels on blind and nonblind DI beneficiaries. We place significance on our finding that the SGA’s effect remained small even as increasing numbers of DI beneficiaries entered the labor force. 
While the DI program had grown by almost 72 percent from 1985 to 1997, the number of employed DI beneficiaries more than tripled. The number of working DI beneficiaries increased every year, even during the recession of the early 1990s. Yet it is unclear what has been driving this increase in employment. Given that most of these new workers have earnings far below the SGA level and remain at those low levels for many years afterwards, it is unlikely that this increase was caused by an increase in the SGA level. Other possible explanations include a buoyant economy throughout most of this period since 1985, enhanced employment protections for the disabled, increased availability of assistive technology, and a greater acceptance of hiring workers with disabilities by society in general. While this development has important implications for the DI program, the lack of data again makes it difficult for program officials, researchers, and policy makers to gain a better understanding of this phenomenon and reconfigure the DI program’s return-to-work incentives to reinforce this trend. The DI program, program beneficiaries, policy makers, and the general public could all greatly benefit from the collection of data that would facilitate a more comprehensive analysis of critical employment and program policy issues. Therefore, we recommend that the Commissioner of SSA take action to identify the full range of data necessary to assess the effects of the SGA on DI program beneficiaries, develop a strategy for reliably collecting these data, and implement this strategy in a timely manner, balancing the importance of collecting such data with considerations of cost, beneficiary privacy, and effects on program operations. In our study, we noted several key data elements that would be needed for a comprehensive assessment of the effects of the SGA level on program beneficiaries. 
These include data that identify the monthly earnings of beneficiaries and whether a beneficiary is blind, is participating in a trial work period, or has exited the DI program based on a return to work. Some of these data, such as information identifying whether a beneficiary is blind or is participating in a trial work period, are already collected by SSA but are not reliably recorded and maintained in SSA’s principal DI program data base. Other information, such as monthly earnings data, may be difficult to collect and may involve data issues that extend beyond the DI program. There may also be additional information, beyond the data elements we discussed, that SSA may consider necessary for assessing the effects of the SGA. In commenting on a draft of this report, SSA agreed with our recommendation. The agency, while acknowledging that it currently does not have the capability in place to track the employment and earnings patterns of DI beneficiaries, noted that it has made a commitment to collecting and analyzing DI beneficiary data. SSA stated that it is currently reaffirming that commitment and is developing a strategy to improve its efforts to collect such data. (SSA’s comments appear in app. II.) We believe that SSA’s stated commitment to developing improved data on DI beneficiaries’ earnings and employment represents a positive development. Such a commitment should include the development and implementation of a comprehensive strategy that would collect the data required for assessing the earnings and employment of all DI beneficiaries rather than just a subset, such as those who participate in particular programs initiated under the Ticket to Work Act. This strategy should also include additional data elements that would provide insight into our understanding of DI beneficiaries’ employment, such as data identifying beneficiaries who are blind or who are participating in a trial work period. SSA also provided some technical comments.
The agency noted that although our report acknowledges various data limitations that affected our analysis, including limitations in SSA’s earnings data, we did not sufficiently emphasize the extent to which these earnings data might include income that is not related to current employment. In addition, SSA stated that our data on reasons for exit, or termination, from the DI program varied from those published by SSA’s Office of the Chief Actuary. Finally, SSA questioned our analysis of beneficiaries whose earnings consistently exceed the SGA level. With regard to our discussion of limitations in the earnings data, we agree with SSA that these limitations are considerable and have noted that throughout the report. In particular, SSA highlighted the potential for SSA earnings records to include income that may not be related to current work. It is unclear whether a substantial portion of the earnings data we analyzed was unrelated to current work. For example, an SSA study stated that the agency’s earnings data may include “certain payments from profit sharing plans.” However, the study also noted that few beneficiaries had actually participated in such plans. In addition, although this study indicated a sizeable discrepancy between SSA earnings data and earnings reported by some beneficiaries in a survey interview, it was unclear whether this discrepancy was due to limitations in SSA data or to limitations inherent in self-reported data. Regarding the differences between our data on the reasons for program exit, or termination, and the data reported by SSA, we acknowledge in the report that SSA data indicate somewhat higher exit rates due to reasons other than death and conversion to retirement benefits. We believe that these differences are likely attributable to the use of different sources of data on program exit.
We used the CWHS because it was the most appropriate data set for conducting a longitudinal analysis of beneficiaries’ earnings in relation to the SGA level. Further, although the termination rates we report do differ from SSA’s data, the trends portrayed in our data on exits are, in fact, generally consistent with those indicated in the SSA data. For example, where SSA’s data indicate a 10.5 percentage point increase in program exit due to medical recovery or return-to-work from 1996 to 1997 (from 12.3 percent to 22.9 percent), GAO’s data similarly indicate a 10.7 percentage point increase (from 9.2 percent to 19.9 percent). Given that our discussion of program exits focuses primarily on trends rather than absolute numbers, we believe that our data adequately support our finding. Finally, regarding the issue of some beneficiaries being able to consistently earn above the SGA level, we identified in the report several reasons why some beneficiaries might do so. For example, such beneficiaries may be blind and thus subject to a higher SGA level than nonblind beneficiaries. We also note that without better DI program data, including data identifying whether a beneficiary is blind or in a trial work period, we could not provide a more definitive explanation of this phenomenon. Examination of individual case folders to determine why beneficiaries continued to earn above the SGA level—an approach suggested by SSA—was not a viable option given our resources and timeframes for completing this study. SSA also made a few other technical comments, which we incorporated where appropriate. We are sending copies of this report to the Honorable Jo Anne B. Barnhart, Commissioner of Social Security; appropriate congressional committees; and other interested parties. We will make copies available to others on request. This report is also available on GAO’s home page at http://www.gao.gov.
If you or your staff have any questions concerning this report, please call me at (202) 512-7215 or Charles A. Jeszeck at (202) 512-7036. Other individuals making key contributions to this report include Mark Trapani, Michael J. Collins, and Ann Horvath-Rose. To conduct our work, we analyzed data from the Social Security Administration’s (SSA) Continuous Work History Sample (CWHS). The CWHS consists of records representing a longitudinal 1 percent sample of all active Social Security accounts. It is designed to provide data on earnings and employment for the purpose of studying the lifetime working patterns of individuals. The data, drawn from SSA administrative data sets, contain information on an individual’s Disability Insurance (DI) eligibility, earnings, and demographic characteristics. We did not independently verify the accuracy of the CWHS data because they were commonly used by researchers in the past and they are derived from a common source of DI program information. From the total sample of 2,955,942 individuals, we selected a subsample of 92,662 individuals who were eligible for DI benefits at some point between 1984 and 1998. To obtain this sample, we excluded individuals whose Social Security record indicated a gap in DI entitlement, DI beneficiary status beginning before age 18 or continuing past age 64, a date of death before their DI beneficiary status, and those not identified as the primary beneficiary. We could not determine the exact date of eligibility because the CWHS only provides eligibility status as of December 31 of each year. Therefore, individuals were included in our analysis only as of their second year of DI eligibility to assure that the earnings we observed occurred only while an individual was in beneficiary status. 
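The sample-selection rules just described can be expressed as a simple record filter. The sketch below is purely illustrative: the dictionary field names are hypothetical, since the report does not describe the CWHS record layout.

```python
def eligible_for_sample(rec):
    """Apply the report's stated exclusion rules to one CWHS record.

    All field names are hypothetical placeholders; the actual CWHS
    layout is not described in the report.
    """
    if rec["has_entitlement_gap"]:
        return False  # gap in DI entitlement
    if rec["di_start_age"] < 18 or rec["di_end_age"] > 64:
        return False  # DI status beginning before age 18 or continuing past 64
    if rec["death_year"] is not None and rec["death_year"] < rec["di_start_year"]:
        return False  # date of death before DI beneficiary status
    if not rec["is_primary_beneficiary"]:
        return False  # not identified as the primary beneficiary
    return True

record = {
    "has_entitlement_gap": False,
    "di_start_age": 45,
    "di_end_age": 52,
    "death_year": None,
    "di_start_year": 1986,
    "is_primary_beneficiary": True,
}
print(eligible_for_sample(record))  # True
```

Note that, as the report states, even records passing such a filter enter the analysis only from their second year of DI eligibility, because the CWHS records eligibility status only as of December 31 of each year.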
In addition to our main sample, we selected another subsample of 9,990 DI beneficiaries who reached age 65 during the 1987 to 1993 time period for the purpose of analyzing DI beneficiaries who were converted to retirement benefits. All samples are subject to sampling error, which is the extent to which the sample results differ from what would have been obtained if the whole universe had been observed. Measures of sampling error are defined by two elements—the width of the confidence interval around the estimate (sometimes called precision of the estimate) and the confidence level at which the interval is computed. The confidence interval refers to the fact that estimates actually encompass a range of possible values, not just a single point. This interval is often expressed as a point estimate, plus or minus some value (the precision level). For example, a point estimate of 75 percent plus or minus 5 percentage points means that the true population value is estimated to lie between 70 percent and 80 percent, at some specified level of confidence. The confidence level of the estimate is a measure of the certainty that the true value lies within the range of the confidence interval. We calculated the sampling error for each statistical estimate in this report at the 95-percent confidence level. All percentage estimates from the sample have sampling errors (95 percent confidence intervals) of plus or minus 10 percentage points or less, unless otherwise noted. All numerical estimates other than percentages have sampling errors of 10 percent or less of the value of those numerical estimates, unless otherwise noted. To analyze the effects of the SGA on the earnings of DI beneficiaries, we attempted to determine whether DI beneficiaries engage in “parking,” that is, whether they limit their earnings to a level at or just below the SGA limit in order to maintain eligibility for benefits.
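Before turning to the parking analysis, the sampling-error convention described above can be illustrated with a standard normal-approximation interval for a proportion. This is a generic textbook sketch, not the variance estimator actually used for the CWHS sample, which the report does not specify; the sample size of 300 is an arbitrary assumption for illustration.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Approximate 95-percent confidence interval for a sample proportion.

    Normal-approximation formula for illustration only; the report does
    not state the exact method used for its CWHS estimates.
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    return p_hat - z * se, p_hat + z * se

# A point estimate of 75 percent from a hypothetical sample of 300
low, high = proportion_ci(0.75, 300)
print(round(low, 3), round(high, 3))  # roughly 0.701 to 0.799
```

This reproduces the report's example of "75 percent plus or minus 5 percentage points," i.e., an interval from about 70 to 80 percent at the 95-percent confidence level.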
If beneficiaries do indeed park, then we would expect to find a clustering of earnings just below the SGA level. The occurrence of such clustering would provide a fairly strong indication that beneficiaries are limiting their employment and earnings to stay in the DI program, thereby reducing program exit. In addition, to the extent that beneficiaries park or otherwise limit their earnings due to a work disincentive effect of the SGA, we would expect an increase in the SGA level to result in a corresponding increase in beneficiaries’ earnings. To determine if earnings clustered around the SGA level, we examined the distribution of earnings both before and after the 1990 increase in the SGA level to see what proportion of beneficiaries had annual earnings at or within 5 percent, 10 percent, and 25 percent of the annualized SGA level. We also tracked those beneficiaries who had earnings near the annualized SGA level in a given year to see if they maintained this level of earnings in subsequent years. In addition, we tracked those beneficiaries who were on the rolls and had no earnings or had earnings below the annualized SGA level prior to 1990 to see if they increased their earnings and clustered around the new annualized SGA level. Finally, we examined beneficiaries who, in a given year, had earnings above the annualized SGA level to see if, over time, they tended to reduce their earnings to an amount near, but below, the SGA to maintain program eligibility. To further analyze whether DI beneficiaries limit their earnings due to the SGA, we observed how these individuals behave once they are no longer subject to the SGA level. We did this by looking at the earnings of DI beneficiaries who reached age 65 and were converted to the Old Age and Survivors Insurance (OASI) program. Once DI beneficiaries reach age 65, they are converted to retired worker status and their benefits are paid from the OASI trust fund. Likewise, they are no longer subject to the SGA limit. 
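The clustering windows described above (earnings at or within 5, 10, and 25 percent of the annualized SGA level) can be made concrete by bucketing each beneficiary's annual earnings by their ratio to the annualized SGA. The function and band labels below are illustrative assumptions, not GAO's or SSA's actual code, and they read the windows as bands at or below the SGA, consistent with the parking hypothesis.

```python
def sga_band(annual_earnings, annual_sga):
    """Classify annual earnings relative to the annualized SGA level.

    Bands mirror the report's 5-, 10-, and 25-percent windows below the
    SGA; labels and cutoffs are illustrative only.
    """
    if annual_earnings > annual_sga:
        return "above SGA"
    ratio = annual_earnings / annual_sga
    if ratio >= 0.95:
        return "within 5%"
    if ratio >= 0.90:
        return "within 10%"
    if ratio >= 0.75:
        return "within 25%"
    return "below 75% of SGA"

# 1995 annualized SGA: $500 per month x 12 = $6,000
print(sga_band(5800, 6000))  # within 5% -- a candidate "parker"
```

A distribution with a disproportionate share of beneficiaries in the "within 5%" band, year after year, would be the kind of clustering the report treats as evidence of parking.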
If beneficiaries are limiting their earnings due to the SGA, then we would expect them to increase their earnings after retirement at age 65. Therefore, a finding that a significant proportion of former DI beneficiaries return to work or increase earnings after conversion would serve as some evidence for the work disincentive effect of the SGA. For DI beneficiaries who had entered the DI rolls prior to age 62, remained on the rolls until being converted to retirement benefits at age 65, and survived to age 68, we examined their earnings between ages 66 and 68 to determine whether there was an increase in earnings and employment after they left the DI program. To examine the effects of the SGA on DI program entry and exit rates, we looked at the rate of entry and exit both before and after the increase in the SGA. If people respond to the change in the SGA, then we might expect the rate of entry to increase after the increase in the SGA level. With the higher SGA level, some individuals with disabilities would now qualify for benefits if their earnings are between the old and new SGA level. Likewise, some individuals with earnings just above the new SGA level may reduce their earnings in order to qualify and then increase their earnings after they become eligible. Therefore, we examined the earnings, through 1997, of new beneficiaries who entered the DI program between 1990 and 1995 to see if they tended to increase their earnings after becoming eligible for benefits. In terms of program exit, we might expect exit rates to decrease after an increase in the SGA level since many working beneficiaries may now be further from the new level and some may even increase their earnings to an amount near the new level (but higher than the old level) without having their benefits terminated.
We examined data indicating the reasons that beneficiaries exit DI to determine the extent to which program exits resulted from beneficiaries returning to work or medically improving versus retirements or deaths. The absence of key data in the CWHS and in other SSA data sets limited our ability to draw clear conclusions from our analysis. For example, while the SGA is a monthly level, the available earnings data are recorded only on a yearly basis. Therefore, we were not able to analyze DI beneficiaries’ monthly earnings in relation to the actual, monthly SGA limit. Instead, we examined beneficiary earnings in terms of the annualized SGA level; that is, we multiplied the monthly SGA amount by 12 to permit comparison of the monthly limit to the annual data. (For example, the SGA level in 1995 was $500 per month, so the annualized SGA level was $500 multiplied by 12, or $6,000.) As a result, we were not able to identify parking that might have occurred among beneficiaries who, for example, worked for only a few months during the year but limited their earnings to a level near, but not exceeding, the SGA level in each of those months. Nevertheless, our analysis did allow us to identify individuals who consistently have earnings at or near the SGA level. To the extent that beneficiaries are trying to maximize their income—that is, earn as much as they can within a given year while maintaining DI eligibility—there may be a significant number of beneficiaries who have sustained earnings up to the SGA level through much of the year. Another data limitation concerned beneficiaries who are in a trial work period. The trial work period allows beneficiaries to test their ability to work without penalty. Therefore, beneficiaries can earn any amount without being subject to the SGA limit. Neither the CWHS nor other SSA data sets provide a reliable means for identifying beneficiaries in a trial work period.
As a result, in our parking analysis, we were not able to distinguish the earnings of beneficiaries who are subject to the SGA limit from those who are not subject to this limit. Although the trial work period allows beneficiaries to earn any amount, there is no reason to believe that all beneficiaries in a trial work period will have earnings greater than the SGA level. An individual’s disability may limit his/her earnings to well below the SGA level. However, we do not believe that this limitation affected our analysis to a great extent because it is unlikely that the earnings of beneficiaries in a trial work period would systematically fall at or near the SGA level and thereby skew our analysis. The identification of blind and nonblind beneficiaries also created a limitation in our analysis. The CWHS does not allow us to distinguish between blind and nonblind DI beneficiaries, which is important since blind beneficiaries are subject to a higher SGA limit. Some of the beneficiaries that we observe earning above the nonblind SGA limit may actually be blind individuals. In addition, if a substantial number of blind beneficiaries had earnings just below the nonblind SGA level, then our analysis could exaggerate the existence of parking. However, this limitation is not likely to have substantially impacted our analysis of parking among nonblind beneficiaries because blind individuals represent only about 2 percent of the DI caseload and therefore probably comprised a very small portion of our sample. Perhaps more importantly, the inability to identify blind beneficiaries means that we could not assess the extent to which they exhibit parking behavior. As a result, our analysis may be understating the extent of parking in the DI program. Finally, the lack of data on impairment-related work expenses (IRWE) also limited our ability to analyze the effects of the SGA level on employment. 
SSA deducts the cost of certain impairment-related expenses needed for work from earnings when making SGA determinations. The inability to identify IRWE could exaggerate the effect of the SGA on earnings since some beneficiaries near or above the SGA level may not have been at this level once IRWE was subtracted from their earnings. However, the inability to determine IRWE is not likely to have significantly impacted our analysis because SSA officials told us that IRWE was applied in only a very limited number of cases during the years of our analysis. Despite these substantial limitations, the CWHS is the best available data set for identifying the basic program information needed to conduct our analysis within acceptable timeframes. The principal alternative data set within SSA—the Master Beneficiary Record—does not lend itself to easy analysis because it is designed to fulfill SSA’s administrative objectives. In particular, we did not choose to use this data set because it would not have provided the longitudinal data that we needed unless it was linked with other SSA administrative files containing DI program information. Linking these complex files would have raised many uncertainties regarding the ultimate quality of the data and would have added substantial time and complexity to our analysis. In addition, non-SSA data sets, such as the Census Bureau’s Current Population Survey, could not serve our needs because, among other limitations, we would not be able to adequately identify DI program participation for most of the years of our analysis. In addition to data limitations, our analysis was also constrained by the lack of any quantitative evaluation of other possible factors affecting the earnings of DI beneficiaries and disabled workers.
For example, our analysis does not control for other factors in the economy such as recessions, implementation of the Americans With Disabilities Act (ADA), advances in medicine and medical care, and advances in and increased use of assistive devices and equipment. A recession may increase entry into the DI program, but implementation of the ADA and improvements in medical care and assistive devices and equipment may either decrease entry or increase exit. The inability to control for these factors limited our ability to make clear inferences from the data regarding the effects of the SGA.
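As a footnote to the IRWE discussion above: the deduction SSA applies before comparing earnings to the monthly SGA level can be sketched as below. This is a deliberately simplified illustration, since actual SGA determinations involve other exclusions (such as employer subsidies) not modeled here, and the function name is an assumption.

```python
def countable_earnings(gross_monthly, irwe=0):
    """Monthly earnings compared to the SGA after deducting
    impairment-related work expenses (IRWE).

    Simplified sketch; real SGA determinations apply additional
    exclusions not modeled here.
    """
    return max(gross_monthly - irwe, 0)

monthly_sga_1995 = 500  # dollars per month
# Gross earnings of $550 look like SGA-level work, but $75 of IRWE
# brings countable earnings to $475, below the $500 limit.
print(countable_earnings(550, irwe=75) > monthly_sga_1995)  # False
```

This is why, as the report notes, ignoring IRWE could overstate how many beneficiaries appear to be at or above the SGA level, although SSA officials indicated IRWE applied in very few cases during the study years.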
The Social Security Administration's (SSA) Disability Insurance (DI) program paid $50 billion in cash benefits to more than five million disabled workers in 2000. Eligibility for DI benefits is based on whether a person with a severe physical or mental impairment has earnings that exceed the Substantial Gainful Activity (SGA) level. SSA terminates monthly cash benefit payments for beneficiaries who return to work and have earnings that exceed the SGA level--$1,300 per month for blind beneficiaries and $780 per month for all other beneficiaries. GAO found that the SGA level affects the work patterns of only a small proportion of DI beneficiaries. However, GAO also found that the SGA may affect the earnings of some beneficiaries. About 13 percent of those beneficiaries with earnings near the SGA level in 1985 still had earnings near the SGA level in 1995, even though the level was increased during that period. The absence of key information identifying the monthly earnings of beneficiaries, their trial work period status, and whether they are blind limited GAO's ability to definitively identify a relationship between SGA levels and beneficiaries' work patterns. Data limitations also make the effect of the SGA on DI program entry and exit rates difficult to isolate. Although the rate of program entry increased in the years immediately following a 1990 increase in the SGA level, it then gradually declined to a level below the pre-1990 entry rates. Since 1990, DI exit rates continue to be driven largely by beneficiary death and conversion to retirement benefits. However, the percentage of all exits caused by improvements in medical conditions or a return to work increased slowly, from 1.9 percent in 1985 to 9.2 percent in 1996, and then rose dramatically to 19.9 percent in 1997. 
A substantial increase in the number of continuing disability reviews done by SSA may account, in part, for this 1997 upturn, but data limitations preclude GAO from obtaining a full understanding of the link between the SGA and exit behavior.
Subject to the authority, direction, and control of the Secretary of Defense, each military service (Army, Navy, Marine Corps, and Air Force) has the responsibility to recruit and train a force to conduct military operations. In fiscal year 2006, DOD committed over $1.5 billion to its recruiting effort. Each service, in turn, has established a recruiting command responsible for that service’s recruiting mission and functions. The services’ recruiting commands are similarly organized, in general, to accomplish the recruiting mission. Figure 1 illustrates the organization of the recruiting commands from the senior headquarters level through the recruiting station where frontline recruiters work to contact prospective applicants and sell them on military service. Each service has at least two levels of command between the senior headquarters and the recruiting station where frontline recruiters work to contact prospective applicants for military service. The Army Brigades, Navy and Marine Corps Regions, and Air Force Groups are subordinate commands of their service recruiting command and have responsibility for recruiting operations in large portions of the country. The Navy and Marine Corps organize their servicewide recruiting commands into Eastern and Western Regions that more or less divide responsibilities east and west of the Mississippi River. The Army, in comparison, has five Brigades and the Air Force has four Groups based regionally across the country that are responsible for their recruiting operations. These commands are further divided into local levels responsible for coordinating the frontline recruiting efforts.
These 41 Army Battalions, 26 Navy and 6 Marine Corps Districts, and 28 Air Force Squadrons are generally organized around market demographics, including population density and geographic location. Finally, the 1,200 to 2,000 recruiting stations per service (or, in the case of the Marine Corps, the substations) represent that part of the recruiting organization with which the general public is most familiar. Of the approximately 22,000 total military recruiters in fiscal year 2006, almost 14,000 are frontline recruiters who are assigned a monthly recruiting goal. The recruiter’s monthly goal varies by service, but is generally 2 recruits per month. The remaining recruiters—roughly 8,000—hold supervisory and staff positions throughout the services’ recruiting commands. Table 1 provides a summary of the average number of recruiters by service for fiscal years 2002 through 2006 broken out by total number of recruiters and frontline recruiters who have a monthly recruiting goal. A typical frontline military recruiter is generally a midlevel enlisted noncommissioned officer in the rank of Army and Marine Corps Sergeant (E-5) or Staff Sergeant (E-6), Navy Petty Officer Second Class (E-5) or First Class (E-6), and Air Force Staff Sergeant (E-5) or Technical Sergeant (E-6), who is between the ages of 25 and 30 years old and has between 5 and 10 years of military service. While some frontline recruiters volunteer for recruiting as a career enhancement, others are selected from among those the services have identified as their best performers in their primary military specialties. All services have comprehensive selection processes in place and specific eligibility criteria for recruiting duty. For example, recruiters must meet service appearance standards, have a stable family situation, be able to speak without any impairment, and be financially responsible.
The services screen all prospective recruiters by interviewing and conducting personality assessments and ensuring the prospective recruiters meet all criteria. To augment its uniformed recruiters, the Army also uses contract civilian recruiters, and has been doing so under legislative authority since fiscal year 2001. This pilot program, which authorizes the Army to use civilian contractors, will run through fiscal year 2007. The goal of the program is to test the effectiveness of civilian recruiters. If civilian recruiters prove effective, this would allow the Army to retain more noncommissioned officers in their primary military specialties within the warfighting force. Currently, the Army is using almost 370 contract civilian recruiters, representing approximately 3 percent of the Army’s total recruiting force. In general, training for frontline recruiters is similar in all services and has focused on ethics and salesmanship, with a growing emphasis placed on leadership and mentoring skills to attract today’s applicant. Each service conducts specialized training for approximately 6 weeks for noncommissioned officers assigned as recruiters. The number of hours of training time specifically devoted to ethics training as a component of the recruiter training curriculum ranges from 5 hours in the Navy to 34 hours of instruction in the Army. After recruiters successfully convince applicants of the benefits of joining the military, they complete a prescreening of the applicant, which includes an initial background review and a physical and moral assessment of the applicant’s eligibility for military service. After the recruiter’s prescreening, the military pays for the applicant to travel to 1 of 65 military entrance processing stations (MEPS) located throughout the country.
At the processing stations, which are under the direction of DOD’s Military Entrance Processing Command, processing station staff administer the Armed Services Vocational Aptitude Battery, a test to determine whether the applicant is qualified for enlistment and a military job specialty, and conduct a medical examination to determine whether the applicant meets physical entrance standards. After the processing station staff determine that an applicant is qualified, the applicant signs an enlistment contract and is sworn into the service and enters the delayed entry program. When an applicant enters the delayed entry program, he or she becomes a member of the Individual Ready Reserve, in an unpaid status, until reporting for basic training. An individual may remain in the delayed entry program for 1 day up to 1 year. Just before reporting for basic training, the applicant returns to the processing station, undergoes a brief physical examination, and is sworn into the military. Figure 2, in general, illustrates the recruiting process from a recruiter’s initial contact with a prospective applicant to the applicant’s successful graduation from the service’s initial training school, commonly referred to as basic training. DOD and the services have limited visibility to determine the extent to which recruiter irregularities are occurring. The Office of the Under Secretary of Defense (OUSD) for Personnel and Readiness has the responsibility for overseeing the recruiting program. However, OUSD has not established a framework to conduct oversight of recruiter irregularities and provide guidance requiring the services to maintain data on recruiter wrongdoing. Although not required by OUSD to do so, the services require their recruiting commands to maintain data for 2 years; the Army Recruiting Command maintains data for 3 years and can retrieve case files back to fiscal year 1998. 
Furthermore, OUSD has not established criteria for the services to characterize recruiter irregularities or developed common terminology for irregularities. Accordingly, the services use different terminology, which makes it difficult to compare and analyze data across the services. Moreover, each of the services uses multiple systems for maintaining data that are not integrated, as well as decentralized processes for identifying and tracking allegations and service-identified incidents of recruiter irregularities. Perhaps most significantly, none of the services accounts for all allegations or incidents of recruiter irregularities. Therefore, service data likely underestimate the true number of recruiter irregularities. Nevertheless, our analysis of service data suggests that most allegations are not substantiated. Effective federal managers continually assess and evaluate their programs to provide accountability and to assure that they are well designed and operated, appropriately updated to meet changing conditions, and achieving program objectives. Specifically, managers need to examine internal control to determine how well it is performing, how it may be improved, and the degree to which it helps identify and address major risks for fraud, waste, abuse, and mismanagement. According to the mission statement for the Office of the Under Secretary of Defense for Personnel and Readiness, its responsibilities include reviewing and evaluating plans and programs to ensure adherence to approved policies and standards, including DOD’s recruitment program. OUSD officials stated that they review service recruiter irregularity issues infrequently, usually in response to a congressional inquiry, and they do not perform oversight of recruiter irregularities. OUSD has not issued guidance requiring the services to maintain data on recruiter irregularities.
Nevertheless, the services require their recruiting commands to maintain data on recruiter irregularities for 2 years; the Army Recruiting Command maintains data for 3 years and can retrieve case files dating back to fiscal year 1998. Moreover, OUSD has not established or provided criteria to the services for how they should characterize various recruiter irregularities and has not developed common terminology because it responds to individual inquiries and, in general, uses the terminology of the service in question. Accordingly, the services use different terminology to refer to recruiter irregularities. How the services categorize the irregularity affects how they maintain data on recruiter irregularities. For example, the Army uses the term impropriety while the Navy, Marine Corps, and Air Force use the term malpractice to characterize the intentional enlistment of an unqualified applicant. Only the Army uses the term recruiter error to describe those irregularities not resulting from malicious intent or gross negligence. Consequently, if DOD were to require services to report on recruiter wrongdoing, the Army might not include its recruiter error category because these cases are not willful violations of recruiting policies and procedures and the Army does not identify such cases as substantiated or unsubstantiated in its data system. The Air Force uses the term procedural error to refer to an irregularity occurring as a result of an administrative error by the recruiter due to lack of knowledge or inattention to detail. If DOD were to require services to report on recruiter wrongdoing, the Air Force might not include its procedural error category because these cases are not intentional acts to facilitate the recruiting process for an ineligible applicant.
In both cases, however, wasted taxpayer dollars result; unintentional recruiter errors can have the same effect as intentional recruiter irregularities because both result in inefficiencies in the recruiting process. DOD’s need for oversight may become more critical if the department decides to rely more heavily on civilian contract recruiters in the future. As we previously stated, the civilian recruiter pilot program currently authorizes the Army to use civilian recruiters, through fiscal year 2007, to test their effectiveness. Future reliance on civilian recruiters, in any service, would allow a service to retain more noncommissioned officers in their primary military specialties. However, OUSD would also need to be in a position to assure that this type of change is well designed and operated, and that its recruiting programs are appropriately updated to reflect a change in recruiting operations. None of the services can readily provide a comprehensive and consolidated report on recruiter irregularities within their own service because they use multiple systems that are not integrated. Currently, the services use systems that range from electronic databases to hard-copy paper files to track recruiter irregularities and do not have a central database dedicated to compiling, monitoring, and archiving information about recruiter irregularities. When we asked officials in each of the services for a comprehensive report of recruiter irregularities that occurred within their own service, they were unable to readily provide these data. Officials had to query and compile data from separate systems. For example, the Navy Recruiting Command had to access paper files for allegations of recruiter irregularities, while the Air Force Judge Advocate provided information from an electronic database from which we were able to extract cases specifically related to recruiter irregularities. 
Furthermore, the services cannot assure the reliability of their data because the services lack standardized procedures for recording data, their multiple systems use different formats for maintaining data, and in some instances the services do not conduct quality reviews or edit checks of the data. The services used the following systems to maintain data on recruiter irregularities at the time of our review: Army: The Army maintains three separate data systems that contain information about recruiter irregularities. The Army Recruiting Command’s Enlistment Standards Division has a database that houses recruiting irregularities that pertain to applicant eligibility. The Army Recruiting Command Inspector General maintains a separate database that houses other irregularities, including recruiter misconduct that may result in nonjudicial punishment. The Judge Advocate maintains hard-copy case files for recruiter irregularities that are criminal violations of the recruiting process that may result in judicial punishment. Navy: The Navy maintains four separate data systems that contain information about recruiter irregularities. The Naval Inspector General, the Navy Bureau of Personnel Inspector General, and the Navy Recruiting Command Inspector General all maintain some data on allegations of recruiter irregularities. The Naval Criminal Investigative Service investigates and maintains data on Navy criminal recruiting violations. Marine Corps: The Marine Corps Recruiting Command maintains two systems that track information on recruiting irregularities, one that captures reported allegations and another that only tracks the disposition of allegations and service-identified incidents that a commander or recruiting official at some level in the recruiting command structure determined to merit an inquiry or investigation. The Naval Criminal Investigative Service investigates and maintains data on Marine Corps criminal recruiting violations.
Air Force: The Air Force maintains three separate databases with information about recruiter irregularities. The Air Force Recruiting Service Inspector General maintains a database that houses data on allegations of recruiter irregularities. The liaison from the Air Force Recruiting Service, located at the Air Force basic training site, maintains data within a separate electronic system on allegations of recruiter irregularities that applicants raise about their recruiters when they report to basic training. The Air Force Judge Advocate maintains a database containing criminal violations of recruiting practices and procedures. At the time of our review, Navy officials told us they believe there is value in having servicewide visibility over the recruiting process and they plan to improve their systems for maintaining data on recruiter irregularities. Navy officials stated that the Navy Bureau of Personnel Inspector General is working with the Navy Recruiting Command Inspector General and the Naval Education and Training Command to develop a system that maintains recruiting and training data that will include allegations and service-identified incidents of recruiter irregularities. Marine Corps officials told us they are in the process of improving their systems for maintaining data on recruiter irregularities by merging all data on allegations and service-identified incidents of recruiter irregularities into one database that can be accessed at all command levels of the Marine Corps Recruiting Command. An Air Force official told us that as a result of our review, the Air Force modified its system for capturing allegations and service-identified incidents surfacing at basic training by improving its ability to query the system for information on the type of allegation or incident and whether or not it was a substantiated case of recruiter wrongdoing. Where and how an irregularity is identified will often determine where and how it will be resolved. 
The services identify an allegation or incident of recruiter wrongdoing in a number of ways. These include input from service hotlines, internal inspections, congressional inquiries, and data collected by DOD’s Military Entrance Processing Command. The services’ recruiting command headquarters typically handle allegations and service-identified incidents of recruiter irregularities that surface through any of these means during the recruiting process. At other times, allegations surface in the recruiting process at command levels below the service recruiting command headquarters, and commanders at the Army Battalion, Navy and Marine Corps District, and Air Force Squadron level handle allegations that typically surface during supervisory reviews at the recruiting stations and substations. We were unable to determine the extent of these allegations, however, because the service recruiting commands do not maintain complete data. For example, Military Entrance Processing Command officials, responsible for assessing an applicant’s moral, mental, and physical eligibility for military service, stated that they forward all allegations and service-identified incidents of recruiter irregularities that surface during the screening process at the military entrance processing station to the services’ recruiting commanders. However, officials also stated that the services’ recruiting commanders do not provide feedback to them regarding the disposition of these cases. In fact, the services’ recruiting command headquarters data did not show records of allegations and service-identified incidents of recruiter irregularities received from the Military Entrance Processing Command. Additionally, each service provides applicants an opportunity to disclose any special circumstances relating to their enlistment process, including allegations of recruiter wrongdoing, when they enter basic training.
Army and Air Force officials told us that they record all allegations of recruiter irregularities made by applicants at basic training. Army Recruiting Command officials stated that liaison officers at each of the basic training installations forward all allegations received from applicants to the Army Recruiting Command Enlisted Standards Division to record in its database. The Air Force implemented a new database in fiscal year 2005 specifically to record and resolve all allegations and service-identified incidents of recruiter wrongdoing that surface at basic training. The Navy and Marine Corps, on the other hand, do not record all allegations of recruiter irregularities made by applicants at basic training. Navy: The Navy gives applicants a final opportunity to disclose any irregularity that they believe occurred in their recruiting process when they arrive at basic training. The Recruiting Command Inspector General has the authority to investigate allegations or service-identified incidents of recruiter wrongdoing and uses its Navy Recruit Quality Assurance Team to conduct the final Navy recruiting quality assurance check before applicants begin basic training. In turn, the Assurance Team generates reports on allegations raised by applicants who claim they were misled during the recruiting process and submits its reports to the Navy Recruiting Command Inspector General. Navy recruiting command officials explained that the Inspector General investigates those allegations that the Assurance Team, based on the professional judgment and experience of its team members, recommends for further investigation. The Navy Recruiting Command Inspector General, however, does not maintain data on allegations that it does not investigate. The Assurance Team also sends its reports to the Navy Recruiting District Commanders who are responsible for overseeing the recruiters who appear on the reports. 
The District Commanders use the Assurance Team’s reports to monitor recruiter wrongdoing. Again, however, the District Commanders do not provide feedback to the Assurance Team as to how they resolve these allegations, nor do they report this information to the Navy Recruiting Command Inspector General unless they deem the case to merit further investigation or judicial processing. Moreover, the Assurance Team members do not record allegations of wrongdoing as a recruiter irregularity in those cases where they can easily resolve the discrepancy by granting an applicant an enlistment waiver to begin basic training. Assurance Team officials told us that they believe that some recruiters encourage applicants to conceal potentially disqualifying information until they arrive at basic training because the recruiters perceive that it is relatively easy to process a waiver at basic training. In addition, these same officials told us that this behavior saves recruiters the burden of collecting supporting documentation and expedites the time it takes a recruiter to sign a contract with an applicant and complete the recruiting process. Marine Corps: The Marine Corps also gives applicants a final opportunity to disclose any irregularity that they believe occurred in their recruiting process prior to beginning basic training. However, the Marine Corps’ Eastern and Western Recruiting Region staff use different criteria to handle allegations of recruiter irregularities that they cannot corroborate. Recruiting staff at the Eastern Region basic training site in Parris Island, South Carolina, enter all allegations applicants make against recruiters, while recruiting staff at the Western Region basic training site in San Diego, California, only enter those allegations that a third party can verify. 
A Marine Corps Recruiting Command official told us that, as a result of our review, Marine Corps officials discussed accounting procedures for allegations of recruiter irregularities at the command’s national operations conference held in May 2006. The official further stated that the Marine Corps Recruiting Command’s goal is to standardize procedures to account for all allegations of recruiter irregularities. Existing data suggest that substantiated cases of recruiter wrongdoing make up a small percentage of all allegations and service-identified incidents, although, for reasons previously cited, we believe the service data likely underestimate the true number of recruiter irregularities. Substantiated cases of recruiter irregularities are those cases in which the services determined a recruiter violated recruiting policies or procedures based on a review of the facts of the case. (The procedures in place to address substantiated cases of recruiter irregularities are discussed in more detail later in this report.) While the services cannot assure that they have a complete accounting of recruiter irregularities, the data that they reported to us are instructive in that they show the number of allegations, substantiated cases, and criminal violations increased overall from fiscal year 2004 to fiscal year 2005. At the same time, the number of accessions into the military decreased from just under 250,000 in fiscal year 2004 to about 215,000 in fiscal year 2005. Table 2 shows that, DOD-wide, the services substantiated about 10 percent of all allegations and service-identified incidents of recruiter irregularities. The services categorized cases as substantiated when the preponderance of the evidence supported the allegation of wrongdoing against a recruiter. Similarly, the services categorized cases as unsubstantiated when the preponderance of the evidence did not support the allegation against a recruiter.
Table 3 shows the number of recruiter irregularities that were criminal violations of the recruiting process and addressed by the services’ Judge Advocate or criminal investigative service. The number of criminal violations in the recruiting process increased in fiscal year 2005; however, in both fiscal years, this number represented approximately 1 percent of all allegations and service-identified incidents of recruiter irregularities. The large increase in the number of Navy cases in fiscal year 2005 is likely a result of a special investigation where four cases led to nine additional cases of criminal wrongdoing. Table 4 shows that on average, the percentage of substantiated cases of recruiter wrongdoing compared to the number of actual accessions was under 1 percent in each service during the past 2 fiscal years. Table 5 shows that when we compared the number of substantiated cases of recruiter wrongdoing to the number of frontline recruiters, 4.7 percent of recruiters would have had a substantiated case against them in fiscal year 2005 if each recruiter who committed an irregularity had committed only one. (However, this is not to say that 4.7 percent of frontline recruiters committed an irregularity, given that some recruiters may have committed more than one irregularity). Without an oversight framework to provide complete and reliable data, DOD and the services are not in a position to gauge the extent of recruiter irregularities or when corrective action is needed, nor is the department in a sound position to give Congress and the general public assurance that recruiter irregularities are being addressed. A number of factors within the current recruiting environment may contribute to recruiting irregularities. Such factors include the economy, ongoing hostilities in Iraq, and fewer applicants who can meet military entrance standards. 
These factors, coupled with the typical difficulties of the job and pressure to meet monthly recruiting goals, challenge the recruiter and can lead to recruiter irregularities in the recruiting process. Data show that as the end of the monthly recruiting cycle draws near, the number of recruiter irregularities may increase. Among a number of factors that contribute to a challenging recruiting environment are the current economic situation and the ongoing hostilities in Iraq. Service recruiting officials told us that the state of the economy, specifically the low unemployment rate, has had the single largest effect recently on meeting recruiting goals. These officials stated that DOD must compete harder for qualified talent to join the military when the economy is strong. According to U.S. Department of Labor, Bureau of Labor Statistics data, the national unemployment rate fell each year between 2003 (when it was at 6 percent) and 2005 (when it was 5.1 percent). In fiscal year 2005, three of the eight active and reserve components we reviewed—the Army, Army Reserve, and Navy Reserve—failed to meet their recruiting goals. Recruiters also believe that the ongoing hostilities in Iraq have made their job harder. Results of a DOD internal survey show that almost three-quarters of active duty recruiters agreed with the statement that current military operations made it hard for them to achieve recruiting goals and missions. Recruiters we interviewed expressed the same opinion. DOD has found that the public’s perceptions about military enlistment have changed because youth and their parents believe that deployment to a hostile environment is very likely for servicemembers with some types of military specialties. Officials further stated that adults who influence a prospective applicant’s decision about whether to join the military are increasingly fearful of the possibility of death or serious injury to the applicant.
Recruiters also must overcome specific factors that routinely make their job hard. Recruiters told us that their work hours were dictated by the schedules of prospective high school applicants, which meant working most evenings and weekends. Almost three-quarters of active duty recruiters who responded to DOD’s survey stated that they worked more than 60 hours a week on recruiting or recruiting-related duties. Other factors that affect the recruiting environment include a recruiter’s location and access to eligible applicants. For example, service officials stated that it was easier to recruit in or near locations with a military presence. Recruiters also have difficulty finding eligible applicants. DOD researchers have estimated that over half of U.S. youth aged 16 to 21 are ineligible to join the military because they cannot meet DOD or service entry standards. DOD officials stated that the inability to meet medical and physical requirements accounts for much of the reason youth are ineligible for military service. Additionally, many youth are ineligible because they cannot meet service standards for education, as indicated by DOD’s preference for recruits with a high school diploma; mental aptitude, as indicated by receipt of an acceptable score on the armed forces vocational aptitude test; and moral character, as indicated by few or no criminal convictions or antisocial behavior. All of these factors contribute to a difficult recruiting environment in which it is challenging for recruiters to succeed. Pressure to meet monthly goals contributes to recruiter dissatisfaction. Over 50 percent of active duty military recruiters responding to the 2005 internal DOD survey stated that they were dissatisfied with their jobs. Approximately two-thirds of Army recruiters reported that they were dissatisfied with recruiting, while over a third of Air Force recruiters stated they were dissatisfied.
The Navy and Marine Corps rates of recruiter dissatisfaction fell within these extremes, with just under half of Navy and Marine Corps recruiters reporting that they were dissatisfied with their jobs. When asked in this same survey if they would select another assignment if they had the freedom to do so, over three-quarters of active duty DOD recruiters said they would not remain in recruiting. On the one hand, the services expect recruiters to recruit fully qualified personnel; on the other hand, the services primarily evaluate recruiters’ performance on the number of contracts they write, which corresponds to the number of applicants who enter the delayed entry program each month. In 2005, over two-thirds of those active duty recruiters responding to the internal DOD survey believed that their success in making their monthly quota for enlistment contracts had a make-or-break effect on their military career. Over 80 percent of Marine Corps recruiters held that opinion, as did almost two-thirds of Army and over half of Air Force recruiters. Navy officials stated that individual recruiters are not tasked with a monthly goal; rather, the goal belongs to the recruiting station as a whole. Still, approximately two-thirds of Navy recruiters responding to DOD’s survey indicated they felt their careers were affected by their success in making their individual recruiting goal. The recruiters who we interviewed also believed their careers were affected by how successful they were in achieving monthly recruiting goals. Recruiters, like all servicemembers, receive performance evaluations at least once a year. Our review of service performance evaluations and conversations with the services’ recruiting command officials show that Army, Navy, and Air Force recruiter evaluations are not directly linked to an applicant successfully completing his or her service’s basic training course.
Instead, we found that the Army, Navy, and Air Force generally evaluate recruiters on their ability to achieve their monthly goal to write contracts to bring applicants into the delayed entry program. The Army’s civilian contractor recruiters, for example, receive approximately 75 percent of their monetary compensation for recruiting an applicant when that applicant enters the delayed entry program and the remaining 25 percent of their compensation when the applicant begins basic training. The Army’s contract, therefore, does not tie compensation to the applicant’s successful completion of basic training and joining the Army. Even though Navy officials told us that recruiters do not have individual goals because the monthly mission is assigned to the recruiting station, Navy performance metrics include data on the number of contracts written. However, the Navy does not hold recruiters directly accountable for attrition rates from either the delayed entry program or basic training. Marine Corps recruiters, unlike recruiters in the other services, are held accountable when an applicant does not complete basic training and remain responsible for recruiting an additional applicant to replace the former basic trainee. Marine Corps recruiter evaluation performance standards measure both the number of contracts written each month as well as attrition rates of applicants from the delayed entry program and basic training. Marine Corps Recruiting Command officials stated that they believe their practice of holding recruiters accountable for attrition rates helps to limit irregularities because recruiters are likely to perform more rigorous prescreening of applicants to ensure that a recruit is likely to complete Marine Corps basic training. 
In fact, Military Entrance Processing Command data show that Marine Corps recruiters have been the most consistently successful of all service recruiters at prescreening and processing applicants through their initial physical assessments, subsequently maintaining applicants’ physical eligibility while in the delayed entry program, and finally ensuring that applicants pass the final physical assessment and enter basic training. Table 6 shows the low medical disqualification rate of the Marine Corps in comparison with the other services. In addition to performance evaluations, the services provide awards to recruiters that are generally based on the number of contracts that a recruiter writes, rather than on the number of applicants that graduate from basic training and join the military. We reported in 1998 that only the Marine Corps and the Navy used recruits’ basic training graduation rates as key criteria when evaluating recruiters for awards. Recruiters in some services and other service recruiting command officials stated their belief that recruiters who write large numbers of contracts over and above their monthly quota are almost always rewarded. Such rewards can include medals and trophies for recruiter of the month, quarter, or year; preferential duty stations for their next assignment; incentives such as paid vacations; and meritorious promotion to the next rank. When unqualified applicants are recruited or when applicants who lack eligibility documentation are processed through the military entrance processing station in the effort to satisfy end-of-month recruiting cycle goals, wasted taxpayer dollars result. For example, the Army spends approximately $17,000 to recruit and process one applicant, and as much as $57,500 to recruit and train that applicant through basic training. We continue to believe our 1997 and 1998 recommendations to the Secretary of Defense have merit. 
Specifically, we recommended that the Secretary of Defense require all the services to review and revise their recruiter performance evaluation and award systems to strengthen incentives for recruiters to thoroughly prescreen applicants and to more closely link recruiting quotas to applicants’ successful completion of basic training. The department concurred with our recommendations in order to enhance recruiter success and help recruiters focus on DOD’s strategic retention goal, and it indicated that the Secretary of Defense would instruct the services to link recruiter awards more closely to recruits’ successful completion of basic training. Our review shows that the Army, Navy, and Air Force have not implemented this recommendation. DOD Military Entrance Processing Command officials told us that they believe data from the Chicago military entrance processing station for the first 6 months of fiscal year 2006 indicate that it may be possible to anticipate when irregularities may occur. While service data show that the numbers of irregularities that occur in the recruiting process are relatively small when compared with the total number of applicants that access into the military, the Chicago station data suggest that recruiter irregularities increase as the end of the monthly recruiting cycle nears and recruiting goals are tallied. The end-of-month recruiting cycle for the Army occurs midmonth and data from DOD’s Chicago processing station show that irregularities peaked at the midmonth point. Figure 3 illustrates the increase in recruiter irregularities that occurred at the Chicago station at the end of the Army’s monthly recruiting cycle. We present Army data because the Chicago station processes more applicants for the Army than it does for the other services. However, Chicago station data show similar results for the Navy, Marines, and Air Force. When we asked U.S. 
Military Entrance Processing Command officials for data from the other stations, they said that the other stations did not maintain these data and that this data collection effort was the initiative of the Chicago station commander. We believe these data can be instructive and inform recruiting command officials whether monthly goals have an adverse effect on recruiter behaviors, and if so, whether actions to address increases in irregularities near the end of the monthly recruiting cycle may be necessary. The services have standard procedures in place, provided in the Uniform Code of Military Justice and service regulations, to investigate allegations and service-identified incidents of recruiter irregularities and to prosecute and discipline recruiters found guilty of violating recruiting policies and procedures. Each service recruiting command has a designated investigative authority to handle allegations of irregularities, and the services’ respective Judge Advocates have primary responsibility for adjudicating criminal violations of the recruitment process. Moreover, each service has mechanisms by which to update its recruiter training as a result of information on recruiter irregularities. As previously discussed, the services identify allegations and service-identified incidents of recruiter wrongdoing in a number of ways. Allegations made or discovered at the Army Battalion, Navy and Marine Corps District, and Air Force Squadron command level are generally resolved by that commander using administrative actions and nonjudicial punishment under authority granted by the Uniform Code of Military Justice.
The commander forwards allegations and service-identified incidents of recruiter irregularities arising at that level that he or she deems sufficiently egregious to require further investigation, or as service regulations require, to the service recruiting command or to the Judge Advocate for judicial processing of possible criminal violations in the recruitment process. Commanders in the service recruiting commands, like all commanders throughout the military, exercise discretion in deciding whether a servicemember should be charged with an offense, just as prosecutors do in the civilian justice system. Army Battalion, Navy and Marine Corps District, and Air Force Squadron commanders initiate a preliminary inquiry into allegations of wrongdoing against recruiters after receiving a report of a possible recruiter irregularity. When the preliminary inquiry is complete, the commander must make a decision on how to resolve the case. The commander can decide that no action is warranted or take administrative action, such as a reprimand or counseling. The commander can also decide to pursue nonjudicial punishment under Article 15 of the Uniform Code of Military Justice, or refer the case to trial and decide what charges will be brought against the recruiter. Limitations in data we previously discussed prevent a thorough review of how services discipline recruiters found guilty of violating recruiting policies and procedures. In addition, we found that in some cases, the services did not document the disciplinary action a commander took against a recruiter. Even though service data are not complete, data the Army provided allow us to illustrate the range of disciplinary actions commanders may take to resolve cases of recruiter irregularities. These actions range from counseling a recruiter for an irregularity up to discharge from the Army. 
For example, in fiscal year 2005, Army data show that commanders imposed disciplinary actions ranging from a verbal reprimand to court-martial for recruiters who concealed an applicant’s medical information. Service recruiting officials stated that the range of possible disciplinary actions a commander may impose is influenced by the circumstances of each case, including the recruiter’s overall service record, duty performance, and the number of irregularities the recruiter may have previously committed. Table 7 summarizes disciplinary actions taken against Army recruiters in the past 2 fiscal years for specific kinds of irregularities. All of the services have mechanisms for updating their recruiter training as a result of information on recruiter irregularities. These mechanisms include internal inspection programs and routine recruiter discipline reports. The services also act to restore public confidence in the recruiting process when specific incidents or reports of recruiter irregularities become widely known. Each service recruiting command assesses and evaluates how recruiting policies and procedures are being followed, the results of which are focused on training at the Army Battalion, Navy and Marine Corps District, and Air Force Squadron command level. For example, the Navy Recruiting Command’s National Inspection Team conducts unannounced inspections at the Navy recruiting districts and forwards the results of the inspection to the Navy Recruiting Command headquarters. The Navy Recruiting Command’s National Training Team follows up by conducting refresher training at the recruiting station locations or in the subject areas where the National Training Team identified discrepancies. The Marine Corps’ National Training Team also conducts periodic inspections and training based on the results of its inspections. Additionally, the Marine Corps National Training Team provides input and guidance to the Marine Corps recruiter school course curriculum.
The Air Force Recruiting Command Judge Advocate distributes quarterly recruiter discipline reports to heighten awareness of wrongdoing and encourage proper recruiter behavior. In addition, these reports are used to show examples of wrongdoing during new recruiter training. The Army Recruiting Command conducted commandwide refresher training on May 20, 2005, in response to a series of press reports of recruiters using inappropriate tactics in their attempts to enlist new servicemembers. The Army stated that the training goal was to reinforce that recruiting operations must be conducted within the rules and regulations and in accordance with Army values. Military recruiters represent the first point of contact between potential servicemembers and those who influence them—their parents, coaches, teachers, and other family members. Consequently, a recruiter’s actions can be far reaching. Although existing data suggest that the overwhelming majority of recruiters are not committing irregularities and irregularities are not widespread, even one incident of recruiter wrongdoing can erode public confidence in DOD’s recruiting process. Existing data show, in fact, that allegations and service-identified incidents of recruiter wrongdoing increased between fiscal years 2004 and 2005. DOD, however, is not in a position to answer questions about these allegations and service-identified incidents because it does not know the true extent to which the services are tracking recruiter irregularities or addressing them. Moreover, DOD is unable to compile a comprehensive and consolidated report because the services do not use consistent terminology regarding recruiter irregularities. Individual service systems are not integrated, processes are decentralized, and many allegations are undocumented.
Although DOD officials can point to external factors, such as a strong economy and current military operations in Iraq as recruiting challenges, data suggest that internal requirements to meet monthly recruiting goals may also contribute to recruiter irregularities. Having readily available, complete, and consistent data from the services would place DOD in a better position to know the nature and extent of recruiter irregularities and identify opportunities when corrective action is needed. To improve DOD’s visibility over recruiter irregularities, we recommend that the Secretary of Defense take the following action:

- Direct the Under Secretary of Defense for Personnel and Readiness to establish an oversight framework to assess recruiter irregularities and provide overall guidance to the services.

To assist in developing its oversight framework, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to take the following three actions:

- Establish criteria and common definitions across the services for maintaining data on allegations of recruiter irregularities.
- Establish a reporting requirement across the services to help ensure a full accounting of all allegations and service-identified incidents of recruiter irregularities.
- Direct the services to develop internal systems and processes that better capture and integrate data on allegations and service-identified incidents of recruiter irregularities.

To assist DOD in developing a complete accounting of recruiter irregularities, we further recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to take the following action:

- Direct the commander of DOD’s Military Entrance Processing Command to track and report allegations and service-identified incidents of recruiter irregularities to the Office of the Under Secretary of Defense for Personnel and Readiness.
Such analysis would include irregularities by service and the time during the monthly recruiting cycle when the irregularities occur. In written comments on a draft of this report, DOD concurred with three of our recommendations that address the need for an effective oversight management framework to improve DOD's visibility over recruiter irregularities. While DOD partially concurred with our recommendation to establish a reporting requirement across the services and did not concur with our recommendation for the Military Entrance Processing Command to provide OSD with data on recruiter irregularities, the department did not disagree with the substance of these recommendations. Rather, DOD indicated that it would implement these recommendations if it determined such requirements were necessary. DOD's comments are included in this report as appendix II. DOD concurred with our recommendations to establish an oversight framework to assess recruiter irregularities and provide overall guidance to the services; to establish criteria and common definitions across the services for maintaining data on recruiter irregularities; and for the services to develop internal systems and processes that better capture and integrate data on recruiter irregularities. DOD partially concurred with our recommendation to establish a reporting requirement across the services to help ensure a full accounting of recruiter irregularities, but agreed that some type of reporting requirement be established. The department believes that implementing this recommendation may be premature until it has established an overarching management framework to provide oversight that uses like terms for recruiter irregularities, and that the requirement and frequency should be left to the judgment of the Office of the Under Secretary of Defense for Personnel and Readiness.
DOD stated its intent to establish an initial reporting requirement to ensure the processes it develops are functioning as planned and to use this time period to assess the severity of recruiter irregularities. DOD further stated that regardless of whether or not it establishes a fixed reporting requirement, the services will be required to maintain data on recruiter irregularities in a format that would facilitate timely and accurate reports upon request. We do not believe it would be premature to establish a reporting requirement at this time. As we stated in our report, data that the services reported to us show that the number of allegations, substantiated cases, and criminal violations all increased from fiscal year 2004 to fiscal year 2005. Without a reporting requirement, we believe it would be difficult for OUSD to identify trends in recruiter irregularities and determine if corrective action is needed. Accordingly, we continue to believe that a reporting requirement for the services would help the Office of the Under Secretary of Defense for Personnel and Readiness to carry out its responsibilities to review DOD's recruitment program to ensure adherence to approved policies and standards. The department did not concur with our recommendation for DOD's Military Entrance Processing Command to track and report allegations and incidents of recruiter irregularities to OUSD because it believed this reporting would duplicate service reporting, and added that we had stated that recruiter irregularities are not widespread. However, DOD acknowledged, as our report points out, that even one incident of recruiter wrongdoing can erode public confidence in the recruiting process and agreed to consider this recommendation at a later date if it determines that recruiter irregularities are a significant problem and further analyses are required.
While we did conclude from the data services provided to us that recruiter wrongdoing did not appear to be widespread, we also stated our belief that service data likely underestimate the true number of recruiter irregularities, and further concluded that DOD is not in a position to answer questions about these allegations and service-identified incidents because it does not know the full extent to which the services are tracking recruiter irregularities or addressing them. We believe, therefore, that the significance of recruiter irregularities is not fully understood, and that addressing this recommendation should not be delayed. As we reported, Military Entrance Processing Command officials told us that they forward all allegations and service-identified incidents of recruiter irregularities that surface during the screening process at the military entrance processing stations to the services' recruiting commands. We found, however, that the services’ recruiting command headquarters data do not show records of allegations and service-identified incidents of recruiter irregularities received from the Military Entrance Processing Command. Data currently captured by the Military Entrance Processing Command would be instructive, particularly because these data show an increase in irregularities as Army recruiters approach the end of their monthly recruiting cycle, and we believe that these data would further inform DOD about the effectiveness of the oversight management framework it has agreed to establish. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies of this report to interested congressional members; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. We will also make copies available to others upon request.
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staff have any questions regarding this report, please contact me at (202) 512-5559 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III. To conduct our work, we examined Department of Defense (DOD) and military services’ policies, regulations, orders, and instructions that govern the recruitment of military servicemembers and the investigation and resolution of allegations and service-identified incidents of recruiter wrongdoing. We also reviewed recruiting-related reports issued by GAO, DOD, and the services. We analyzed data on allegations and service-identified incidents of recruiter irregularities from the active and reserve components of the Army, Navy, Marine Corps, and Air Force databases, reports, and individual paper files. Additionally, we interviewed individuals at several DOD and service offices and recruiters in each service, and visited a number of recruiting and recruiting-related commands. In the course of our work, we contacted and visited the organizations and offices listed in table 8. To assess the extent to which DOD and the services have visibility over recruiter irregularities, we examined DOD and service policies, procedures, regulations, and instructions related to recruiting. In addition, we interviewed officials in the Office of the Under Secretary of Defense for Personnel and Readiness and the services’ recruiting officials and Inspectors General to obtain an understanding of various aspects of the data DOD and the services collect on allegations and service-identified incidents of recruiting irregularities.
We obtained data on recruiter irregularities from service recruiting commands’ Inspectors General or other designated recruiting command offices, the Headquarters Air Force Recruiting Service Basic Training Inspector General Liaison, the Naval Criminal Investigative Service, and the recruiting commands’ Staff Judge Advocates. Specifically, within each service, we analyzed fiscal years 2004 and 2005 data. For the Army, we obtained data on allegations and service-identified incidents of recruiter irregularities from its Recruiting Improprieties All Years database. We also obtained data on recruiting irregularities that were processed as criminal violations from the Army Recruiting Command Judge Advocate’s paper files. For the Navy, we obtained data on allegations and service-identified incidents of recruiter irregularities from the Naval Inspector General’s Case Management Information System, the Navy Bureau of Personnel Inspector General, the Navy Recruiting Command Inspector General’s paper files, and the Navy Recruiting Quality Assurance Team. We also obtained data on Navy recruiter criminal violations from the Navy’s Criminal Investigative Service. For the Marine Corps, we obtained data on allegations and service-identified incidents of recruiter irregularities from its Marine Corps Recruiting Information Support System. We also obtained data on recruiter criminal violations from the Navy’s Criminal Investigative Service data system. For the Air Force, we obtained data on allegations and service-identified incidents of recruiter irregularities from its Automated Case Tracking System and Trainee Tracking System, and data on criminal violations from its Automated Military Justice Administrative Management System. We also obtained data from the Air Force Reserve Command Recruiting Service’s Headquarters Queries database.
To identify the factors within the current recruiting environment that may contribute to recruiting irregularities, we reviewed prior GAO work, Congressional Research Service reports addressing the recruiting environment, and the 2005 DOD Recruiter Quality of Life Survey Topline Report. We reviewed the sampling and estimation documentation for this survey and determined that it conforms to commonly accepted statistical methods for probability samples; the response rate for the DOD internal survey was 46 percent. Because DOD did not conduct a nonresponse bias analysis, we cannot determine whether estimates from this survey may be affected by nonresponse bias. Such bias might arise if nonrespondents’ answers to survey items would have been systematically different from those of respondents. We reviewed service policies and processes governing recruiter selection, training, and performance evaluation, and interviewed key service officials about the types of challenges that exist in the recruiting environment and the methods used to evaluate recruiter performance. Additionally, we gathered and analyzed statistical information from the Department of Labor and reviewed Military Entrance Processing Command data on the frequency and occurrence of applicant disqualifications by service and reports on recruiter irregularities. Finally, we interviewed officials at the U.S. Military Entrance Processing Command and two military entrance processing stations regarding recruiter irregularities. To identify what procedures DOD and the services have in place to address individuals involved in recruiting irregularities, we examined service case data and spoke with service recruiting command officials to determine how services imposed disciplinary action and what, if any, other actions they took to mitigate wrongdoing in the recruiting process. 
For each service, we obtained data on disciplinary actions imposed for cases of recruiter irregularities but specifically examined and analyzed Army data as they appeared to be the most comprehensive. We present these data for fiscal years 2004 and 2005. We also reviewed service regulations and the Uniform Code of Military Justice to understand departmentwide standards and the authorities that are granted to commanders to administer military justice. Finally, we reviewed service training materials and spoke with service recruiting command officials to identify other ways services use information on recruiter wrongdoing to try to mitigate errors and irregularities in the recruiting process. To assess the reliability of the services’ data on allegations and service-identified incidents of recruiter irregularities, we interviewed officials about the processes used to capture data on recruiter irregularities, the controls over those processes, and the data systems used; and we reviewed documentation related to those systems. Based on responses to our questions, follow-up discussions, and the documentation we reviewed, we found limitations in many service data systems, including reliance on paper files; databases that cannot be fully queried, if at all; and in some cases, lack of edit checks and data quality reviews. Although we identified weaknesses in the available data, we determined, for the purposes of this report, that the data were reliable for providing limited information on recruiter irregularities. In addition to those named above, David E. Moser, Assistant Director, Grace A. Coleman, Tanya Cruz, Nicole Gore, Gregg J. Justice III, Mitchell B. Karpman, Warren Lowman, Julia C. Matta, Charles W. Purdue, and Shana Wallace made key contributions to this report.
The viability of the All Volunteer Force depends, in large measure, on the Department of Defense's (DOD) ability to recruit several hundred thousand individuals each year. Since the involvement of U.S. military forces in Iraq in March 2003, several DOD components have been challenged in meeting their recruiting goals. In fiscal year 2005 alone, three of the eight active and reserve components missed their goals. Some recruiters, reportedly, have resorted to overly aggressive tactics, which can adversely affect DOD's ability to recruit and erode public confidence in the recruiting process. GAO was asked to address the extent to which DOD and the services have visibility over recruiter irregularities; what factors may contribute to recruiter irregularities; and what procedures are in place to address them. GAO performed its work primarily at the service recruiting commands and DOD's Military Entrance Processing Command; examined recruiting policies, regulations, and directives; and analyzed service data on recruiter irregularities. DOD and the services have limited visibility to determine the extent to which recruiter irregularities are occurring. DOD, for example, has not established an oversight framework that includes guidance requiring the services to maintain and report data on recruiter irregularities and criteria for characterizing irregularities and establishing common terminology. The absence of guidance and criteria makes it difficult to compare and analyze data across services and limits DOD's ability to determine when corrective action is needed. Effective federal managers continually assess and evaluate their programs to provide accountability and assurance that program objectives are being achieved. Additionally, the services do not track all allegations of recruiter wrongdoing. Accordingly, service data likely underestimate the true number of recruiter irregularities. 
Nevertheless, available service data show that between fiscal years 2004 and 2005, allegations and service-identified incidents of recruiter wrongdoing increased, collectively, from 4,400 cases to 6,500 cases; substantiated cases increased from just over 400 to almost 630 cases; and criminal violations more than doubled from just over 30 to almost 70 cases. The department, however, is not in a sound position to assure Congress and the general public that it knows the full extent to which recruiter irregularities are occurring. A number of factors within the recruiting environment may contribute to irregularities. Service recruiting officials stated that the economy has been the most important factor affecting recruiting success. Almost three-quarters of active duty recruiters responding to DOD's internal survey also believed that ongoing hostilities in Iraq made it hard to achieve their goals. These factors, in addition to the typical challenges of the job, such as demanding work hours and pressure to meet monthly goals, may lead to recruiter irregularities. The recruiters' performance evaluation and reward systems are generally based on the number of contracts they write for applicants to enter the military. The Marine Corps is the only service that uses basic training attrition rates as a key component of the recruiter's evaluation. GAO previously recommended that the services link recruiter awards and incentives more closely to applicants' successful completion of basic training. DOD concurred with GAO's recommendation, but has not made this a requirement across the services. The services have standard procedures in place, provided in the Uniform Code of Military Justice and service regulations, to investigate allegations of recruiter irregularities and to prosecute and discipline recruiters found guilty of violating recruiting policies and procedures. 
In addition, to help recruiters better understand the nature and consequences of committing irregularities in the recruitment process, all services use available information on recruiter wrongdoing to update their training.
The widespread use of U.S. currency abroad, together with the outdated security features of the currency, makes it a particularly vulnerable target for international counterfeiters. According to the Federal Reserve, the proportion of U.S. currency in circulation abroad has increased from 40 percent in 1970 to over 60 percent today. High foreign inflation rates and the relative stability of the dollar have contributed to the increasing use of U.S. currency outside the United States. And, in fact, the United States benefits from this international use. When U.S. currency remains in circulation, it essentially represents an interest-free loan to the U.S. government. The Federal Reserve has estimated that the U.S. currency held abroad reduces the need for the government to borrow by approximately $10 billion a year. Despite this benefit, its increasing international use has made U.S. currency a target for counterfeiting. Furthermore, with the exception of two changes introduced in 1990, the security features of the currency have not substantially changed since 1929, which has resulted in the U.S. dollar’s becoming increasingly vulnerable to counterfeiting. (See fig. 1 for the existing security features of the currency.) Congressional groups and the media have continued to highlight their concerns that the counterfeiting of U.S. currency abroad is becoming an increasingly serious problem. Concerns about counterfeiting abroad were heightened in 1992 with the issuance of the first of two reports by the House Republican Research Committee’s Task Force on Terrorism and Unconventional Warfare. These reports charged that a foreign government was producing a very high-quality counterfeit note, commonly referred to as the Superdollar, to support terrorist activities.
In 1993, the House Appropriations Committee’s Surveys and Investigations staff completed a report on the Secret Service’s counterfeiting deterrence efforts and briefed the House Appropriations Committee. In the same year, a bill—the International Counterfeiting Deterrence Act—was introduced to address international counterfeiting and economic terrorism; however, it was not passed. The Secretary of the Treasury is responsible for issuing and protecting U.S. currency. Treasury, including the Secret Service and the Bureau of Engraving and Printing, and the Federal Reserve have primary responsibilities for combating the counterfeiting of U.S. currency. The Secret Service conducts investigations of counterfeiting activities and provides counterfeit-detection training. The Bureau of Engraving and Printing designs and prints U.S. currency, which includes the incorporation of security features into the currency. The Federal Reserve’s role is to distribute and ensure the physical integrity of U.S. currency. It receives currency from financial institutions around the world and uses specialized counting and verification machines to substantiate the authenticity of all U.S. currency received. The various counterfeiting deterrence efforts are coordinated through the Advanced Counterfeit Deterrence Steering Committee, which was formed in 1982. The Secret Service is the U.S. agency responsible for anticounterfeiting efforts abroad. At the time of our work, the Secret Service primarily used its six overseas offices, three task forces, two temporary operations, and resources from six domestic offices to conduct this task. (See app. I for a description of Secret Service offices responsible for locations abroad.) Secret Service offices outside the United States typically are staffed by one to six agents. Agents working abroad are involved in the same issues as their domestic counterparts, such as detecting counterfeits, investigating financial crimes, and protecting dignitaries.
However, the majority of a typical agent’s time abroad is spent on counterfeiting deterrence efforts. In pursuing these efforts, agents must rely on the cooperation of foreign law enforcement agencies and sometimes are allowed to provide only investigative support. This situation is different from that in the United States, where agents have direct investigative authority. The Secret Service also provides other staff to support international counterfeiting deterrence activities. For example, the Secret Service has assigned two Counterfeit Division staff to work with the Four Nations Group and three agents to work with Interpol—the International Criminal Police Organization. To obtain information on the nature and extent of counterfeiting of U.S. currency abroad, as well as U.S. efforts to combat this activity, we obtained views and material from (1) U.S. government agencies in the United States and abroad; (2) foreign law enforcement and financial organization officials in seven European countries, as referred to us by U.S. embassy officials; (3) Interpol officials in the United States and abroad; and (4) individuals researching the Superdollar case, including the author of the House Republican Task Force on Terrorism and Unconventional Warfare reports on the Superdollar. We performed our review in the United States, England, France, Italy, Germany, Hungary, Poland, and Switzerland. Interpol, State Department, and Secret Service officials recommended these countries for our review on the basis of their knowledge of counterfeiting abroad. To obtain U.S. government perspectives on the nature and extent of counterfeiting as well as on efforts to deter this activity, we interviewed and obtained documentation from senior Treasury officials in Washington, D.C.; Secret Service officials in Washington, D.C.; New York, New York; San Francisco, California; England; France; Italy; and Germany; and Bureau of Engraving and Printing officials in Washington, D.C. 
We also interviewed Federal Reserve Board officials in Washington, D.C.; Federal Reserve Bank officials in San Francisco and New York; and State Department officials in Washington, D.C., and abroad. To secure information on the extent of the problem of counterfeit U.S. currency abroad, we obtained Secret Service data on domestic and international counterfeit detections. We then reviewed the Secret Service’s counterfeit-detection data for fiscal year 1987 through fiscal year 1994. We also reviewed Interpol’s 1991 to 1993 annual reports on international counterfeiting activity. We did not independently verify the accuracy of the data that the Secret Service and Interpol provided. To gain perspective on both counterfeiting and deterrence efforts abroad, we obtained input from foreign law enforcement and financial organization officials in the countries we visited. (See app. II for a listing of foreign agencies and organizations we contacted while abroad.) In conducting our interviews, we did not pose the same questions to all officials. Thus, the responses we obtained cannot be generalized. The scope of our work was limited by a number of factors related to national security and investigative concerns. First, due to the criminal nature of counterfeiting, the actual extent of counterfeiting abroad cannot be determined. Second, since current known counterfeiting activities involved ongoing investigations, we were not able to fully explore and discuss these investigations with law enforcement and intelligence officials. Third, due to the sensitive nature of the ongoing investigation of the so-called Superdollar, we were unable to fully explore this extremely high-quality, allegedly foreign government-sponsored, counterfeiting operation. As a result of these limitations, this report is not evaluative, and it thus contains no conclusions or recommendations. This report was prepared using unclassified sources of information. 
The draft report underwent a security classification review by the appropriate agencies, including Treasury and the Secret Service, and was released as an unclassified report. Although they initially stated that some of the information was or should have been classified, Treasury and the Secret Service later rescinded this statement after they performed a full security classification review and we reached agreement with them on a minor revision to appendix VII. (See app. VIII, pages 66 and 67, for Treasury and Secret Service statements that the report is unclassified.) We conducted our review from September 1994 to May 1995 in accordance with generally accepted government auditing standards. In June and then again in November 1995, we updated our work on Secret Service staffing abroad. We obtained written agency comments on a draft of this report from the Departments of the Treasury and State and from the Federal Reserve. These comments are discussed at the end of this report and presented in appendixes VIII through X. The nature of counterfeiting of U.S. currency abroad is diverse, including various types of perpetrators, uses, and methods. The relative sophistication of the counterfeiter and method used results in counterfeit notes of differing quality. According to a National Research Council report requested by Treasury, the counterfeiting problem will increase as technologies improve and are made more accessible to the public. Already, the Secret Service has been troubled by some very high-quality counterfeits of U.S. currency identified as originating abroad. Perpetrators include both the casual and the professional counterfeiter. The casual counterfeiter is a person who commits the crime because it is convenient or easy to do. For example, an office worker may use a copying machine to counterfeit U.S. currency. 
The number of casual counterfeiters is expected to increase with the greater accessibility of and improvements to modern photographic and printing devices, according to the National Research Council report. Conversely, the professional counterfeiter may be a member of a gang, criminal organization, or terrorist group. Foreign law enforcement and Secret Service officials that we interviewed told us of suspected links between counterfeiting and organized crime. Counterfeit U.S. currency is used for economic gain and is sometimes linked to other crimes. According to foreign law enforcement and Secret Service officials, counterfeit U.S. currency is sometimes distributed in conjunction with drug trafficking, illicit arms deals, and other criminal and/or terrorist activities. Moreover, Secret Service and foreign law enforcement officials told us that counterfeit U.S. currency is now sometimes produced by counterfeiters in one country for export to another country. For example, in Milan, Italy, counterfeiting has become an industry in which counterfeit U.S. currency is produced for export, according to Italian law enforcement officials. They added that the counterfeits typically were exported to the former Soviet Union and Eastern Europe. The methods used by counterfeiters of U.S. currency abroad are the same as those used within the United States, according to Secret Service officials. Common techniques include using black and white, monochromatic, or color photocopiers; cutting and taping or gluing numerals from high denomination notes to the corners of a note of lower denomination, also known as making “raised notes”; using sophisticated computers, scanners, and laser or ink jet printers; bleaching good notes and reprinting higher denominations on the genuine paper; and using photomechanical or “offset” methods to make a printing plate from a photographic negative of a genuine note. 
Depending upon the sophistication of the counterfeiter and the method used, the quality of counterfeit notes can vary a great deal. The Secret Service has found good, fair, and poor quality notes for each method used. For example, a good color copier-produced note could be better than a poor ink jet-produced note. However, the offset printing method generally results in the highest quality counterfeits, whether produced abroad or domestically. (See app. III for descriptions of common methods used and some examples of counterfeit notes.) Recently, very sophisticated counterfeiters have been producing very high-quality notes using the offset process. High-quality counterfeit notes are difficult for the general public to discern, but according to Federal Reserve officials, the notes can be detected by experienced bank tellers. (See app. IV for case examples of high-quality counterfeit notes produced in Canada, Colombia, and the Middle East.) The criminal nature of the activity precludes determination of the actual extent to which U.S. currency is being counterfeited abroad. The best data available to reflect actual counterfeiting are Secret Service counterfeit-detection data. However, these data have limitations and thus provide only a limited measure of the extent of counterfeiting activities. Use of these data should be qualified to reflect these limitations so that conclusions reached using the data do not mislead. Overall, detected counterfeits have represented a minuscule amount of the currency in circulation. According to Secret Service officials, the data that they gathered were supplemented by intelligence information and field experience to demonstrate an increase in counterfeiting activity abroad. However, our analysis of the same counterfeit-detection data proved inconclusive. Moreover, foreign officials’ views about the seriousness of the problem of counterfeit U.S. currency were mixed.
Foreign financial organization and law enforcement officials that we interviewed reported no significant numbers of chargebacks and few reported instances of U.S. currency not being accepted abroad. On the basis of the number of Secret Service counterfeit-detections, Treasury officials concluded that counterfeiting of U.S. currency was economically insignificant and thus did not pose a threat to the U.S. monetary system. According to Secret Service and Treasury officials, detected counterfeits represented a minuscule portion of U.S. currency in circulation. Secret Service and Federal Reserve data showed that, in fiscal year 1994, of the $380 billion in circulation, $208.7 million had been identified as counterfeit notes, a figure which represented less than one one-thousandth of the currency in circulation. However, while Treasury and Secret Service officials agreed that, overall, counterfeiting was not economically significant, they considered any counterfeiting to be a serious problem. The Secret Service reported that counterfeiting of U.S. currency abroad was increasing. It used counterfeit-detection data, supplemented with intelligence information and field experience, to support this claim. It also employed two counterfeit-detection data measures to illustrate the extent of counterfeiting abroad: (1) counterfeit-detections abroad and (2) domestic detections of counterfeits that were produced abroad. Counterfeits detected abroad are categorized as “appearing abroad,” while counterfeits detected domestically are divided into two separate categories. Domestic detections of counterfeits not yet in circulation are called “seizures,” and those counterfeits detected while in circulation are called “passes.” The Secret Service has reported a significant recent increase in detections of counterfeit U.S. currency abroad. 
In one analysis, it reported that the amount of counterfeit currency detected abroad increased 300 percent, from $30 million in fiscal year 1992 to $121 million in fiscal year 1993, thereby surpassing domestic detections in the same period (see fig. 2). The Secret Service has also reported that, in recent years, a larger dollar amount of the notes detected as domestic passes has been produced outside the United States. Since 1991, the dollar amount of counterfeit U.S. notes detected while in circulation and produced abroad has exceeded the dollar amount of those produced domestically (see fig. 3). In fiscal year 1994, foreign-produced notes represented approximately 66 percent of total domestic passes detected. The true dimensions of the problem of counterfeiting of U.S. currency abroad could not be determined. Treasury and the Secret Service use Secret Service counterfeit-detection data to reflect the actual extent of counterfeiting. However, although these data are the best available, they have limitations. Specifically, they are incomplete and present only a partial picture of counterfeiting. If these limitations are not disclosed, the result may be misleading conclusions. First of all, the actual extent of counterfeiting could not be measured, primarily because of the criminal nature of this activity. Secret Service data record only those detections that are reported to the Secret Service; they do not measure actual counterfeiting. As a result, the data provide no information about the number of counterfeiters operating in any given year or the size and scope of their operations. More importantly, these data could not be used to estimate the volume of counterfeit currency in circulation at any point in time. 
In the case of counterfeit currency appearing abroad, reasons for this include the following: (1) the data do not distinguish between how much counterfeit currency was seized and how much was passed into circulation; (2) they cannot provide information about how long passed counterfeits remained in circulation before detection; and (3), most critically, they provide no indication of how much counterfeit currency was passed into circulation and not detected. Second, counterfeit-detection data may in part be a reflection of where the Secret Service focuses its efforts. Use of these data thus may not identify all countries with major counterfeiting activity, but simply countries where agents focused their data collection efforts. For example, in fiscal year 1994, almost 50 percent of detections abroad occurred in the six countries where the Secret Service was permanently located. In other countries, counterfeit-detection statistics tend to be more inconsistent. For example, in fiscal year 1994, certain African and Middle Eastern countries reported no counterfeiting activity to the Secret Service. This lack of reported detections, however, does not necessarily indicate that counterfeiting activity did not occur in these countries. Third, detection data for high-quality notes may be underreported. The Secret Service has said that the small number of Superdollars detected indicates that there are not many in circulation. However, according to the Task Force on Terrorism and Unconventional Warfare report, the majority of Superdollars are circulating outside the formal banking system and therefore would not be reported to the Treasury if detected. Also, as we discovered on our overseas visits, many foreign law enforcement and financial organization officials had inconsistent and incomplete information on how to detect the Superdollar. Thus, financial institutions abroad may be recirculating the Superdollars.
Fourth, reported increases in counterfeiting abroad, as supported by Secret Service detection data, may be due to a number of factors other than increased counterfeiting activity. For example, in 1993, the Secret Service changed its reporting practices abroad to be more proactive in collecting counterfeit-detection data. Instead of relying solely on reports from foreign officials, agents abroad began to follow up on Interpol reports and intelligence information in order to collect additional data. Also, according to Treasury officials, foreign law enforcement officials have improved their ability to detect counterfeit U.S. currency and report it to the Secret Service. Furthermore, although domestic reporting and detection practices have been more consistent, the increase in domestic detections of counterfeits produced abroad is also subject to interpretation. For example, rather than foreign-produced notes increasing, it is possible that the Secret Service’s ability to determine the source of counterfeit currency has simply improved over time. Fifth and finally, counterfeit-detection data fluctuate over time, and one large seizure can skew the data, particularly for detections abroad. For detections outside the United States, the Secret Service has relied heavily on information provided by foreign law enforcement organizations, and has obtained little information from financial organizations. Thus, counterfeit detections “appearing abroad” have primarily been seizures reported by foreign law enforcement organizations, and the size of these seizures can have a significant impact on detection data. For example, according to the Secret Service, several large seizures accounted for the jump from $14 million in counterfeit detections abroad in fiscal year 1988 to $88 million in fiscal year 1989. The following year, the data indicated a significant drop in detections (see fig. 2). 
Overseas law enforcement and financial organization officials’ views on the extent of the problem of counterfeit U.S. currency varied. Foreign law enforcement officials tended to be more concerned about counterfeit U.S. currency than foreign financial organization officials. Financial organization officials we met with said that they had experienced minimal chargebacks, and most expressed confidence in the ability of their tellers to detect counterfeits. Furthermore, we heard few reports from foreign financial organization and foreign law enforcement officials about U.S. currency not being accepted overseas because of concerns about counterfeiting. Most foreign law enforcement officials we spoke with believed that the counterfeiting of U.S. currency was a problem, but their opinions on the severity of the problem differed. While the Swiss, Italian, and Hungarian law enforcement officials said that it was a very serious problem, French and English law enforcement officials said that the problem fluctuated in seriousness over time; German, French, and Polish officials said that the counterfeiting of U.S. currency was not as serious a problem as the counterfeiting of their own currencies. Some of these law enforcement officials expressed concern over increases in counterfeiting in Eastern Europe and the former Soviet Union. Some also expressed particular worry about their ability, and the ability of financial organizations in their countries, to detect the Superdollar. Conversely, most foreign financial organization officials we spoke with were not concerned about the counterfeiting of U.S. currency. Of the 34 organizations we visited in 7 countries, officials from 1 Swiss and 1 French banking association and 2 Hungarian banks viewed the counterfeiting of U.S. currency as a current or increasing problem. According to other foreign financial organization officials, they were not concerned about U.S. 
counterfeiting activity because it did not have a negative impact on their business. For example, none of the 16 financial organization officials with whom we discussed chargebacks told us that they had received substantial chargebacks due to counterfeit notes that they had failed to detect. In addition, some of these officials cited other types of financial fraud and the counterfeiting of their own currency as more significant concerns. For example, officials from one French banking association were more concerned with credit card fraud, and officials from two financial organizations in Germany and one financial organization in France said counterfeiting of their country’s currency was a greater problem. Furthermore, foreign financial organization officials we spoke with were confident about their tellers’ ability to detect counterfeits and, in some countries, tellers were held personally accountable for not detecting counterfeits. In most of the countries we visited, detection of counterfeit U.S. currency relied on the touch and sight of tellers, some of whom were aided by magnifying glasses or other simple detection devices, such as counterfeit detection pens. Other counterfeit-detection devices used abroad, like ultraviolet lights, did not work effectively on U.S. currency. While foreign financial organizations appeared confident of their tellers’ ability to detect counterfeits, some of these organizations had incomplete information on how to detect counterfeit U.S. currency, particularly the Superdollar. Finally, foreign financial organization and law enforcement officials provided a few isolated cases in which U.S. currency was not accepted abroad. For example, when it first learned about the Superdollar, one U.S. financial organization in Switzerland initially stopped accepting U.S. $100 notes, although it later resumed accepting the U.S. notes from its regular customers. 
Also, Swiss police, Hungarian central bank, and French clearing house officials reported that some exchange houses and other banks were not accepting $100 notes. We were unable to confirm these reports. However, a State Department official commented that, because drug transactions tended to involve $100 notes, some foreigners were reluctant to accept this denomination, not because of counterfeiting concerns, but rather because of the notes’ potential link to money laundering. The U.S. government, primarily through the Treasury Department and its Secret Service and the Federal Reserve, has been increasing its counterfeiting deterrence efforts. These recent efforts include redesigning U.S. currency; increasing exchanges of information abroad; augmenting the Secret Service presence abroad; and undertaking efforts to stop production and distribution of counterfeit currency, including the Superdollar. In an effort to combat counterfeiting both domestically and abroad, Treasury is redesigning U.S. currency to incorporate more security features intended to combat rapid advances in reprographic technology. This change, the most significant to the U.S. currency in over 50 years, is, according to some U.S. and foreign officials, a long overdue one. The redesigned currency is planned for introduction in 1996 starting with changes to the $100 note, with lower denominations to follow at 9- to 12-month intervals. According to Treasury officials, the currency redesign will continue, becoming an ongoing process, because no security features are counterfeit-proof over time. These officials also said that the old currency would not be recalled and would retain its full value. Moreover, Treasury is leading a worldwide publicity campaign to facilitate introduction of the redesigned currency, ensure awareness and use of the overt security features, and assure the public that the old currency will retain its full value. 
Through this campaign, the Federal Reserve hopes to encourage the public to turn in old currency for the redesigned notes. (See app. V for further information on the currency redesign.) In addition, the Secret Service, through its team visits abroad in company with Treasury Department and Federal Reserve officials, has both gathered further information on counterfeiting and provided counterfeit-detection training. As of May 1995, the team had met with law enforcement and financial organization officials in Buenos Aires, Argentina; Minsk, Belarus; London, England; Zurich, Switzerland; Hong Kong; and Singapore. According to Secret Service officials, their visits were successful because they were able to develop better contacts, obtain further information about foreign financial institutions’ practices, learn more about tellers’ ability to detect counterfeits, and provide counterfeit detection training seminars for both law enforcement and financial organization officials. Future trips were planned to Russia and possibly the Middle East. Further, the Secret Service has been attempting to increase its presence abroad, although it has encountered difficulties in obtaining approval. The Secret Service has over 2,000 agents stationed in the United States, but it has fewer than 20 permanent positions abroad. The Secret Service first requested additional staff in February 1994 for permanent posting abroad beginning in fiscal year 1996. However, due to uncertainties over the funding of the positions as well as to other priorities within the Treasury Department, as of June 21, 1995, the Secret Service had secured approval for only 6 of 28 requested positions abroad. Subsequent to our discussions with the Secret Service, Treasury, and State, on July 21, 1995, Treasury approved the remainder of the positions and passed them on for State’s approval. 
As of November 30, 1995, the respective chiefs of mission had approved only 13 of the 28 positions, and only 1 agent had reported to his post abroad. (See app. VI for further information on increasing the Secret Service presence abroad.) Additionally, the U.S. government has undertaken special efforts to eradicate the highest quality counterfeit note—the Superdollar. These efforts include the use of task forces and diplomatic efforts among senior policy-level officials of the U.S. and certain foreign governments. Due to the sensitivity and ongoing nature of this investigation, we were made generally aware of these efforts but not provided with specific information. (See app. VII for further information on U.S. efforts to eradicate the Superdollar.) The Department of the Treasury, including the Secret Service, the Department of State, and the Federal Reserve provided written comments on a draft of this report. (See apps. VIII, IX, and X.) These comments included technical changes and/or factual updates that have been incorporated where appropriate. However, Treasury, including the Secret Service, also raised and later rescinded issues of security classification and sensitivity and did not fully agree with our characterization of the limitations of the Secret Service counterfeit currency detection data and other supporting methods for estimating trends in counterfeiting. In their comments, Treasury and the Secret Service made frequent reference to activities that they believed provided additional support for the conclusions they drew from the detection data. These activities included contacts with foreign law enforcement and financial organization officials, vault inspections of banks abroad, and analysis of Federal Reserve data. Although the Secret Service recognized the limitations of its counterfeit currency detection data, Treasury and Secret Service conclusions provided in hearings and reports have not always reflected these limitations. 
Thus, in this report, we discuss the data limitations and conclude that any use of the data should be qualified to recognize these limitations. Although the Secret Service has the best counterfeit-detection data available, this does not negate the potential for the limitations of this data to foster misleading conclusions. First, the actual extent of counterfeiting cannot be determined because of the criminal nature of this activity. Second, counterfeit-detection data may be a reflection of where the Secret Service focuses its efforts. Third, detection data for high-quality notes, which may more easily circumvent detection and reporting abroad, may be even less representative of the actual extent of the problem. Fourth, increases in counterfeiting detections abroad may be due to a number of factors other than increased counterfeiting, such as improved information gathering and reporting. Also, counterfeit-detection data fluctuate over time, and one large seizure abroad can skew the data. We acknowledge in this report that the Secret Service supplements its detection data with intelligence information and field experience. Even though we did not evaluate these specific methods, our work did yield some information on these activities. With regard to Treasury and Secret Service contacts with foreign law enforcement and financial organization officials, in our discussion of additional U.S. counterfeit currency deterrence efforts, we acknowledge that Treasury and Secret Service officials have recently increased their contacts with foreign financial organizations in preparation for the U.S. currency redesign effort. However, almost all of the foreign financial organization officials we met with in September 1994 had had little or no contact with Treasury and/or Secret Service officials before that time. 
Regarding vault inspections of banks abroad, Secret Service officials initially told us that they were conducting vault inspections during their joint team visits with Treasury and Federal Reserve officials. However, according to Federal Reserve officials, and as subsequently confirmed by Secret Service officials, vault inspections had been conducted in only one of the six locations visited during our review. Secret Service officials told us that the inspections had been conducted in Argentina but were then discontinued because of the limited results obtained there. The officials told us that the inspections might be reinstituted in other countries if it was subsequently decided that the effort was warranted. Finally, regarding the use of Federal Reserve data, the Secret Service and the Federal Reserve confirmed that the Federal Reserve data were actually a component of the Secret Service data, and thus were effectively addressed in our evaluation of the Secret Service data. As agreed with your office, unless you publicly announce the contents earlier, we plan no further distribution until 30 days from the date of this report. At that time, we will provide copies of the report to interested congressional committees, to the Departments of the Treasury and State, and to the Federal Reserve. We will also make copies available to others on request. Please contact me at (202) 512-8984 if you have any questions concerning this report. Other major contributors to this report are listed in appendix XI.
Pursuant to a congressional request, GAO provided information on counterfeiting of U.S. currency abroad and U.S. efforts to deter these activities. GAO found that: (1) counterfeit U.S. currency is used for economic gain and illegal activities, such as drug trafficking, arms sales, and terrorist activity; (2) there are several techniques used to counterfeit U.S. currency, including photocopying, the raised note technique, computer-assisted printing, bleaching and reprinting, and photomechanics; (3) the offset printing method offers the highest quality of counterfeit notes and can only be detected by experienced bank tellers; (4) it is difficult to determine the extent of counterfeiting abroad because of the lack of accurate counterfeit-detection data and foreign officials' reluctance to view counterfeiting as a serious problem; (5) of the $380 billion in U.S. currency circulated in fiscal year 1994, $208.7 million was counterfeit, which represented less than one one-thousandth of U.S. currency in circulation at that time; and (6) the U.S. government is involved in various counterfeit deterrence activities, including redesigning U.S. currency, increasing the presence of the Secret Service and the exchange of information abroad, and seizing the production and distribution capabilities used in counterfeiting of U.S. currency.
Protections for workers in the United States were enacted in the Fair Labor Standards Act of 1938, which established three basic rights in American labor law: a minimum wage for industrial workers that applied throughout the United States; the principle of the 40-hour week, with time-and-a-half pay for overtime; and a minimum working age for most occupations. Since 1938, the act has been amended several times, but the essentials remain. For many years, the act (combined with federal and state legislation regarding worker health and safety) was thought to have played a major role in eliminating sweatshops in the United States. However, we reported on the “widespread existence” of sweatshops within the United States in the 1980s and 1990s. Subsequent to our work, in August 1995, the Department of Labor and the California Department of Industrial Relations raided a garment factory in El Monte, California, and found sweatshop working conditions—workers were confined behind razor wire fences and forced to work 20 hours a day for 70 cents an hour. Leading retailers were found to have sold clothes made at this factory. According to the National Retail Federation, an industry trade association, the El Monte raid provoked a public outcry and galvanized the U.S. government’s efforts against sweatshops. Concern in the United States about sweatshops has spread from its shores to the overseas factories that supply goods for U.S. businesses and the military exchanges. With globalization, certain labor-intensive activities, such as clothing assembly, have migrated to low-wage countries that not only provide needed employment in those countries but also provide an opportunity for U.S. businesses to profit from manufacturing goods abroad and for consumers to benefit from an increasing array of quality products at low cost.
Various labor issues (such as child labor, forced overtime work, workplace health and safety, and unionization) have emerged at these factories. In May 2000, for example, the Chentex factory in Nicaragua—which produces much of the Army and Air Force exchange’s private label jeans and denim product—interfered in a wage dispute involving two labor groups, firing the union leaders of one of the groups. Subsequently, much publicity ensued over working conditions at this factory. International labor rights were defined in the Trade Act of 1974 as the right of association; the right to organize and bargain collectively; a prohibition on the use of any form of forced or compulsory labor; a minimum age for the employment of children; and acceptable conditions of work with respect to minimum wages, hours of work, and occupational safety and health. As globalization progressed, U.S. government agencies, nongovernmental organizations, industry associations, retailers, and other private organizations began addressing worker rights issues in overseas factories. For example, the International Labor Organization, a United Nations specialized agency that formulates international policies and programs to help improve working and living conditions, has endorsed four international labor principles: (1) freedom of association and the effective recognition of the right to collective bargaining, (2) the elimination of all forms of forced or compulsory labor, (3) the effective abolition of child labor, and (4) the elimination of discrimination in employment. Appendix II provides additional information on governmental agencies’, nongovernmental organizations’, and industry associations’ efforts to address worker rights in overseas factories. The military exchanges are separate, self-supporting instrumentalities of the United States located within the Department of Defense (DOD). 
The Federal Acquisition Regulation, the Defense Federal Acquisition Regulation supplement, and component supplements do not apply to the merchandise purchased by the exchanges and sold in their retail stores, since the purchases are not made with appropriated funds. The Assistant Secretary of Defense (Force Management Policy) is responsible for establishing uniform policies for the military exchanges’ operations. The exchanges are managed by the Army and Air Force Exchange Service (AAFES), the largest exchange, and by the Navy Exchange Service Command (Navy Exchange) and Marine Corps Community Services (Marine Corps Exchange). The exchanges operate retail stores similar to department stores selling apparel, footwear, household appliances, jewelry, cosmetics, food, and other merchandise. For the past several years, about 70 percent of the exchanges’ earnings from these sales revenues were allocated to morale, welfare, and recreation activities— libraries, sports programs, swimming pools, youth activities, tickets and tour services, bowling centers, hobby shops, music programs, outdoor facilities, and other quality of life improvements for military personnel and their families—and about 30 percent to new exchange facilities and related capital projects. The number of retail locations and the annual revenues and earnings reported by the exchange services for 1999 and 2000 are shown in table 1. The exchanges have created private label products, which generally carry their own name or a name created exclusively for the exchange. The exchanges began creating private labels in the mid-1980s to provide lower prices for customers, to obtain higher earnings margins for the exchanges, and to remain competitive with major discount retailers. Private labels are profitable for retailers because their costs do not include marketing, product development, or advertising, which are used by companies to position national brands in the marketplace and to maintain the market share. 
In 2000, AAFES reported purchases of $44.8 million in private label merchandise from overseas companies, and the Navy Exchange reported purchases of $11.6 million in private label merchandise from importers. The Marine Corps Exchange only recently created its private label and did not purchase any private label merchandise from importers or overseas companies in 2000, but it reported purchases of about $350,000 of AAFES’ and the Navy Exchange’s private label merchandise for resale in its stores. The private label goods sold by the military exchanges are shown in table 2. The retailers we contacted in the private sector are more proactive about identifying working conditions than the military exchanges. They periodically requested that suppliers provide a list of overseas factories and subcontractors that they used to make the retailers’ private label merchandise, administered questionnaires on working conditions, visited factories, and researched labor issues in the countries where prospective factories are located. The military exchanges largely rely on their suppliers to identify and address working conditions in overseas factories that manufacture the exchanges’ private label merchandise. The exchanges generally did not maintain the names and locations of the relevant overseas factories. The exchanges assumed that their suppliers and other U.S. government agencies, such as the U.S. Customs Service, ensured that labor laws and regulations that address working conditions and minimum wages were followed. The 10 leading private sector retailers we contacted are more active in identifying working conditions than the military exchanges for a variety of reasons, ranging from a sense of social responsibility to pressure from outside groups and a desire to protect the reputation of their companies’ product lines.
These retailers periodically requested that overseas suppliers provide a list of factories and subcontractors that they used to make the retailers’ private label merchandise. Some retailers we contacted terminated a business relationship with suppliers that used a factory without disclosing it to the retailers. For example, JCPenney’s purchase contracts stipulate that failure by a supplier or one of its contractors to identify its factories and subcontractors may result in JCPenney’s taking the following actions: seeking compensation for any resulting expense or loss, suspending current business activity, canceling outstanding orders, prohibiting the supplier’s subsequent use of the factory, or terminating the relationship with the supplier. JCPenney officials told us that they have terminated suppliers for using unauthorized subcontractors. Some retailers that we interviewed, such as The Neiman Marcus Group, Inc., JCPenney, and Liz Claiborne, Inc., developed a company questionnaire, which they had factory management complete. The questionnaire addressed health and safety issues and whether U.S. or foreign government agencies had investigated the factory. The retailers used the questionnaire to provide factories with feedback on their compliance with the retailers’ standards and for the retailer to provide the factory an opportunity to make improvements in working conditions before an inspection. The representatives of these retailers told us that they visited factories to verify the accuracy of the factories’ answers to the questionnaire before ordering merchandise. Each of the 10 retailers we contacted told us they also used information on human rights issues that was either developed internally or was available from government agencies and nongovernmental organizations to assess labor issues in the countries where the factories are located. 
This included the Department of State’s annual Country Reports on Human Rights Practices (a legislatively mandated report to Congress that covers worker rights issues in 194 countries), which the retailers frequently cited as a source for identifying labor issues in a particular country. Most retailers also used information obtained from the United Nations; U.S. Department of State; U.S. Customs Service; U.S. Department of Labor; and nongovernmental organizations, such as Amnesty International. The retailers we contacted used this information in their assessments of suppliers to avoid business arrangements with factories in areas with a higher risk of labor abuses. In addition, some of the retailers told us that their decisions to buy merchandise made in a particular country sometimes depended on whether they could improve factory conditions in a country. For example, companies such as Levi Strauss & Co. used only those Chinese factories that corrected problem conditions, an approach supported by the officials we met at the Departments of State and Labor. The military exchanges’ methods for identifying working conditions in overseas factories that manufacture their private label merchandise are not as proactive as the methods employed by companies in the private sector. Only the Army and Air Force Exchange knew the identity of the factories that manufactured its private label merchandise, and none of the exchanges knew the nature of working conditions in these factories. Instead, they assumed that their suppliers and other government agencies ensured good working conditions. 
While the exchanges have sent letters to some suppliers describing their expectations of compliance with labor laws and regulations that address working conditions and minimum wages in individual countries, they have not taken steps to verify that overseas factories are in compliance or otherwise acted to determine the status of employee working conditions. For example, the Navy Exchange and the Marine Corps Exchange do not routinely maintain the name and location of the overseas factories that manufactured their merchandise because they rely on brokers and importers to acquire the merchandise from the overseas factories. The AAFES Retail Business Agreement requires suppliers to promptly provide subcontractors’ names and manufacturing sites upon request. But because it had no program to address working conditions in overseas factories, AAFES has not requested this information, except for the suppliers it used for its private label apparel, and then only to check on the quality of the merchandise being manufactured. AAFES’ records show that in fiscal year 2000, its private label apparel was manufactured in 70 factories in 18 countries and territories, as shown in table 3. In some cases, the exchanges’ private label merchandise was manufactured in countries that have been condemned internationally for their human rights and worker rights violations. For example, at 9 of the 10 retailers we contacted, officials told us that they had ceased purchasing from Myanmar (formerly Burma) in the 1990s because of reports of human rights abuses documented by governmental bodies, nongovernmental organizations, and the news media; at one retailer we contacted, officials told us that they had ceased purchasing from Myanmar in 2000 for the same reasons. In contrast, during 2001, each exchange purchased private label apparel made in Myanmar.
For the most part, the exchanges assume compliance with laws and regulations that address child or forced labor in the countries where their factories are located instead of determining compliance. In 1996, for example, following the much publicized El Monte, California, sweatshop incident, the Navy Exchange notified all of its suppliers by letter that it expected its merchandise to be manufactured without child or forced labor and under safe conditions in the workplace, but it did not attempt to determine whether these suppliers and their overseas factories were willing and able to meet these expectations. The Navy Exchange and Marine Corps Exchange relied solely on their suppliers to address working conditions in the factories. Similarly, AAFES’ management officials told us that they assumed that their suppliers were in compliance with applicable laws and regulations by virtue of their having accepted an AAFES purchase order. According to these management officials, when suppliers accept a purchase order, they certify that they are complying with their Retail Business Agreement. This agreement, distributed by letter to all suppliers in 1997, states that by supplying merchandise to AAFES, the supplier guarantees that it—along with its subcontractors—has complied with all labor laws and regulations governing the manufacture, sale, packing, shipment, and delivery of merchandise in the countries where the factories are located. According to AAFES officials, an AAFES contracting officer and a representative of the supplier are to sign the agreement. We reviewed the contracting arrangements between AAFES and nine of its suppliers of private label merchandise. Only four of the nine suppliers had signed the AAFES Business Agreement. AAFES management officials also told us that they rely on the reputation of their suppliers for assurance that overseas factories are in compliance with its business agreements. 
For example, these officials told us that they use only the overseas suppliers that have existing business relationships with other major U.S. retailers. The officials also stated that since many of these private retailers have developed and are using their own program to address working conditions in their overseas factories, the use of the same suppliers provided some degree of confidence that the suppliers are working within the laws of the host nation. However, some retailers we contacted said their programs addressed factory conditions only for the period that the factories were manufacturing the retailer’s merchandise and that the factories did not have to follow their program when they were manufacturing merchandise for another company. AAFES management officials also told us that they rely on the U.S. Customs Service to catch imported products that are manufactured under abusive working conditions. However, the Customs officials we interviewed told us that their agency encourages companies to be aware of the working conditions in supplier factories to further reduce their risk of becoming engaged in an import transaction involving merchandise produced with forced or indentured child labor. According to the Customs’ officials, the military exchanges—like retailers—are responsible for assuring that their merchandise is not produced with child or forced labor. A single industry standard for adequate working conditions does not exist, and the retailers we contacted did not believe that such a standard was practical because each company must address different needs, depending on the size of its operations, the various locations where its merchandise is produced, and the labor laws that apply in different countries. However, each of the retailers that we contacted had taken three key steps that could serve as a framework for the exchanges in promoting compliance with local labor laws and regulations in overseas factories. 
They involve (1) developing codes of conduct for overseas suppliers; (2) implementing their codes of conduct by disseminating expectations to their purchasing staff, suppliers, and factory employees; and (3) monitoring to better ensure compliance. The three steps taken by the retailers vary in scope and rigor, and they are evolving. We did not independently evaluate the effectiveness of these retailers’ efforts, but the retailers’ representatives told us that although situations could occur in which their codes of conduct are not followed, they believed that these steps provided an important framework for ensuring due diligence and helped to better assure fair and safe working conditions. The government agencies we visited and the International Labor Organization also recognized these three steps as key program elements and expressed a willingness to assist the exchanges in shaping a program to assure that child or forced labor was not used to produce their private label merchandise. Representatives of the 10 retailers we contacted believed that the three steps they have taken—developing codes of conduct for overseas suppliers; implementing their codes of conduct by disseminating expectations to their purchasing staff, suppliers, and factory employees; and monitoring to better ensure compliance—provide due diligence as well as a mechanism to address and improve working conditions in overseas factories. For example, officials at Levi Strauss & Co. told us that after they refused to do business with a prospective supplier in India because the supplier’s factory had wage violations and health and safety conditions that did not meet Levi Strauss & Co.’s guidelines, the supplier made improvements and requested a reassessment 4 months later. According to Levi Strauss & Co., the reassessment showed that the supplier had corrected wage violations and met health and safety standards. 
In addition, employee morale had also improved, as indicated by lower turnover, improved product quality, and higher efficiency at the factory. In 1991, Levi Strauss & Co. became the first multinational company to establish a code of conduct to convey its policies on working conditions in supplier factories, and subsequently such codes were widely adopted by retailers. According to the Department of Labor, U.S. companies have adopted codes of conduct for a variety of reasons, ranging from a sense of social responsibility to pressure from competitors, labor unions, the media, consumer groups, shareholders, and worker rights advocates. In addition, allegations that a company’s operations exploit children or violate other labor standards put sales—which depend heavily on brand image and consumer goodwill—at risk and could nullify the hundreds of millions of dollars a company spends on advertising. According to Business for Social Responsibility, a nongovernmental organization that provides assistance for companies developing and implementing corporate codes of conduct, adopting and enforcing a code of conduct can be beneficial for retailers because it can strengthen legal compliance in foreign countries, enhance corporate reputation/brand image, reduce the risk of negative publicity, increase quality and productivity, and improve business relationships. Federated Department Stores, Inc.’s code of conduct, for example, states that “when notified by the U.S.
Department of Labor or any state or foreign government, or after determining upon its own inspection that a supplier or its subcontractor has committed a serious violation of law relating to child or forced labor or unsafe working conditions, Federated will immediately suspend all shipments of merchandise from that factory and will discontinue further business with the supplier.” An official from Federated Department Stores, Inc., said that the company would demand that the supplier factory institute the monitoring programs necessary to ensure compliance with its code of conduct prior to the resumption of any business dealings with that supplier. A variety of monitoring organizations, colleges, universities, and nongovernmental organizations have codes of conduct, which have now been widely adopted by the private sector. The International Labor Organization’s Business and Social Initiatives Database includes codes of conduct for about 600 companies. While the military exchanges’ core values oppose the use of child or forced labor to manufacture their merchandise, the military exchanges do not have codes of conduct articulating their views. Examples of Internet Web sites with codes of conduct are included in appendix III. Although retailers in the private sector implement their codes of conduct in various ways, officials of the retailers we contacted told us that they generally train their buying agents and quality assurance employees on their codes of conduct to ensure that staff at all stages in the purchasing process are aware of their company’s code. For example, an official at Levi Strauss & Co. stated that his company continually educates its employees, including merchandisers, contract managers, general managers in source countries, and other personnel at every level of the organization during the year.
Officials of the retailers we contacted told us they also have distributed copies of their codes of conduct to their domestic and international suppliers and provided them with training on how to comply with the code. In addition, some retailers required suppliers to post codes of conduct and other sources of labor information in their factories in the workers’ native language. For example, The Walt Disney Company has translated its code of conduct into 50 different languages and requires each of its suppliers to post the codes in factories in the appropriate local language. Retailers such as Liz Claiborne, Inc., and Levi Strauss & Co. also work with local human rights organizations to make sure that workers understand and are familiar with their codes of conduct. Some retailers dedicate staff solely to implementing a code of conduct, while other retailers assign these duties to various departments—such as compliance, quality assurance, legal affairs, purchasing agents, and government affairs—as a collateral responsibility. Executives and officials from the retailers we contacted stated that the successful implementation of a code of conduct requires the involvement of departments throughout the supply chain, both internally and externally (including supplier and subcontractor factories). They also stated that the involvement of senior executives is critical because they provide an institutional emphasis that helps to ensure that the code of conduct is integrated throughout the various internal departments of the company. To help ensure that suppliers’ factories are in compliance with their codes of conduct, the retailers we contacted have used a variety of monitoring efforts. 
Retailer officials told us that the extent of monitoring varies and can involve internal monitoring, in which the company uses its own employees to inspect the factories; external monitoring, in which the company contracts with an outside firm or organization to inspect the factories; or a combination of both. The various forms of monitoring involve the visual inspection of factories for health and safety violations; interviewing management to understand workplace policies; reviewing wage, hour, age, and other records for completeness and accuracy; and interviewing workers to verify workplace policies and practices. The 10 retail companies we contacted did not provide a precise cost for their internal and external monitoring programs, but a representative of Business for Social Responsibility estimated that monitoring costs ranged from $250,000 to $15 million a year. Some retailers suggested that the military exchanges could minimize costs by joining together to conduct monitoring, particularly in situations where the exchanges are purchasing merchandise manufactured at the same factories. Companies that rely on internal monitoring use their own staff to monitor the extent to which supplier factories adhere to company policies and standards. According to an official with the National Retail Federation, the world’s largest retail trade association, retailers generally prefer internal monitoring because it provides them with first-hand knowledge of their overseas facilities. At the same time, representatives of the nongovernmental organizations we visited expressed their opinion that inspections performed by internal staff may not be perceived as sufficiently independent. According to information we obtained from the retailers we contacted, nearly all of them had an internal monitoring program to inspect all or some supplier factories; their internal monitoring staff ranged from 5 to 100 auditors located in domestic and international offices. 
Some retailers said they perform prescreening audits before entering into a contractual agreement, followed by announced and unannounced inspections at a later time. The frequency of audits performed at supplier factories depends on various factors, such as the rigor and size of the corporation’s monitoring plan, the location of supplier factories, and complaints from workers or nongovernmental organizations. Some retailers—along with colleges, universities, and factories—are also using external monitoring organizations that provide specially trained auditors to verify compliance with workplace codes of conduct. We visited four of these monitoring organizations—Fair Labor Association, Social Accountability International, Worker Rights Consortium, and Worldwide Responsible Apparel Production. More information on these monitoring organizations appears in appendix II. Each organization has different guidelines for its monitoring program, but typically, a program involves (1) a code of conduct that all participating corporations must implement and (2) the inspection of workplaces at supplier factories participating in the program by audit firms accredited by the organization. External monitoring organizations’ activities differ in scope. For example, under the Fair Labor Association’s program, companies use external monitors accredited by the Fair Labor Association for periodic inspections of factories. In contrast, in the Worldwide Responsible Apparel Production’s program, individual factories are certified as complying with their program. Although differences in scope exist—and have led to debate on the best approach for a company—corporations that are adopting external monitors believe they are valuable for providing an independent assessment of factory working conditions. Some retailers we contacted offered to share their experiences in developing programs to address working conditions in overseas factories. The Departments of Labor and State, the U.S. 
Customs Service, and the International Labor Organization prepare reports that address working conditions in overseas factories. These organizations expressed a willingness to assist the military exchanges in shaping a program to assure that child or forced labor does not produce private label exchange merchandise. Furthermore, the International Labor Organization offered to provide advisory services, technical assistance, and training programs to help the military exchanges define and implement best labor practices throughout their supply chain. The military exchanges lag behind leading retailers in the practices they employ to assure that working conditions are not abusive in overseas factories that manufacture their private label merchandise. As a result, the exchanges do not know if workers in these factories are treated humanely and compensated fairly. The exchanges recently became more interested in developing a program to obtain information on worker rights and working conditions in overseas supplier plants, and the House Armed Services Committee Report for the Fiscal Year 2002 National Defense Authorization Act requires them to do so. However, developing a program that is understood throughout the supply chain, lives up to expectations over time, and is cost-effective will be a challenge. Leading retailers have been addressing these challenges for as long as 10 years and have taken three key steps to promote adequate working conditions and compliance with labor laws and regulations in overseas factories—developing codes of conduct, implementing the codes of conduct by the clear dissemination of expectations, and monitoring to ensure that suppliers’ factories comply with their codes of conduct. Drawing on information and guidance from various U.S. government agencies and the International Labor Organization can facilitate the military exchanges’ development of such a program. 
Information available from these entities could be useful not only in establishing an initial program but also in implementing it over time, and the costs may be minimized by having the military exchanges pursue these efforts jointly. As the Secretary of Defense moves to implement the congressionally directed program to assure that private label exchange merchandise is not produced by child or forced labor, we recommend the Under Secretary of Defense (Personnel and Readiness), in conjunction with the Assistant Secretary of Defense (Force Management Policy), require the Army and Air Force Exchange Service, the Navy Exchange Service Command, and Marine Corps Community Services to develop their program around the framework outlined in this report, including creating a code of conduct that reflects the values and expectations that the exchanges have of their suppliers; developing an implementation plan for the code of conduct that includes steps to communicate the elements of the code to internal staff, business partners, and factory workers and to train them on these elements; developing a monitoring effort to ensure that the codes of conduct are followed; using government agencies, such as the Departments of State and Labor, retailers, and the International Labor Organization as resources for information and insights that would facilitate structuring their program; establishing ongoing communications with these organizations to help the exchanges stay abreast of information that would facilitate their implementation and monitoring efforts to assure that exchange merchandise is not produced by child or forced labor; and pursuing these efforts jointly where there are opportunities to minimize costs. In commenting on a draft of this report, the Assistant Secretary of Defense (Force Management Policy) concurred with its conclusions and recommendations.
The Assistant Secretary identified planned implementing actions for each recommendation and, where action had not already begun, established July 1, 2002, as the date for those actions to be effective. The Department’s written comments are presented in their entirety in appendix IV. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Army; the Secretary of the Navy; the Secretary of the Air Force; the Commander, Army and Air Force Exchange Service; the Commander, Navy Exchange Service Command; the Commander, Marine Corps Community Services; the Director, Office of Management and Budget; and interested congressional committees and members. We will also make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff has any questions concerning this report. Major contributors to this report are listed in appendix V. To compare military exchanges with the private sector in terms of the methods used to identify working conditions at the overseas factories, we limited our work to the exchanges’ efforts related to private label suppliers and performed work at the military exchanges and leading retail companies. To determine the actions of the exchanges to identify working conditions in the factories of their overseas suppliers, we reviewed the policies and procedures governing the contract files, purchase orders, and contractual agreements at the exchanges’ headquarters offices and interviewed officials responsible for purchasing merchandise sold by the exchanges. For example, we reviewed the contracting arrangements between the Army and Air Force Exchange Service (AAFES) and nine of its suppliers of private label merchandise to determine if AAFES had requested information on working conditions in overseas factories and whether the suppliers had signed the contractual documents. 
For historical perspective, we reviewed the results of prior studies and audit reports of the military exchanges. We met with officials and performed work at the headquarters of AAFES in Dallas, Texas; the Navy Exchange Service Command (Navy Exchange) in Virginia Beach, Virginia; and the Marine Corps Community Services (Marine Corps Exchange) in Quantico, Virginia. To determine the actions of the private sector to identify working conditions in the factories of their overseas suppliers, we analyzed 10 leading private sector companies’ efforts to identify working conditions in overseas factories by interviewing company officials and reviewing the documentation they provided. We chose seven of the companies from the National Retail Federation’s list of the 2001 Top 100 Retailers (in terms of sales) in the United States. The retailers and their ranking on the Federation’s list follow: Federated Department Stores, Inc. (15); JCPenney (8); Kohl’s (36); Kmart (5); The Neiman Marcus Group, Inc. (64); Sears, Roebuck and Co. (4); and Wal-Mart (1). The remaining three companies—The Walt Disney Company, Levi Strauss & Co., and Liz Claiborne, Inc.—were chosen on the basis of recommendations from U.S. government agencies, nongovernmental organizations, and industry associations as being among the leaders in efforts to address working conditions in overseas factories. These three companies generally refer to themselves as “manufacturers” or “licensing” organizations, but they also operate retail stores. We interviewed officials and reviewed documents from the Departments of State and Labor, the Office of the United States Trade Representative, and the International Labor Organization to gain a perspective on government and industry efforts to address factory working conditions. We also interviewed officials from industry associations and labor and human rights groups. 
To identify steps the private sector has taken to promote adequate working conditions at factories that could serve as a framework for the exchanges, we focused on the efforts of the 10 retailers. We documented the programs and program elements (e.g., codes of conduct, plans for implementing codes of conduct throughout the supply chain, and monitoring efforts) used by the 10 retailers that we contacted. We did not independently evaluate the private sector programs to determine the effectiveness of their efforts or to independently verify specific allegations of worker rights abuses. Rather, we relied primarily on discussions with retailers’ officials and the documentation they provided. We met with officials from government agencies and reviewed independent studies, such as State and Labor Department and International Labor Organization reports, to gain a perspective on government and industrywide efforts to address working conditions in overseas factories. We documented the procedures the exchanges used to purchase merchandise and interviewed headquarters personnel responsible for buying and inspecting merchandise made overseas. We also reviewed the exchanges’ policies, statements of core values, and oversight programs. To gain a perspective on the various approaches to address worker rights issues, we interviewed nongovernmental organizations and industry associations, including representatives from the National Labor Committee, National Consumers League, International Labor Rights Fund, Global Exchange, Investor Responsibility Research Center, Business for Social Responsibility, National Retail Federation, and the American Apparel and Footwear Association. 
In addition, we interviewed officials from four monitoring organizations—the Fair Labor Association; Social Accountability International; Worldwide Responsible Apparel Production; and Worker Rights Consortium—which inspect factories for compliance with codes of conduct governing labor practices and human rights. To collect information on government enforcement actions and funding for programs to address working conditions in overseas factories, we interviewed officials from the Department of State’s Office of International Labor Affairs, the Department of Labor’s Bureau of International Labor Affairs, the U.S. Customs Service’s Fraud Investigations Office, and the Office of the United States Trade Representative. For an international perspective on worldwide efforts, we visited the International Labor Organization’s offices in Washington, D.C., and Geneva, Switzerland. We performed our review from April through November 2001 in accordance with generally accepted government auditing standards. The Customs Service’s Fraud Investigations Office and its 29 attaché offices in 21 countries investigate cases concerning prison, forced, or indentured labor. The Customs officials work with the Department of State, Department of Commerce, and nongovernmental organizations to collect leads for investigations. In some cases, corporations have told Customs about suspicions they have about one of their suppliers and recommended an investigation. In addition, private citizens can report leads they may have concerning a factory. The Forced Child Labor Center was established as a clearinghouse for investigative leads, a liaison for Customs field offices, and a process to improve enforcement coordination and information. Customs also provides a toll-free hotline in the United States (1-800-BE-ALERT) to collect investigative leads on forced labor abuses. Outreach efforts from the Customs Service involve providing seminars around the world for U.S. 
government agencies, foreign governments, nongovernmental organizations, and corporations concerning forced and indentured labor issues. In December 2000, Customs published a manual entitled Forced Child Labor Advisory, which provides importers, manufacturers, and corporations with information designed to reduce their risk of becoming engaged in a transaction involving imported merchandise produced with forced or indentured child labor. Customs also publishes on its Internet Web site a complete list of outstanding detention orders and findings concerning companies that are suspected of producing merchandise from forced or indentured labor. Customs can issue a detention order if available information reasonably, but not necessarily conclusively, indicates that imported merchandise has been produced with forced or indentured labor; the order may apply to an individual shipment or to the entire output of a type of product from a given firm or facility. If, after an investigation, Customs finds probable cause that a class of merchandise is a product of forced or indentured child labor, it can bar all imports of that product from that firm from entering the United States. On June 5, 1998, the Department of the Treasury’s Advisory Committee on International Child Labor was established to provide the Treasury Department and the U.S. Customs Service with recommendations to strengthen the enforcement of laws against forced or indentured child labor, in particular, through voluntary compliance and business outreach. The Advisory Committee was established to support law enforcement initiatives to stop illegal shipments of products made through forced or indentured child labor and to punish violators. The Committee comprises industry representatives and child labor experts from human rights and labor organizations. 
Customs Service officials told us they have met with leading retailers to provide feedback on their internal monitoring programs to assure that their merchandise is not produced with forced child labor. Customs Service officials expressed a willingness to assist the exchanges in shaping a program to assure that private label exchange merchandise is not produced by child or forced labor. The Department of Labor conducts targeted enforcement sweeps in major garment centers in the United States, but it does not have the authority to inspect foreign factories. In August 1996, the Department of Labor called upon representatives of the apparel industry, labor unions, and nongovernmental organizations to join together as the Apparel Industry Partnership (later becoming the Fair Labor Association) to develop a plan that would assure consumers that apparel imports into the United States are not produced under abusive labor conditions. The Bureau of International Labor Affairs, Department of Labor, has produced seven annual congressionally requested reports on child labor, entitled By the Sweat and Toil of Children, concerning the use of forced labor, codes of conduct, consumer labels, efforts to eliminate child labor, and the economic considerations of child labor. Other relevant reports on worker rights produced by the Bureau include the 2000 Report on Labor Practices in Burma and Symposium on Codes of Conduct and International Labor Standards. Since 1995, the Department of Labor has also contributed $113 million to international child labor activities, including the International Labor Organization’s International Program for the Elimination of Child Labor. In addition, the Department of Labor provided the International Labor Organization with $40 million for both fiscal years 2000 and 2001 for programs in various countries concerning forced labor, freedom of association, collective bargaining, women’s rights, and industrial relations in lesser-developed nations. 
The Department also provides technical assistance to any company that would like to learn how to implement an effective monitoring program, and Labor officials have expressed a willingness to assist the exchanges in shaping a program to assure that private label exchange merchandise is not produced by child or forced labor. On January 16, 2001, the Department of State’s Anti-Sweatshop Initiative awarded $3.9 million in grants to support efforts to eliminate abusive working conditions and protect the health, safety, and rights of workers overseas. The Anti-Sweatshop Initiative is designed to support innovative strategies to combat sweatshop conditions in overseas factories that produce goods for the U.S. market. Five nongovernmental and international organizations (the Fair Labor Association, International Labor Rights Fund, Social Accountability International, American Center for International Solidarity, and the International Labor Organization) received over $3 million. In addition, the U.S. Agency for International Development will administer an additional $600,000 for smaller grants in support of promising strategies to eliminate abusive labor conditions worldwide. The Department of State’s Bureau of Democracy, Human Rights, and Labor publishes Country Reports on Human Rights Practices, a legislatively mandated annual report to Congress concerning worker rights issues, including child labor and freedom of association in 194 countries. Retailers and manufacturers stated they have utilized these reports to stay abreast of human and labor rights issues in a particular country and to make factory selections. The Department of State has expressed a willingness to assist the exchanges in shaping a program to assure that private label exchange merchandise is not produced by child or forced labor. The Office of the U.S. 
Trade Representative leads an interagency working group—the Trade Policy Staff Committee—which has the right to initiate worker rights petition cases under the Generalized System of Preferences. The Generalized System of Preferences Program establishes trade preferences to provide duty-free access to the United States for designated products from eligible developing countries worldwide to promote development through trade rather than traditional aid programs. A fundamental criterion for the Generalized System of Preferences is that the beneficiary country has taken or is taking steps to afford workers internationally recognized worker rights, including the right to association; the right to organize and bargain collectively; a prohibition against compulsory labor; a minimum age for the employment of children; and regulations governing minimum wages, hours of work, and occupational safety and health. Under the Generalized System of Preferences, any interested party may petition the committee to review the eligibility status of any country designated for benefits. If a country is selected for review, the committee then conducts its own investigation of labor conditions and decides whether or not the country will continue to receive Generalized System of Preferences benefits. Interested parties may also submit testimony during the review process. In addition, U.S. Trade Representatives can express their concern about worker rights issues in a country to foreign government officials, which may place pressure on supplier factories to resolve labor conditions. (The general authority for duty-free treatment expired on September 30, 2001. Proposed legislation provides for an extension with retroactive application similar to previous extensions of this authority. Authority for sub-Saharan African countries continues through September 30, 2008 [19 U.S.C. 2466b]). 
The International Labor Organization is a United Nations specialized agency that seeks to promote social justice and internationally recognized human and labor rights. It has information on codes of conduct, research programs, and technical assistance to help companies address human rights and labor issues. Currently, the International Labor Organization is developing training materials to provide mid-level managers with practical guidance on how to promote each of its four fundamental labor principles both internally and throughout a company’s supply chain. The following are the four fundamental principles: (1) freedom of association and the effective recognition of the right to collective bargaining, (2) the elimination of all forms of forced or compulsory labor, (3) the effective abolition of child labor, and (4) the elimination of discrimination in employment. These principles are contained in the International Labor Organization’s Declaration on Fundamental Principles and Rights at Work and were adopted by the International Labor Conference in 1998. To promote the principles, the U.S. Department of Labor is funding various projects pertaining to improving working conditions in the garment and textile industry and is addressing issues of freedom of association, collective bargaining, and forced labor in the following regions or countries: Bangladesh, Brazil, Cambodia, the Caribbean, Central America, Colombia, East Africa, East Timor, Kenya, India, Indonesia, Jordan, Morocco, Nigeria, Nepal, Vietnam, southern Africa, and Ukraine. For fiscal years 2000 and 2001, these projects received about $40 million in funding. On January 16, 2001, the International Labor Organization was awarded $496,974 by the Department of State Anti-Sweatshop Initiative to research how multinational corporations ensure compliance with their labor principles. Another research project seeks to demonstrate the link between international labor standards and good business performance. 
A major product of the research will be a publication for company managers that looks at the relationship between International Labor Organization conventions and company competitiveness and that then examines how adhering to specific standards (i.e., health and safety, human resource development, and workplace consultations) can improve corporate performance. The International Labor Organization has also created the Business and Social Initiatives Database, which includes extensive information on corporate policies and reports, codes of conduct, accreditation and certification criteria, and labeling programs on its Web site. For example, the database contains an estimated 600 codes of conduct from corporations, nongovernmental organizations, and international organizations. From fiscal year 1995 through fiscal year 2001, the Congress appropriated over $113 million for the Department of Labor for international child labor activities, including the International Labor Organization’s International Program on the Elimination of Child Labor. The program has estimated that the United States will pledge $60 million for the 2002-2003 period. The United States is the single largest contributor to the International Program on the Elimination of Child Labor, which has focused on the following four objectives: (1) eliminating child labor in specific hazardous and/or abusive occupations (these targeted projects aim to remove children from work, provide them with educational opportunities, and generate alternative sources of income for their families); (2) bringing more countries that are committed to addressing their child labor problem into the program; (3) documenting the extent and nature of child labor; and (4) raising public awareness and understanding of international child labor issues. 
The program has built a network of key partners in 75 member countries (including government agencies, nongovernmental organizations, media, religious institutions, schools, and community leaders) in order to facilitate policy reform and change social attitudes, so as to lead to the sustainable prevention and abolition of child labor. During fiscal years 2000-2003, the United States is funding programs addressing child labor in the following countries: Bangladesh, Brazil, Cambodia, Colombia, Costa Rica, the Dominican Republic, El Salvador, Ghana, Guatemala, Haiti, Honduras, India, Jamaica, Malawi, Mongolia, Nepal, Nicaragua, Nigeria, Pakistan, the Philippines, Romania, South Africa, Tanzania, Thailand, Uganda, Ukraine, Vietnam, Yemen, and Zambia, as well as the following regions: Africa, Asia, Central America, Inter-America, and South America. Business for Social Responsibility, headquartered in San Francisco, California, is a membership organization for companies, including retailers, seeking to sustain their commercial success in ways that demonstrate respect for ethical values, people, communities, and the environment. (Its sister organization, the Business for Social Responsibility Education Fund, is a nonprofit charitable organization serving the broader business community and the general public through research and educational programs.) In 1995, this organization created the Business and Human Rights Program to address the range of human rights issues that its members face in using factories located in developing countries. 
The Business and Human Rights Program provides a number of services; for example, it offers (1) counsel and information to companies developing corporate human rights policies, including codes of conduct and factory selection guidelines for suppliers; (2) information services on human rights issues directly affecting global business operations, including country-specific and issue-specific materials; (3) a means of monitoring compliance with corporate codes of conduct and local legal requirements, including independent monitoring; (4) a mechanism for groups of companies, including trade associations, to develop collaborative solutions to human rights issues; and (5) the facilitation of dialogue between the business community and other sectors, including the government, media, and human rights organizations. The Fair Labor Association, a nonprofit organization located in Washington, D.C., offers a program that incorporates both internal and external monitoring. In general, the Association accredits independent monitors, certifies that companies are in compliance with its code of conduct, and serves as a source of information for the public. Companies affiliated with the Association implement an internal monitoring program consistent with the Fair Labor Association’s Principles of Monitoring, covering at least one-half of all their applicable facilities during the first year of their participation, and covering all of their facilities during the second year. In addition, participating companies commit to using independent external monitors accredited by the Fair Labor Association to conduct periodic inspections of at least 30 percent of the company’s applicable facilities during its initial 2- to 3-year participation period. On January 16, 2001, the Fair Labor Association was awarded $750,000 by the Department of State’s Anti-Sweatshop Initiative to enable the organization to recruit, accredit, and maintain a diverse roster of external monitors around the world. 
The Fair Labor Association’s participating companies include the following: Adidas-Salomon A.G.; Nike, Inc.; Reebok International Ltd.; Levi Strauss & Co.; Liz Claiborne, Inc.; Patagonia; GEAR for Sports; Eddie Bauer; Josten’s Inc.; Joy Athletic; Charles River Apparel; Phillips-Van Heusen Corporation; and Polo Ralph Lauren Corporation. Global Exchange, headquartered in San Francisco, California, is a nonprofit research, education, and action center dedicated to increasing global awareness among the U.S. public while building international partnerships around the world. Global Exchange has filed and supported class-action lawsuits against 26 retailers and manufacturers concerning alleged sweatshop abuse in Saipan’s apparel factories. As of September 2001, 19 of those corporations had settled for $8.75 million and agreed to adopt a code of conduct and a monitoring program in the Saipan factories that produce their merchandise. The International Labor Rights Fund is a nonprofit action and advocacy organization located in Washington, D.C. It pursues legal and administrative actions on behalf of working people, creates innovative programs and enforcement mechanisms to protect workers’ rights, and advocates for better protections for workers through its publications; testimony before national and international hearings; and speeches to academic, religious, and human rights groups. The Fund is currently participating in various lawsuits against multinational corporations involving labor rights in Burma, Colombia, Guatemala, and Indonesia. In 1996, the International Labor Rights Fund and Business for Social Responsibility were key facilitators in establishing a monitoring program for a Liz Claiborne, Inc., supplier factory in Guatemala. The Guatemalan nongovernmental monitoring organization, Coverco, was founded from this process and has since published two public reports on the results of its meetings with factory management and factory workers. 
Officials at Liz Claiborne, Inc., stated that the monitoring initiative has been very effective in detecting and correcting problems and helpful in offering ideas for best practices and has provided enhanced credibility for the company’s monitoring efforts. In 2001, the International Labor Rights Fund was awarded an Anti-Sweatshop Initiative grant from the Department of State in the amount of $152,880. The Fund plans to undertake a project to work with labor rights organizations in Africa, Asia, and Latin America to build a global campaign for national and international protections for female workers. The Fund will conduct worker surveys and interviews in Africa and the Caribbean to determine the extent of the problem. In addition, the Fund and its nongovernmental organization partners will develop an educational video to help alert women workers in these countries about the problem of sexual harassment. The Investor Responsibility Research Center, located in Washington, D.C., is a research and consulting organization that performs independent research on corporate governance and corporate responsibility issues. The Center contributed to the University Initiative Final Report, which collected information on working conditions in university-licensed apparel factories in China, El Salvador, Mexico, Pakistan, South Korea, Thailand, and the United States. The report addresses steps the universities can implement to address poor labor conditions in licensee factories and ongoing efforts by government and nongovernmental organizations to improve working conditions in the apparel industry. The report is based on factory visits and interviews with nongovernmental organizations, labor union officials, licensees, factory owners and managers, and government officials. The National Consumers League is a nonprofit organization located in Washington, D.C. Its mission is to identify, protect, represent, and advance the economic and social interests of consumers and workers. 
Created in 1899, the National Consumers League is the nation’s oldest consumer organization. The League worked for the national minimum wage provisions in the Fair Labor Standards Act (passed in 1938) and has helped organize the Child Labor Coalition, which is committed to ending child labor exploitation in the United States and abroad. The Child Labor Coalition comprises more than 60 organizations representing educators, health groups, religious and women’s groups, human rights groups, consumer groups, labor unions, and child labor advocates. The Coalition works to end child labor exploitation in the United States and abroad and to protect the health, education, and safety of working minors. The National Labor Committee is a nonprofit human rights organization located in New York City. Its mission is to educate and actively engage the U.S. public on human and labor rights abuses by corporations. Through education and activism, the committee aims to end labor and human rights violations. The committee has led “Corporate Accountability Campaigns” against major retailers and manufacturers to improve factory conditions. In El Salvador, the National Labor Committee has facilitated an independent monitoring program between (1) The GAP, the retailer; (2) Jesuit University in San Salvador, the human rights office of the Catholic Archdiocese; and (3) the Center for Labor Studies, a nongovernmental organization. The committee advocates that corporations should disclose supplier factory locations and hire local religious or human rights organizations to conduct inspections in factories. Social Accountability International, founded in 1997, is located in New York City, New York. It is a nonprofit monitoring organization dedicated to the development, implementation, and oversight of voluntary social accountability standards in factories around the world. 
In response to the inconsistencies among workplace codes of conduct, Social Accountability International developed a standard, named the Social Accountability 8000 standard, for workplace conditions and a system for independently verifying compliance of factories. The Social Accountability 8000 standard promotes human rights in the workplace and is based on internationally accepted United Nations and International Labor Organization conventions. Social Accountability 8000 requires individual facilities to be certified by independent, accredited certification firms with regular follow-up audits. As of November 2001, 82 Social Accountability 8000 certified factories were located in 21 countries throughout Asia, Europe, North America, and South America. U.S. and international companies adopting the Social Accountability 8000 standard are Avon, Cutter & Buck, Eileen Fisher, and Toys R Us. In 2001, Social Accountability International was awarded an Anti-Sweatshop Initiative grant of $1 million from the Department of State for improving social auditing through research and collaboration; capacity building; consultation with trade unions, nongovernmental organizations, and small and medium-sized enterprises; and consumer education. These projects will take place in several countries, including Brazil, China, Poland, and Thailand, and consumer education will be focused on the United States. The Worker Rights Consortium, a nonprofit monitoring organization located in Washington, D.C., provides a factory-based certification program for university licensees. University students, administrators, and labor rights activists created the Worker Rights Consortium to assist in the enforcement of manufacturing codes of conduct adopted by colleges and universities; these codes are designed to ensure that factories producing goods bearing college and university logos respect the basic rights of workers. 
The Worker Rights Consortium investigates factory conditions and reports its findings to universities and the public. Where violations are uncovered, the Consortium works with colleges and universities, U.S.-based retail corporations, and local worker organizations to correct the problem and improve conditions. It is also working to develop a mechanism to ensure that workers producing college logo goods can bring complaints about code of conduct violations, safely and confidentially, to the attention of local nongovernmental organizations and the Worker Rights Consortium. As of November 2001, 92 colleges and universities had affiliated with the Worker Rights Consortium, adopting and implementing a code of conduct in contracts with licensees. The Worldwide Responsible Apparel Production, a nonprofit monitoring organization located in Washington, D.C., monitors and certifies compliance with socially responsible standards for manufacturing and ensures that sewn products are produced under lawful, humane, and ethical conditions. The basis for creating the monitoring and certification program came from apparel producers that requested that the American Apparel & Footwear Association address inconsistent company standards and repetitive monitoring. The program is a factory certification program that requires a factory to perform a self-assessment followed by an evaluation by a monitor from the Worldwide Responsible Apparel Production Certification Program. On the basis of this evaluation, the monitor will either recommend that the facility be certified or identify areas where corrective action must be taken before such a recommendation can be made. Following a satisfactory recommendation from the monitor, the Worldwide Responsible Apparel Production Certification Board will review the documentation of compliance and decide upon certification. 
The Certification Program was pilot tested in 2000 at apparel manufacturing facilities in Central America, Mexico, and the United States. As of November 2001, 500 factories in 47 countries had registered to become certified.

American Apparel & Footwear Association
The American Apparel & Footwear Association, a national trade association located in Washington, D.C., represents roughly 800 U.S. apparel, footwear, and supplier companies whose combined industries account for more than $225 billion in annual U.S. retail sales. The Association was instrumental in creating the Worldwide Responsible Apparel Production monitoring program. The Association’s Web site states that “members are committed to socially responsible business practices and to assuring that sewn products are produced under lawful, humane, and ethical conditions.” The American Apparel & Footwear Association has also created a Social Responsibility Committee, in which various manufacturers meet to discuss their programs to address worker rights issues.

National Retail Federation
As the world’s largest retail trade association, the National Retail Federation, located in Washington, D.C., conducts programs and services in research, education, training, information technology, and government affairs to protect and advance the interests of the retail industry. The Federation’s membership includes the leading department, specialty, independent, discount, and mass merchandise stores in the United States and 50 nations around the world. It represents more than 100 state, national, and international trade organizations, which have members in most lines of retailing. The National Retail Federation also includes in its membership key suppliers of goods and services to the retail industry. The Federation has a Web site link entitled “Stop Sweatshops,” which provides information on the retail industry’s response to sweatshops, including forms of monitoring and a brief history of U.S. sweatshops.
The Federation also has an International Trade Advisory Council, comprising retail and sourcing representatives, which discusses issues pertaining to international labor laws, international trade, and customs matters in both the legislative and regulatory areas. The codes of conduct that the retailers we visited have posted on the Internet are available at the Web sites shown in table 4. In addition to those named above, Nelsie Alcoser, Jimmy Palmer, and Susan Woodward made key contributions to this report.
The military exchanges operate retail stores similar to department stores in more than 1,500 locations worldwide. The exchanges stock merchandise from many sources, including name-brand companies, brokers and importers, and overseas firms. Reports of worker rights abuses, such as child labor and forced overtime, and antilabor practices have led human rights groups and the press to scrutinize working conditions in overseas factories. GAO found that the military exchanges are not as proactive as private sector companies in determining working conditions at the overseas factories that manufacture their private label merchandise. Moreover, the exchanges have not sought to verify that overseas factories comply with labor laws and regulations. A single industrywide standard for working conditions at overseas factories was not considered practical by the 10 retailers GAO contacted. However, these retailers have taken the following three steps to ensure that goods are not produced by child or forced labor: (1) developing workplace codes of conduct that reflect their expectations of suppliers; (2) disseminating information on fair and safe labor conditions and educating their employees, suppliers, and factory workers on them; and (3) using their own employees or contractors to regularly inspect factories to ensure that their codes of conduct are upheld.
The Health Center Program is governed by section 330 of the Public Health Service Act. By law, grantees with community health center funding must operate health center sites that serve, in whole or in part, a medically underserved area (MUA) or medically underserved population (MUP); provide comprehensive primary care services as well as enabling services, such as translation and transportation, that facilitate access to health care; are available to all residents of the health center service area, with fees on a sliding scale based on patients’ ability to pay; are governed by a community board of which at least 51 percent of the members are patients of the health center; and meet performance and accountability requirements regarding administrative, clinical, and financial operations. The Health Resources and Services Administration (HRSA) may designate a geographic area—such as a group of contiguous counties, a single county, or a portion of a county—as an MUA based on the agency’s index of medical underservice, composed of a weighted sum of the area’s infant mortality rate, percentage of population below the federal poverty level, ratio of population to the number of primary care physicians, and percentage of population aged 65 and over. In previous reports, we identified problems with HRSA’s methodology for designating MUAs, including the agency’s lack of timeliness in updating its designation criteria. HRSA published a notice of proposed rulemaking in 1998 to revise the MUA designation system, but it was withdrawn because of a number of issues raised in over 800 public comments. In February 2008, HRSA published a revised proposal, and the period for public comment closed in June 2008. HRSA uses a competitive process to award Health Center Program grants. There are four types of health center grants available through the Health Center Program, but only new access point grants are used to establish new health center sites.
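The designation mechanics described above reduce to a weighted sum of four area measures compared against a qualifying cutoff. The sketch below illustrates that structure; the 0-25 component scaling, the function names, and the 62.0 cutoff are illustrative assumptions, not HRSA's published conversion tables.

```python
# Minimal sketch of MUA designation: the index of medical underservice
# sums four component scores (infant mortality, poverty, population per
# primary care physician, share of population aged 65 and over). The
# pre-scaled 0-25 component scores and the 62.0 qualifying cutoff are
# ASSUMPTIONS for illustration, not HRSA's published tables.

def underservice_index(infant_mortality: float, poverty: float,
                       physician_ratio: float, elderly: float) -> float:
    """Sum the four component scores (each assumed pre-scaled to 0-25)."""
    return infant_mortality + poverty + physician_ratio + elderly

def qualifies_as_mua(index: float, cutoff: float = 62.0) -> bool:
    """Lower index means greater underservice; at or below cutoff qualifies."""
    return index <= cutoff

# An area whose component scores sum to 58 would qualify under this sketch.
area_index = underservice_index(10.0, 12.0, 18.0, 18.0)
```

Under this sketch, a well-served area with high component scores would fall above the cutoff and not qualify.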
Since 2005, HRSA has evaluated applications for new access point grants using eight criteria for which an application can receive a maximum of 100 points (see table 1). Grant applications are evaluated by an objective review committee—a panel of independent experts, selected by HRSA, who have health center-related experience. The objective review committee scores the applications by awarding up to the maximum number of points allowed for each criterion and prepares summary statements that detail an application’s strengths and weaknesses in each evaluative criterion. The summary statements also contain the committee’s recommended funding amounts and advisory comments for HRSA’s internal use; for example, the committee may recommend that HRSA consider whether the applicant’s budgeted amount for physician salaries is appropriate. The committee develops a rank order list—a list of all evaluated applications in descending order by score. HRSA uses the internal comments—recommended funding amounts and advisory comments—from the summary statements and the rank order list when making final funding decisions. In addition, HRSA is required to take into account the urban/rural distribution of grants, the distribution of funds to different types of health centers, and whether a health center site is located in a sparsely populated rural area. HRSA also considers the geographic distribution of health center sites—to determine if overlap exists in the areas served by the sites—as well as the financial viability of grantees. After the funding decisions are made, HRSA officials review the summary statements for accuracy, remove the recommended funding amounts and any advisory comments, and send the summary statements to unsuccessful applicants as feedback.
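The committee's scoring and rank order list can be sketched as follows. The criterion names and per-criterion point caps here are hypothetical stand-ins for the eight criteria in table 1 (not reproduced in this excerpt), chosen only so that the caps total 100 points; "need" and "response" are the two criteria named later in this report.

```python
# Sketch of the objective review: each application is scored per criterion
# (100 points maximum overall), and the rank order list sorts applications
# by total score, descending. Criterion names and caps are hypothetical.

from typing import Dict, List, Tuple

MAX_POINTS: Dict[str, int] = {
    "need": 25, "response": 25, "collaboration": 10,
    "evaluative_measures": 10, "resources": 10,
    "governance": 10, "support": 5, "budget": 5,
}  # caps sum to 100

def total_score(scores: Dict[str, int]) -> int:
    """Total an application's points, enforcing each criterion's cap."""
    for criterion, pts in scores.items():
        if not 0 <= pts <= MAX_POINTS[criterion]:
            raise ValueError(f"score out of range for {criterion}")
    return sum(scores.values())

def rank_order(apps: Dict[str, Dict[str, int]]) -> List[Tuple[str, int]]:
    """Return (applicant, total) pairs in descending order of total score."""
    return sorted(((name, total_score(s)) for name, s in apps.items()),
                  key=lambda pair: pair[1], reverse=True)
```

A usage sketch: given two applications scored on a subset of criteria, `rank_order` lists the higher-scoring applicant first, mirroring the committee's rank order list that HRSA consults alongside its other distributional considerations.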
For fiscal year 2007, HRSA funded 60 training and technical assistance (TA) cooperative agreements with various national, regional, and state organizations to support the Health Center Program, in part, by providing training and technical assistance to health center grant applicants. Cooperative agreements are a type of federal assistance that entails substantial involvement between the government agency—in this case, HRSA—and the funding recipient—that is, the national, regional, and state organizations. HRSA relies on these training and TA cooperative agreement recipients to identify underserved areas and populations across the country in order to assist the agency in increasing access to primary care services for underserved people. In addition, these cooperative agreement recipients serve as HRSA’s primary form of outreach to potential applicants for health center grants. For each cooperative agreement recipient, HRSA assigns a project officer who serves as a recipient’s main point of contact with the agency. The duration of a cooperative agreement, known as the project period, is generally 2 or 3 years, with each year known as a budget period. As a condition of the cooperative agreements, HRSA project officers and the organizations jointly develop work plans detailing the specific training and technical assistance activities to be conducted during each budget period. Activities targeted to new access point applicants can include assistance with assessing community needs, disseminating information in underserved communities regarding health center program requirements, and developing and writing grant applications. After cooperative agreement recipients secure funding through a competitive process, they reapply for annual funding through what is known as a noncompeting continuation application each budget period until the end of their project period.
These continuation applications typically include a work plan and budget for the upcoming budget period and a progress report on the organization’s current activities. HRSA policy states that cooperative agreement recipients will undergo a comprehensive on-site review by agency officials once every 3 to 5 years. During these comprehensive on-site reviews, HRSA evaluates the cooperative agreement recipients using selected performance measures—developed in collaboration with the organizations—and requires recipients to develop action plans to improve operations if necessary. The purpose of these reviews is for the agency to evaluate the overall operations of all its funding recipients and improve the performance of its programs. Almost half of MUAs nationwide lacked a health center site in 2006. The percentage of MUAs that lacked a health center site varied widely across census regions and states. We could not determine the types of primary care services provided by health center sites in MUAs because HRSA does not maintain data on the types of services offered at each site. Because of this, the extent to which individuals in MUAs have access to the full range of comprehensive primary care services provided by health center sites is unknown. Based on our analysis of HRSA data, we found that 47 percent of MUAs nationwide—1,600 of 3,421—lacked a health center site in 2006. We found wide variation among census regions—Northeast, Midwest, South, and West—and across states in the percentage of MUAs that lacked health center sites. (See fig. 1.) The Midwest census region had the most MUAs that lacked a health center site (62 percent), while the West census region had the fewest MUAs that lacked a health center site (32 percent).
More than three-quarters of the MUAs in 4 states—Nebraska (91 percent), Iowa (82 percent), Minnesota (77 percent), and Montana (77 percent)—lacked a health center site; in contrast, fewer than one-quarter of the MUAs in 13 states—including Colorado (21 percent), California (20 percent), Mississippi (20 percent), and West Virginia (19 percent)—lacked a health center site. (See app. I for more detail on the percentage of MUAs in each state and the U.S. territories that lacked a health center site in 2006.) In 2006, among all MUAs, 32 percent contained more than one health center site; among MUAs with at least one health center site, 60 percent contained multiple health center sites. Almost half of all MUAs in the West census region contained more than one health center site, while less than one-quarter of MUAs in the Midwest contained multiple health center sites. The states with three-quarters or more of their MUAs containing more than one health center site were Alaska, Connecticut, the District of Columbia, Hawaii, New Hampshire, and Rhode Island. In contrast, Nebraska, Iowa, and North Dakota were the states where less than 10 percent of MUAs contained multiple sites. We could not determine the types of primary care services provided at each health center site because HRSA does not collect and maintain readily available data on the types of services provided at individual health center sites. While HRSA requests information from applicants in their grant applications on the services each site provides, in order for HRSA to access and analyze individual health center site information on the services provided, HRSA would have to retrieve this information from the grant applications manually. HRSA separately collects data through the Uniform Data System (UDS) from each grantee on the types of services it provides across all of its health center sites, but it does not collect data on services provided at each site.
Although each grantee with community health center funding is required to provide the full range of comprehensive primary care services, it is not required to provide all services at each health center site it operates. HRSA officials told us that some sites provide limited services—such as dental or mental health services. Because HRSA lacks readily available data on the types of services provided at individual sites, it cannot determine the extent to which individuals in MUAs have access to the full range of comprehensive primary care services provided by health center sites. This lack of basic information can limit HRSA’s ability to assess the full range of primary care services available in needy areas when considering the placement of new access points and limit the agency’s ability to evaluate service area overlap in MUAs. Our analysis of new access point grants awarded in 2007 found that these awards reduced the number of MUAs that lacked a health center site by about 7 percent. Specifically, 113 fewer MUAs in 2007—or 1,487 MUAs in all—lacked a health center site when compared with the 1,600 MUAs that lacked a health center site in 2006. As a result, 43 percent of MUAs nationwide lacked a health center site in 2007. Despite the overall reduction in the percentage of MUAs nationwide that lacked health center sites in 2007, regional variation remained. The West and Midwest census regions continued to show the lowest and highest percentages of MUAs that lacked health center sites, respectively. (See fig. 2.) Three of the census regions showed a 1 or 2 percentage point change since 2006, while the South census region showed a 5 percentage point change. The minimal impact of the 2007 awards on regional variation is due, in large part, to the fact that more than two-thirds of the nationwide decline in the number of MUAs that lacked a health center site—77 out of the 113 MUAs—occurred in the South census region. (See table 2.)
In contrast, only 24 of the 113 MUAs were located in the Midwest census region, even though the Midwest had nearly as many MUAs that lacked a health center site in 2006 as the South census region. Overall, while the South census region experienced a decline of 12 percent in the number of MUAs that lacked a health center site, the other census regions experienced declines of approximately 4 percent. The South census region experienced the greatest decline in the number of MUAs lacking a health center site in 2007 compared to other census regions, in large part, because it was awarded more new access point grants that year than any other region. (See table 3.) Specifically, half of all new access point awards made in 2007—from two separate new access point competitions—went to applicants from the South census region. When we examined the High Poverty County new access point competition, in which 200 counties were targeted by HRSA for new health center sites, we found that 69 percent of those awards were granted to applicants from the South census region. (See fig. 3.) The greater number of awards made to the South census region for this competition may be explained by the fact that nearly two-thirds of the 200 counties targeted were located in the South census region. (For detail on the High Poverty County new access point competition by census region and state, see app. II.) When we examined the open new access point competition, which did not target specific areas, we found that the South census region also received a greater number of awards than any other region under that competition. Specifically, the South census region was granted nearly 40 percent of awards; in contrast, the Midwest received only 17 percent of awards. (See table 4.) 
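The coverage percentages reported above follow directly from the counts stated in this report and can be recomputed in a few lines, using only figures given in the text:

```python
# Recomputing the report's MUA coverage figures from the stated counts.
total_muas = 3421          # MUAs nationwide
lacking_2006 = 1600        # MUAs without a health center site in 2006
newly_covered_2007 = 113   # MUAs that gained a first site via 2007 awards

lacking_2007 = lacking_2006 - newly_covered_2007             # 1,487 MUAs
pct_2006 = round(100 * lacking_2006 / total_muas)            # 47 percent
pct_2007 = round(100 * lacking_2007 / total_muas)            # 43 percent
reduction = round(100 * newly_covered_2007 / lacking_2006)   # about 7 percent
```

The arithmetic confirms the report's figures: 47 percent of MUAs lacked a site in 2006, 43 percent in 2007, a reduction of about 7 percent in the number of uncovered MUAs.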
HRSA oversees cooperative agreement recipients, but the agency’s oversight is limited because it does not have standardized performance measures to assess the performance of the cooperative agreement recipients in assisting new access point applicants and the agency is unlikely to meet its policy timeline for conducting comprehensive on-site reviews. Although HRSA officials told us that they were developing standardized performance measures, they provided no details on the specific measures that may be implemented. Moreover, more than a third of the summary statements sent to unsuccessful applicants for new access point competitions held in fiscal years 2005 and 2007 contained unclear feedback. HRSA oversees the activities of its cooperative agreement recipients using a number of methods. HRSA officials told us that over the course of a budget period, project officers use regular telephone and electronic communications to discuss cooperative agreement recipients’ activities as specified in work plans, review the status of these activities, and help set priorities. According to HRSA officials, there is no standard protocol for these communications, and their frequency, duration, and content vary over the course of a budget period and by recipient. HRSA staff also reviews annual noncompeting continuation applications to determine whether the cooperative agreement recipients provided an update on their progress, described their activities and challenges, and developed a suitable work plan and budget for the upcoming budget period. The progress reports submitted by cooperative agreement recipients in these annual applications serve as HRSA’s primary form of documentation on the status of cooperative agreement recipients’ activities. HRSA’s oversight of training and TA cooperative agreement recipients is based on performance measures tailored to the individual organization rather than performance measures that are standardized across all recipients. 
Specifically, HRSA uses individualized performance measures in cooperative agreement recipients’ work plans and comprehensive on-site reviews to assess recipients’ performance. For cooperative agreement recipients’ work plans, recipients propose training and technical assistance activities in response to HRSA’s cooperative agreement application guidance, in which the agency provides general guidelines and goals for the provision of training and technical assistance to health center grant applicants. The guidance requires recipients to develop performance measures for each activity in their work plans. When we analyzed the work plans of the 8 national organizations and 10 primary care associations (PCAs) with training and TA cooperative agreements, we found that these measures varied by cooperative agreement recipient. For example, we found that for national organizations, performance measures varied from (1) documenting that the organization’s marketing materials were sent to PCAs to (2) recording the number of specific technical assistance requests the organization received to (3) producing monthly reports for HRSA detailing information about potential applicants. For state PCAs, measures varied from (1) the PCA providing application review as requested to (2) holding specific training opportunities—such as community development or board development—to (3) identifying a specific number of applicants the PCA would assist during the budget period. Because these performance measures vary for cooperative agreement recipients’ activities, HRSA does not have comparable measures to evaluate the performance of these activities across recipients. HRSA’s oversight of cooperative agreement recipients is limited in some key respects. One limitation is that the agency does not have standardized measures for its assessment of recipients’ performance of training and technical assistance activities.
Without standardized performance measures, HRSA cannot effectively assess the performance of its cooperative agreement recipients with respect to the training and technical assistance they provide to support Health Center Program goals. For example, HRSA does not require that all training and TA cooperative agreement recipients be held to a performance measure that would report the number of successful applicants each cooperative agreement recipient helped develop in underserved communities, including MUAs. Standardized performance measures could help HRSA identify how to better focus its resources to help strengthen the performance of cooperative agreement recipients. HRSA officials told us that they are developing performance measures for the agency’s cooperative agreement recipients, which they plan to implement beginning with the next competitive funding announcement, scheduled for fiscal year 2009. However, HRSA officials did not provide details on the particular measures that the agency will implement, so it is unclear to what extent the proposed measures will allow HRSA to assess the performance of cooperative agreement recipients in supporting Health Center Program goals through such efforts as developing successful new access point grant applicants. HRSA’s oversight is also limited because the agency’s comprehensive on-site reviews of cooperative agreement recipients do not occur as frequently as HRSA policy states. According to HRSA’s stated policy, the agency will conduct these reviews for each cooperative agreement recipient every 3 to 5 years. The reviews are intended to assess—and thereby potentially improve—the performance of the cooperative agreement recipients in supporting the overall goals of the Health Center Program. This support can include helping potential applicants apply for health center grants, identifying underserved areas and populations across the country, and helping HRSA increase access to primary care services for underserved populations.
As part of the comprehensive on-site reviews, HRSA officials consult with the relevant project officer, examine the scope of the activities cooperative agreement recipients have described in their work plans and reported in their progress reports, and develop performance measures in collaboration with the recipient. Similar to the performance measures in cooperative agreement recipients’ work plans, the performance measures used during comprehensive on-site reviews are also individually tailored and vary by recipient. For example, during these reviews, some recipients are assessed using performance measures that include the number of training and technical assistance hours the recipients provided; other recipients are assessed using measures that include the number of applicants that were funded after receiving technical assistance from the recipient or the percentage of the state’s uninsured population that is served by health center sites in the Health Center Program. After an assessment, HRSA asks the recipient to develop an action plan. In these action plans, the reviewing HRSA officials may recommend additional activities to improve the performance of the specific measures they had identified during the review. For example, if the agency concludes that a cooperative agreement recipient needs to increase the percentage of the state’s uninsured population served by health center sites in the Health Center Program, it may recommend that the recipient pursue strategies to develop a statewide health professional recruitment program and identify other funding sources to improve its ability to increase access to primary care for underserved people. Although HRSA’s stated policy is to conduct on-site comprehensive reviews of cooperative agreement recipients every 3 to 5 years, HRSA is unlikely to meet this goal for its training and TA cooperative recipients that target assistance to new access point applicants. 
In the 4 years since HRSA implemented its policy for these reviews in 2004, the agency has evaluated only about 20 percent of cooperative agreement recipients that provide training and technical assistance to grant applicants. HRSA officials told us that they have limited resources each year with which to fund the reviews. However, without these reviews, HRSA does not have a means of obtaining comprehensive information on the performance of cooperative agreement recipients in supporting the Health Center Program, including information on ways the recipients could improve the assistance they provide to new access point applicants. More than a third of summary statements sent to unsuccessful applicants from new access point grant competitions held in fiscal years 2005 and 2007 contained unclear feedback. Based on our analysis of 69 summary statements, we found that 38 percent contained unclear feedback associated with at least one of the eight evaluative criteria, while 13 percent contained unclear feedback in more than one criterion. We defined feedback as unclear when, in regard to a particular criterion, a characteristic of the application was noted as both a strength and a weakness without a detailed explanation supporting each conclusion. We found that 26 summary statements contained unclear feedback. We found 41 distinct examples of unclear feedback in the summary statements. (See table 5.) HRSA’s stated purpose in providing summary statements to unsuccessful applicants is to improve the quality of future grant applications. However, if the feedback HRSA provides in these statements is unclear, it may undermine the usefulness of the feedback for applicants and their ability to successfully compete for new access point grants. 
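The summary-statement tallies above can be reproduced with a simple count over per-statement results. The distribution in the sample below is hypothetical but consistent with the report's figures (69 statements, 26 of them with unclear feedback in at least one criterion, and 13 percent with unclear feedback in more than one):

```python
# Tallying summary statements by how many evaluative criteria drew
# unclear feedback. The sample distribution is HYPOTHETICAL, chosen
# only to match the percentages reported above.

def tally_unclear(counts):
    """counts: per-statement number of criteria with unclear feedback."""
    n = len(counts)
    return {
        "statements": n,
        "pct_at_least_one": round(100 * sum(c >= 1 for c in counts) / n),
        "pct_more_than_one": round(100 * sum(c > 1 for c in counts) / n),
    }

# Assumed split: 9 statements unclear in multiple criteria,
# 17 in exactly one, 43 in none (9 + 17 = 26 unclear statements).
sample = [2] * 9 + [1] * 17 + [0] * 43
```

Running `tally_unclear(sample)` reproduces the reported 38 percent and 13 percent figures.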
Based on our analysis, the largest number of examples of unclear feedback was found in the need criterion, in which applications are evaluated on the description of the service area, communities, target population—including the number served, encounter information, and barriers—and the health care environment. For example, one summary statement indicated that the application clearly demonstrated and provided a compelling case for the significant health access problems for the underserved target population. However, the summary statement also noted that the application was insufficiently detailed and brief in its description of the target population. Seven of the examples of unclear feedback were found in the response criterion, in which applications are evaluated on the applicant’s proposal to respond to the target population’s needs. One summary statement indicated that the application detailed a comprehensive plan for health care services to be provided directly by the applicant or through its established linkages with other providers, including a description of procedures for follow-up on referrals or services with external providers. The summary statement also indicated that the application did not provide a clear plan of health service delivery, including accountability among and between all subcontractors. Awarding new access point grants is central to HRSA’s ongoing efforts to increase access to primary health care services in MUAs. From 2006 to 2007, HRSA’s recent new access point awards achieved modest success in reducing the percentage of MUAs nationwide that lacked a health center site. However, in 2007, 43 percent of MUAs continued to lack a health center site, and the new access point awards made in 2007 had little impact on the wide variation among census regions and states in the percentage of MUAs lacking a health center site.
The relatively small effect of the 2007 awards on geographic variation may be explained, in part, by the fact that the South census region received a greater number of awards than other regions, even though the South was not the region with the highest percentage of MUAs lacking a health center site in 2006. HRSA awards new access point grants to open new health center sites, thus increasing access to primary health care services for underserved populations in needy areas, including MUAs. However, HRSA’s ability to target these awards and place new health center sites in locations where they are most needed is limited because HRSA does not collect and maintain readily available information on the services provided at individual health center sites. Having readily available information on the services provided at each site is important for HRSA’s effective consideration of need when distributing federal resources for new health center sites because each health center site may not provide the full range of comprehensive primary care services. This information can also help HRSA assess any potential overlap of services provided by health center sites in MUAs. HRSA could improve the number and quality of grant applications it receives—and thereby broaden its potential pool of applicants—by better monitoring the performance of cooperative agreement recipients that assist applicants and by ensuring that the feedback unsuccessful applicants receive is clear. However, limitations in HRSA’s oversight of the training and TA cooperative agreement recipients hamper the agency’s ability to identify recipients most in need of assistance. Because HRSA does not have standardized performance measures for these recipients—either for their work plan activities or for the comprehensive on-site reviews—the agency cannot assess recipients’ performance using comparable measures and determine the extent to which they support the overall goals of the Health Center Program.
One standardized performance measure that could help HRSA evaluate the success of cooperative agreement recipients that assist new access point applicants is the number of successful grant applicants each cooperative agreement recipient develops; this standardized performance measure could assist HRSA in determining where to focus its resources to strengthen the performance of cooperative agreement recipients. HRSA’s allocation of available resources has made it unlikely that it will meet its goal of conducting comprehensive on-site reviews of each cooperative agreement recipient every 3 to 5 years. Without these reviews, HRSA does not have comprehensive information on the effectiveness of training and TA cooperative agreement recipients in supporting the Health Center Program, including ways in which they could improve their efforts to help grant applicants. Given the agency’s concern regarding available resources for its comprehensive on-site reviews, developing and implementing standardized performance measures for training and TA cooperative agreement recipients could assist HRSA in determining the cost-effectiveness of its current comprehensive on-site review policy and where to focus its limited resources. HRSA could potentially improve its pool of future applicants by increasing the extent to which it provides clear feedback to unsuccessful applicants on the strengths and weaknesses of their applications. HRSA intends for these summary statements to be used by applicants to improve the quality of future grant applications. However, the unclear feedback HRSA has provided to some unsuccessful applicants in fiscal years 2005 and 2007 does not provide those applicants with clear information that could help them improve their future applications. This could limit HRSA’s ability to award new access point grants to locations where such grants are needed most. 
We recommend that the Administrator of HRSA take the following four actions to improve the Health Center Program: Collect and maintain readily available data on the types of services provided at each health center site to improve the agency’s ability to measure access to comprehensive primary care services in MUAs. Develop and implement standardized performance measures for training and TA cooperative agreement recipients that assist applicants to improve HRSA’s ability to evaluate the performance of its training and TA cooperative agreements. These standardized performance measures should include a measure of the number of successful applicants a recipient assisted. Reevaluate its policy of requiring comprehensive on-site reviews of Health Center Program training and TA cooperative agreement recipients every 3 to 5 years and consider targeting its available resources at comprehensive on-site reviews for cooperative agreement recipients that would benefit most from such oversight. Identify and take appropriate action to ensure that the discussion of an applicant’s strengths and weaknesses in all summary statements is clear. In commenting on a draft of this report, HHS raised concerns regarding the scope of the report and one of our recommendations and concurred with the other three recommendations. (HHS’s comments are reprinted in app. III.) HHS also provided technical comments, which we incorporated as appropriate. HHS said its most significant concern was with our focus on MUAs and the exclusion of MUPs from the scope of our report. In our analysis, we included the health center sites of 90 percent of all Health Center Program grantees. We excluded from our review sites that were associated with the remaining 10 percent of grantees that received HRSA funding to serve specific MUPs only because they are not required to serve all residents of the service area. 
Given our research objective to determine the location of health center sites that provide services to residents of an MUA, we excluded these specific MUPs and informed HRSA of our focus on health center sites and MUAs. We agree with HHS’s comment that it could be beneficial to have information on the number of grants awarded to programs serving both MUAs and MUPs generally to fully assess the coverage of health center sites. HHS also commented that our methodology did not account for the proximity of potential health center sites located outside the boundary of an MUA. While we did not explicitly account for the proximity of potential health center sites located outside an MUA, we did include the entire area of all zip codes associated with an MUA. As a result, the geographic boundary of an MUA in our analysis may be larger than that defined by HRSA, so our methodology erred on the side of overestimating the number of MUAs that contained a health center site. With regard to our reporting on the percentage of MUAs that lacked a health center site, HHS stated that this indicator may be of limited utility, because not all programs serving MUAs and MUPs are comparable to each other due to differences in size, geographic location, and specific demographic characteristics. Specifically, HHS commented that our analysis presumed that the presence of one health center site was sufficient to serve an MUA. In our work, we did not examine whether MUAs were sufficiently served because this was beyond the scope of our work. Moreover, since HRSA does not maintain site-specific information on services provided and each site does not provide the same services, we could not assess whether an MUA was sufficiently served. 
HHS also noted that a health center site may not be the appropriate solution for some small population MUAs; however, we believe it is reasonable to expect that residents of an MUA—regardless of its size, geographic location, and specific demographic characteristics—have access to the full range of primary care services. With regard to our first recommendation that HRSA collect and maintain site-specific data on the services provided at each health center site, HHS acknowledged that site-specific information would be helpful for many purposes, but it said collecting this information would place a significant burden on grantees and raise the program’s administrative expenses. We believe that having site-specific information on services provided would help HRSA better measure access to comprehensive primary health care services in MUAs when considering the placement of new health center sites and facilitate the agency’s ability to evaluate service area overlap in MUAs. HHS concurred with our three other recommendations. With regard to our second recommendation, HHS stated that HRSA will include standardized performance measures with its fiscal year 2009 competitive application cycle for state PCAs and that HRSA plans to develop such measures for the national training and TA cooperative agreement recipients in future funding opportunities. With regard to our third recommendation, HHS commented that HRSA has developed a 5-year schedule for reviewing all state PCA grantees. HHS also stated that HRSA is examining ways to better target onsite reviews for national training and TA cooperative agreement recipients that would most benefit from such a review. Finally, HHS agreed with our fourth recommendation and stated that HRSA is continuously identifying ways to improve the review of applications. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after its issue date. 
At that time, we will send copies of this report to the Secretary of HHS, the Administrator of HRSA, appropriate congressional committees, and other interested parties. We will also make copies of this report available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made major contributions to this report are listed in appendix IV. In addition to the contact named above, Nancy Edwards, Assistant Director; Stella Chiang; Krister Friday; Karen Howard; Daniel Ries; Jessica Cobert Smith; Laurie F. Thurber; Jennifer Whitworth; Rachael Wojnowicz; and Suzanne Worth made key contributions to this report.
Health centers funded through grants under the Health Center Program--managed by the Health Resources and Services Administration (HRSA), an agency in the U.S. Department of Health and Human Services (HHS)--provide comprehensive primary care services for the medically underserved. HRSA provides funding for training and technical assistance (TA) cooperative agreement recipients to assist grant applicants. GAO was asked to examine (1) to what extent medically underserved areas (MUA) lacked health center sites in 2006 and 2007 and (2) HRSA's oversight of training and TA cooperative agreement recipients' assistance to grant applicants and its provision of written feedback to unsuccessful applicants. To do this, GAO obtained and analyzed HRSA data, grant applications, and the written feedback provided to unsuccessful grant applicants and interviewed HRSA officials. Grant awards for new health center sites in 2007 reduced the overall percentage of MUAs lacking a health center site from 47 percent in 2006 to 43 percent in 2007. In addition, GAO found wide geographic variation in the percentage of MUAs that lacked a health center site in both years. Most of the 2007 nationwide decline in the number of MUAs that lacked a site occurred in the South census region, in large part, because half of all awards made in 2007 for new health center sites were granted to the South census region. GAO also found that HRSA lacks readily available data on the services provided at individual health center sites. HRSA oversees training and TA cooperative agreement recipients, but its oversight is limited in key respects and it does not always provide clear feedback to unsuccessful grant applicants. HRSA oversees recipients using a number of methods, including regular communications, review of cooperative agreement applications, and comprehensive on-site reviews. 
However, the agency's oversight is limited because it lacks standardized performance measures to assess the performance of the cooperative agreement recipients and it is unlikely to meet its policy goal of conducting comprehensive on-site reviews of these recipients every 3 to 5 years. The lack of standardized performance measures limits HRSA's ability to effectively evaluate cooperative agreement recipients' activities that support the Health Center Program's goals with comparable measures. In addition, without timely comprehensive on-site reviews, HRSA does not have up-to-date comprehensive information on the performance of these recipients in supporting the Health Center Program. HRSA officials stated that they are in the process of developing standardized performance measures. Moreover, more than a third of the written feedback HRSA sent to unsuccessful Health Center Program grant applicants in fiscal years 2005 and 2007 contained unclear statements. The lack of clarity in this written feedback may undermine its usefulness rather than enhance the ability of applicants to successfully compete for grants in the future.
The Food Stamp Program provides low-income households with paper coupons or electronic benefits that can be redeemed for food in about 156,000 stores across the nation. In fiscal year 2001, the Congress appropriated $20.1 billion for the Food Stamp Program. FNS establishes regulations for implementing the Food Stamp Program, reviews states’ operating plans to ensure compliance with the regulations, and funds food stamp benefits and about half of the states’ administrative costs. The states administer the program by determining whether households meet the program’s income and asset requirements, calculating monthly benefits for qualified households, and issuing benefits to participants. Household eligibility and benefit amounts are based on nationwide federal criteria, including household size and income, assets, housing costs, and work requirements. FNS monitors states’ performance by assessing how accurately they determine food stamp eligibility and calculate benefits. Under FNS’ quality control system, the states calculate their payment errors by annually drawing a statistical sample of at least 300 to 1,200 active cases, depending on the average monthly caseload. The states review case information and make home visits to determine whether households were eligible for benefits and received the correct benefit payment. FNS regional offices validate the results by reviewing a subset of each state’s sample to determine its accuracy, and make adjustments to the state’s overpayment and underpayment errors as necessary. Until the mid-1990s, most recipients used benefits provided in the form of coupons to purchase allowable food. According to FNS, as of March 2001, 41 states, the District of Columbia, and Puerto Rico have operational food stamp EBT systems. Thirty-nine of these systems are operating statewide. 
All states are to implement EBT systems by October 1, 2002, unless USDA waives the requirement. By providing benefits electronically, the federal government saves time and money because the process of providing the coupons is eliminated. Furthermore, an EBT system creates an electronic record of each food stamp transaction, making it easier to identify and document instances of fraud and abuse in the program. Recent legislative initiatives to reform welfare have also affected Food Stamp Program operations. Specifically, the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA), which was passed in 1996 to reform the nation’s welfare system, also modified aspects of the Food Stamp Program. To reform welfare, PRWORA replaced the Aid to Families with Dependent Children entitlement program with the TANF program and gave the states responsibility for administering TANF through block grant funding. In implementing welfare reform, the states have, for example, used PRWORA’s flexibility to (1) require that applicants look for jobs before their TANF applications are processed; (2) offer one-time, lump-sum payments (known as diversion payments) to potential applicants rather than enroll them in the TANF program; and (3) disqualify individuals from participation in the Food Stamp Program if they have committed TANF violations, thereby reducing the household’s total food stamp benefit. Almost all of the states use a single application for the food stamp and welfare programs to reduce administrative costs, even though the eligibility rules for these two programs are different. Though welfare reform retained the Food Stamp Program as an entitlement for qualifying participants, it tightened eligibility requirements and eased administrative requirements. It disqualified able-bodied adults without dependents who, during the preceding 36-month period, received food stamp benefits for at least 3 months but worked less than 20 hours per week. 
Similarly, the act required that the states, by August 1997, remove from their rolls most permanent resident aliens who were previously eligible to receive food stamps. In addition, PRWORA replaced several specific administrative requirements with more general standards that give states more flexibility in operating their food stamp programs. Over the years, we have reported on program integrity concerns in the Food Stamp Program. Fraud, waste, and abuse in the program generally occur in the form of either improper payments to food stamp recipients or trafficking in food stamp benefits. The states and FNS have taken steps to reduce inaccurate payments to food stamp recipients and reduce trafficking in food stamp benefits. However, all of the state officials we contacted for a recent study believe that the most effective way to reduce payment errors and program costs is to simplify food stamp rules, such as those pertaining to program eligibility. Inaccurate payments can be in the form of overpayments or underpayments to food stamp recipients. Overpayments occur when ineligible persons are provided food stamps, as well as when eligible persons are provided more than they are entitled to receive. Overpayments can be caused by inadvertent or intentional errors made by recipients and caseworkers. According to FNS’ quality control system, the states overpaid food stamp recipients about $976 million in fiscal year 2000 and underpaid recipients about $360 million. Together, overpayment and underpayment errors amounted to about 9 percent of food stamp benefits. About 54 percent of these errors occurred when state food stamp workers made mistakes, such as misapplying complex food stamp rules in calculating benefits. The remaining 46 percent of the errors occurred because participants, either inadvertently or deliberately, did not provide accurate information to state food stamp offices. 
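As a rough illustration of how the overpayment and underpayment figures above combine into the roughly 9 percent error rate FNS reported, the arithmetic can be sketched as follows. The benefit base below is inferred from the figures in this testimony for illustration only; it is not an FNS statistic.

```python
# Figures cited in the testimony for fiscal year 2000.
overpayments = 976_000_000   # dollars overpaid to recipients
underpayments = 360_000_000  # dollars underpaid to recipients

# Overpayments and underpayments are added together, not netted,
# when computing the combined error figure.
total_errors = overpayments + underpayments

# The testimony states these errors together were about 9 percent of
# benefits; backing out the implied benefit base is an inference for
# illustration, not a reported number.
implied_benefit_base = total_errors / 0.09

print(f"Total payment errors: ${total_errors:,}")
print(f"Implied benefit base: ${implied_benefit_base:,.0f}")
print(f"Combined error rate: {total_errors / implied_benefit_base:.0%}")
```

Note that the implied base is smaller than the $20.1 billion appropriation cited earlier, which is consistent with part of the appropriation covering administrative costs rather than benefits.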
While the states and FNS have taken steps to address payment errors, state officials told us that they believe simplifying food stamp rules will have the greatest impact on reducing payment errors and program cost. In a recent report we identified states’ efforts to minimize food stamp payment errors and examined what FNS has done or could do to encourage and assist the states in reducing such errors. We found that all 28 states we contacted had taken action in recent years to reduce payment errors. While these states took various actions to reduce payment error rates, most states took the following five actions: verified the accuracy of benefit payments calculated by state food stamp workers through supervisory and other types of case file reviews, provided specialized training for state food stamp workers, analyzed quality control data to identify causes of common payment errors and develop corrective actions, matched food stamp rolls with other federal and state computer databases to identify ineligible participants or verify income and asset information provided by food stamp recipients, and used computer software programs to assist caseworkers in determining benefit amounts. Some states also increased the frequency with which certain types of food stamp households must provide documentation in order to maintain their eligibility for food stamp benefits—a process called recertification. For example, even though FNS regulations require that food stamp households be recertified only annually, almost half of the states we contacted require households with earned income to be recertified quarterly because their incomes tend to fluctuate, increasing the likelihood of payment errors. More frequent certification enables caseworkers to verify the accuracy of household income and make appropriate adjustments to household benefits, possibly avoiding a payment error. 
However, more frequent certification can also inhibit program participation for eligible participants because it creates additional reporting burdens for food stamp recipients. FNS has taken several steps to encourage states to minimize their payment error rates, including providing financial incentives to states that have error rates substantially below the national average and imposing financial sanctions on states that exceed the national average. When error rates are too high, states are required to either pay a penalty fee or provide additional state funds—beyond their normal share of administrative costs—to be reinvested in error-reduction efforts, such as additional training in calculating benefits. In fiscal year 2000, FNS imposed $46 million in financial sanctions on 18 states whose error rates were above the national average of 9 percent. In that same year, FNS provided $55 million in enhanced funding to 11 states whose payment error rates were less than or equal to 5.9 percent—well below the national average. FNS also has reduced the opportunity for payment errors by allowing the states to reduce food stamp reporting requirements for certain recipients. For example, FNS expanded the availability of waivers related to reporting requirements, such as the waiver that raises the threshold for the earned income changes that households must report. FNS was concerned that the increase in employment among food stamp households would result in larger and more frequent income fluctuations, which would increase the risk of payment errors. FNS also was concerned that the states’ reporting requirements were particularly burdensome for the working poor and may, in effect, act as an obstacle to their participation in the program because eligible households may not view food stamp benefits as worth the time and effort it takes to obtain them. 
As a result of these concerns, FNS established regulations in November 2000 that gave states the option to require food stamp households with earned income to report changes semiannually, unless a change would result in a household’s gross monthly income exceeding 130 percent of the monthly poverty income guideline. Finally, FNS has promoted initiatives to improve payment accuracy through the exchange of “best practices” information among states. Since 1996, FNS has compiled catalogs of states’ payment accuracy practices that provide information designed to help other states develop and implement similar initiatives. While imposing financial sanctions, offering incentives, and granting waivers related to food stamp reporting requirements can help states reduce payment errors, the 28 state officials we spoke with believed that simplifying the complex food stamp requirements for determining eligibility and calculating benefits offered the greatest potential for additional reductions in payment errors. In supporting simplification, the state officials generally cited caseworkers’ difficulty in correctly applying food stamp rules to determine eligibility and calculate benefits. Specifically, the state officials cited the need to simplify requirements for (1) determining a household’s deduction for excess shelter costs and (2) calculating a household’s earned and unearned income. The states also cited the need to simplify food stamp rules for determining the valuation of vehicles. The Food Stamp Act of 1977 was recently revised to allow the states to use the same vehicle valuation rules that they use for TANF, if these rules would result in fewer assets attributed to the household. Food stamp officials in 20 of the 28 states we contacted said simplifying the rules for determining a household’s allowable shelter deduction would be one of the best ways to reduce payment errors. 
The Food Stamp Program generally provides for a shelter deduction when a household’s monthly shelter costs exceed 50 percent of income after other deductions have been allowed. Allowable deductions include rent or mortgage payments, property taxes, homeowner’s insurance, and utility expenses. Food stamp officials in 18 states told us that simplifying the rules for earned income would be one of the best options for reducing payment errors because earned income is both the most common and the costliest source of payment errors. Generally, the process of determining earned income is prone to errors because caseworkers must use current earnings as a predictor of future earnings and the working poor do not have consistent employment and earnings. Similarly, officials in six states told us that simplifying the rules for unearned income would help reduce payment errors. In particular, state officials cited the difficulty caseworkers have in estimating child support payments that will be received during the certification period because payments are often intermittent and unpredictable. Because households are responsible for reporting changes in unearned income of $25 or more, unreported changes and child support payments often result in a payment error. In our view, simplifying the program’s rules and regulations offers an opportunity to, among other things, reduce payment error rates and promote program participation by eligible recipients. We have recommended that FNS develop and analyze options to simplify requirements for determining program eligibility and benefits and, if warranted, submit legislative proposals to simplify the program. As part of its preparations for the program’s upcoming reauthorization, FNS has begun to examine alternatives for improving the Food Stamp Program, including options to simplify requirements for determining benefits. 
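The excess shelter deduction rule described at the start of this passage can be expressed as a small function. This is a minimal sketch of only the rule as stated in the testimony; the figures are hypothetical, and program details the testimony does not describe, such as any caps on the deduction, are deliberately omitted.

```python
def excess_shelter_deduction(shelter_costs: float, income_after_other_deductions: float) -> float:
    """Return the shelter deduction as described in the testimony: the
    amount by which monthly shelter costs (rent or mortgage payments,
    property taxes, homeowner's insurance, and utility expenses) exceed
    50 percent of income after other deductions have been allowed.
    Details not stated in the testimony, such as caps, are omitted."""
    threshold = 0.5 * income_after_other_deductions
    return max(0.0, shelter_costs - threshold)

# Hypothetical household: $700 in monthly shelter costs against $1,000
# of income after other deductions yields a $200 deduction.
print(excess_shelter_deduction(700, 1000))   # 200.0
# Shelter costs at or below the 50 percent threshold yield no deduction.
print(excess_shelter_deduction(450, 1000))   # 0.0
```

Even in this stripped-down form, the calculation depends on correctly totaling several expense categories and the household's other deductions, which suggests why caseworkers find the rule error-prone.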
While payment errors affect whether food stamp recipients receive appropriate food stamp benefits, trafficking results in the improper use of benefits. In March 2000, FNS estimated that stores trafficked in about $660 million a year in food stamp benefits, or about 3-1/2 cents of every dollar of food stamp benefits issued. In the past, we have reported on federal efforts to identify storeowners who engage in trafficking, the amount of penalties assessed and collected against these storeowners, and states’ efforts to identify and disqualify recipients who engage in trafficking. We discovered the following: FNS does not sufficiently use electronic databases to identify storeowners who engage in trafficking. While FNS and USDA’s Office of Inspector General use a variety of sources, including EBT databases, to identify suspect traffickers, we have noted in various reports that electronic data could be used more effectively to identify additional storeowners and recipients engaged in trafficking. In addition, we found that most states with statewide EBT systems were not independently analyzing EBT data to identify recipients who may be trafficking in food stamp benefits. While FNS almost always assessed penalties against storeowners when its investigations showed they had violated the program’s requirements, storeowners generally did not pay the assessed financial penalties. According to agency officials, the small percentage of fines they are able to collect reflect the difficulties involved in collecting this type of debt, such as problems in locating debtors and their refusal to pay. However, we found that weaknesses in the agency’s debt collection procedures and practices also contributed to low collections. We made several recommendations to FNS on ways it could improve the integrity of the program by more effectively using EBT data, including providing guidelines to states on reviewing electronic data. FNS has begun to take steps to implement our recommendations. 
In addition, along with USDA’s Office of Inspector General, we recognize the importance of improving state use of EBT data and we are working with USDA to determine best practices for using these data to identify food stamp recipients and storeowners who may be defrauding or abusing the program. Participation in the Food Stamp Program has dropped by about 33 percent during the past 4-1/2 years. The monthly average number of participants declined from 25.5 million in fiscal year 1996 to about 17.1 million in the first half of fiscal year 2001. Although factors such as the strong U.S. economy and tighter eligibility requirements have been cited as primary reasons for the dramatic decline in food stamp participation in recent years, there remains a large gap between the number of people eligible to receive benefits and the number participating in the program. Some of this gap may be explained by other factors, such as past initiatives designed to reduce TANF caseloads, confusion about eligibility requirements after the passage of PRWORA, and administrative burdens placed on food stamp participants that might discourage participation. In 1999 we reported that the strong U.S. economy was one of the primary factors contributing to the decline in food stamp participation. Since more people were employed and earning more money, the number of people who met the program’s income eligibility standard decreased. In addition, the length of time some people spent on the food stamp rolls was reduced because they found new jobs more quickly. Finally, when the economy is strong, the percentage of eligible people participating in the program may be indirectly lowered. This is because, as households’ income levels rise and food stamp benefits fall proportionally, households may decide not to apply or seek recertification for these benefits, especially when they approach the minimum benefit level of $10 per month. 
We also reported that tighter food stamp eligibility requirements contributed to the decline in food stamp participation. Specifically, the passage of PRWORA tightened eligibility requirements for able-bodied adults without dependents and most permanent resident aliens, making fewer people eligible for food stamps. During fiscal year 1997, participation in the Food Stamp Program by these two groups fell by about 714,000 people, accounting for about 25 percent of the decline in food stamp participation that year. While some of the decline in participation can be explained by factors that reduce the overall number of people eligible to receive benefits, an increasing percentage of people eligible for food stamp benefits are not participating in the program. Specifically, FNS estimates that only about 59 percent of eligible people in the United States received food stamp benefits in September 1998—a 12 percentage-point drop from the estimated 71 percent of eligible people participating in September 1994. In addition, there is evidence to suggest that a growing gap exists between the number of children living in poverty—an important indicator of children’s need for food assistance—and the number of children receiving food stamp assistance. Between 1995 and 1999, the number of children receiving food stamp benefits declined by 33 percent, while the number of children living in poverty declined by only 17 percent. Further, during this same period of time, the number of children served free lunches in USDA’s National School Lunch Program increased by about 4 percent. In 1999, we reported that state and local initiatives designed to reduce the TANF caseloads contributed to the decline in their food stamp rolls. In several states and localities, FNS identified barriers to food stamp participation and policies that improperly removed eligible households with children from the food stamp rolls as a sanction for a TANF violation. 
This occurred, in part, because FNS had still not established regulations that implemented PRWORA’s revisions to the Food Stamp Act and the guidance it had already issued was considered nonbinding. In addition, we found that only three of FNS’ seven regional offices regularly conducted annual reviews of each state in their jurisdiction, even though FNS’ regulations require such reviews. FNS regional offices had not examined program access in nine states and the District of Columbia from October 1996 through June 1999. These reviews have previously identified obstacles, such as gaining access to benefits, which might inhibit individuals from participating in the program. To ensure that eligible people receive food stamp benefits, we recommended that FNS establish regulations requiring that the states (1) inform each applicant for assistance of the right to apply for food stamps during the first meeting and (2) limit sanctions on the food stamp benefits to only the individual—not the household—who does not comply with a welfare requirement. FNS established regulations in November 2000 and January 2001 that implemented both parts of our recommendation. We also recommended that FNS give higher priority to aggressively targeting obstacles related to participants’ access to food stamp benefits in reviewing states’ food stamp operations. Since our report was issued, FNS has conducted participant access reviews in each of the 50 states; Washington, D.C.; and the Virgin Islands. A 1999 report noted that most nonparticipating households estimated to be eligible for food stamp participation—including those who had previously participated in the program—did not apply because they did not think they were eligible. The food stamp directors of four FNS regional offices agreed that implementation of TANF has been an important factor in the decline in participation in their regions. 
According to these directors, many people do not apply for food stamps because they assume that if they are ineligible for TANF, they are also ineligible for food stamps. We recommended that FNS publicize eligibility requirements for the Food Stamp Program and distinguish them from the eligibility requirements for TANF. Soon after that recommendation was made, FNS launched a public awareness campaign to better publicize food stamp eligibility requirements in hopes of improving participation. In addition, recent research has indicated that some eligible households may not participate in the Food Stamp Program because of the perceived difficulty of doing so. Specifically, those who were aware they were eligible for food stamps but chose not to participate most often cited reasons related to the administrative burden of applying, such as the time and costs involved. One survey found that, on average, applicants spent nearly 5 hours and made at least two trips to the local food stamp office to apply for food stamps. If states are increasing the frequency with which certain types of households must be recertified to reduce the likelihood of payment errors, program participation may be inhibited because of the additional reporting burdens for food stamp recipients. FNS and the states have taken actions to reduce fraud, waste, and abuse in the Food Stamp Program. Our past work has found that FNS and the states need to make better use of electronic data to track individuals and storeowners who may be trafficking in food stamp benefits. We also found that financial sanctions and enhanced funding have been at least partially successful in focusing states’ attention on minimizing payment errors. However, this “carrot and stick” approach can accomplish only so much. Food stamp regulations for determining eligibility and benefits are extremely complex and their application is inherently error-prone and costly to administer. 
Furthermore, this approach, carried to extremes, can create incentives for states to take actions that may inhibit achievement of one of the agency’s basic missions—providing food assistance to those who are in need. For example, increasing the frequency with which recipients must report income changes could decrease errors, but it could also have the unintended effect of discouraging participation by the eligible working poor. This would run counter not only to FNS’ basic mission but also to an overall objective of welfare reform—helping people move successfully from public assistance into the workforce. Simplifying the Food Stamp Program’s rules and regulations offers an opportunity to reduce payment error rates and promote program participation by eligible recipients. FNS has begun to look at options for simplifying requirements for determining benefits. However, in view of the upcoming reauthorization, it is critical that FNS follow through with this process and develop options that strike an appropriate balance between the sometimes competing objectives of ensuring program integrity and encouraging eligible individuals to participate. To be successful, this process must include a continuing dialogue with all appropriate stakeholders, including congressional members and state officials, and must ensure that actions are taken to streamline the program while at the same time improving program integrity. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or other members of the Subcommittee may have. For future contacts regarding this testimony, I can be contacted at (202) 512-7215. Key contributors to this testimony were Dianne Blank, Elizabeth Morrison, Debra Prescott, and Suzanne Lofhjelm. (130054)
In fiscal year 1986, Congress directed DOD to destroy the U.S. stockpile of lethal chemical agents and munitions. DOD designated the Department of the Army as its executive agent for the program, and the Army established the Chemical Demilitarization (or Chem-Demil) Program, which was charged with the destruction of the stockpile at nine storage sites. Incineration was selected as the method to destroy the stockpile. In 1988, the Chemical Stockpile Emergency Preparedness Program (CSEPP) was created to enhance the emergency management and response capabilities of communities near the storage sites in case of an accident; the Army and the Federal Emergency Management Agency (FEMA) jointly managed the program. In 1997, consistent with congressional direction, the Army and FEMA clarified their CSEPP roles by implementing a management structure under which FEMA assumed responsibility for off-post (civilian community) program activities, while the Army continued to manage on-post chemical emergency preparedness. The Army provides CSEPP funding to FEMA, which is administered via grants to the states and counties near where stockpile sites are located in order to carry out the program’s off-post activities. Agent destruction began in 1990 at Johnston Atoll in the Pacific Ocean. Subsequently, Congress directed DOD to evaluate the possibility of using alternative technologies to incineration. In 1994, the Army initiated a project to develop nonincineration technologies for use at the two bulk-agent only sites at Aberdeen, Maryland, and Newport, Indiana. These sites were selected in part because their stockpiles were relatively simple—each site had only one type of agent and this agent was stored in bulk-agent (ton) containers. In 1997, DOD approved pilot testing of a neutralization technology at these two sites. 
Also in 1997, Congress directed DOD to evaluate the use of alternative technologies and suspended incineration planning activities at two sites with assembled weapons in Pueblo, Colorado, and Blue Grass, Kentucky. Furthermore, Congress directed that these two sites be managed in a program independent of the Army’s Chem-Demil Program and report to DOD instead of the Army. Thus, the Assembled Chemical Weapons Assessment (ACWA) program was established. The nine sites, the types of agent, and the percentage of the original stockpiles are shown in table 1. In 1997, the United States ratified the CWC, which prohibits the use of these weapons and mandates the elimination of existing stockpiles by April 29, 2007. A CWC provision allows extensions of up to 5 years to be granted. The CWC also contains a series of interim deadlines applicable to the U.S. stockpile (see table 2). The United States met the 1 percent interim deadline in September 1997 and the 20 percent interim deadline in July 2001. As of June 2003, the Army was reporting that a total of about 26 percent of the original stockpile had been destroyed. Three other countries (referred to as states parties)—India, Russia, and one other country—have declared chemical weapons stockpiles and are required to destroy them in accordance with CWC deadlines as well. As of April 2003, two of these three countries (India and one other country) had met the 1 percent interim deadline to destroy their stockpiles. Of the three countries, only India met the second (20 percent) interim deadline to destroy its stockpile by April 2002. However, Russia, with the largest declared stockpile—over 40,000 tons—did not meet the 1 percent or the 20 percent interim deadlines, and only began destroying its stockpile in December 2002. In 2001, Russia requested a 5-year extension to the 2007 deadline. Russia did destroy 1 percent of its stockpile by April 2003, but it is doubtful that Russia will meet even the extended 2012 deadline if the extension is granted. 
Traditionally, management and oversight responsibilities for the Chem-Demil Program reside primarily within three levels at DOD—the Under Secretary of Defense (Acquisition, Technology, and Logistics) who is the Defense Acquisition Executive for the Secretary of Defense, the Assistant Secretary of the Army (Acquisition, Logistics, Technology) who is the Army Acquisition Executive for the Army, and the Program Manager for Chemical Demilitarization—because it is a major defense acquisition program. In addition to these offices, since August 2002, the Deputy Assistant to the Secretary of Defense (Chemical Demilitarization and Threat Reduction), has served as the focal point responsible for oversight, coordination, and integration of the Chem-Demil Program. In May 2001, in response to program cost, schedule, and management concerns, milestone decision authority was elevated to the Under Secretary of Defense (Acquisition, Technology, and Logistics). DOD stated that this change would streamline future decision making and increase program oversight. DOD indicated that the change was also consistent with the size and scope of the program, international treaty obligations, and the level of local, state, and federal interest in the safe and timely destruction of the chemical stockpile. In September 2001, after more than a yearlong review, DOD revised the program’s schedule milestones for seven of the nine sites and the cost estimates for all nine sites. These milestones represent the target dates that each site is supposed to meet for the completion of critical phases of the project. The phases include design, construction, systemization, operations, and closure. (Appendix II describes these phases and provides the status of each site.) The 2001 revision marked the third time the program extended its schedule milestones and cost estimates since it became a major defense acquisition program in 1994. 
The 2001 revision also pushed the milestones for most sites several years beyond the previous 1998 schedule milestones and, for the first time, beyond the April 2007 deadline contained in the CWC. Table 3 compares the 1998 and 2001 schedule milestones for starting and finishing agent destruction operations at the eight sites with chemical agent stockpiles in 2001. The planned agent destruction completion date at some sites was extended over 5 years. DOD extended the schedule milestones to reflect the Army’s experience at the two sites—Johnston Atoll and Tooele—that had begun the destruction process prior to 2001. It found that previous schedule milestones had been largely based on overly optimistic engineering estimates. Lower destruction rates stipulated by environmental regulators, and increased time needed to change the facility’s configuration when switching between different types of chemical agents and weapons, meant destruction estimates needed to be lengthened. Moreover, experience at Johnston Atoll, which began closure activities in 2000, revealed that previous closure estimates for other sites had been understated. In addition, DOD’s Cost Analysis Improvement Group modified the site schedules based on a modeling technique that considered the probabilities of certain schedule activities taking longer than anticipated. In particular, the group determined that the operations phase, where agent destruction takes place, has the highest probability for schedule delays and lengthened that phase the most. Because the costs of the program are directly related to the length of the schedule, DOD also increased the projected life-cycle costs, from $15 billion in 1998 to $24 billion in 2001 (see fig. 1). 
In December 2001, after the program schedule and costs were revised, the Army transferred primary program oversight from the Office of the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) to the Office of the Assistant Secretary of the Army (Installations and Environment). According to the Army, this move streamlined responsibilities for the program, which were previously divided between these two offices. In January 2003, the Army reassigned oversight responsibilities to the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) for all policy and direction for the Chem-Demil Program and CSEPP. The Secretary of the Army also directed the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) and the Commanding General, U.S. Army Materiel Command, to jointly establish an agency to perform the chemical demilitarization as well as the chemical weapons storage functions. In response to this directive, the Army announced the creation of a new organization—the Chemical Materials Agency (CMA)—which will merge the demilitarization and the storage functions. During this transition process, the Program Manager for Chemical Demilitarization was redesignated as the Program Manager for the Elimination of Chemical Weapons and will report to the Director of CMA and have responsibility for each site through the systemization phase. The Director for Operations will manage the operations and closure phases. As of June 2003, the Program Manager for the Elimination of Chemical Weapons was providing day-to-day management for the sites at Anniston, Umatilla, Newport, and Pine Bluff; the Director for Operations was providing day-to-day management for the sites at Tooele, Aberdeen, and Johnston Atoll, and the Program Manager, ACWA, was managing the sites at Pueblo and Blue Grass. Since 1990, we have issued a number of reports that have focused on management, cost, and schedule issues related to the Chem-Demil Program. 
For example, in a 1995 testimony we cited the possibility of further cost growth and schedule slippage due to environmental requirements, public opposition to the baseline incineration process, and lower than expected disposal rates. We also testified that weaknesses in financial management and internal control systems had hampered program results and that alternative technologies were unlikely to mature quickly enough to meet CWC deadlines. In 1995, we noted that the emergency preparedness program had been slow to achieve results and that communities were not fully prepared to respond to a chemical emergency. In 1997, we found high-level management attention was needed at the Army and FEMA to clearly define management roles and responsibilities. In 2001, we found that the Army and FEMA needed a more proactive approach to improve working relations with CSEPP states and local communities and to assist them in preparing budgets and complying with program performance measures. In 2000, we found that the Chem-Demil Program was hindered by its complex management structure and ineffective coordination between program offices. We recommended that the Secretary of Defense direct the Secretary of the Army to clarify the management roles and responsibilities of program participants, assign accountability for achieving program goals and results, and establish procedures to improve coordination among the program’s various elements and with state and local officials. A detailed list of these reports and other products is included in Related GAO Products at the end of this report. Despite recent efforts to improve the management and streamline the organization of the Chem-Demil Program, the program continues to falter because several long-standing leadership, organizational, and strategic planning weaknesses remain unresolved. The absence of sustained leadership confuses decision-making authority and obscures accountability. 
In addition, the Army’s recent reorganization of the program has not reduced its complex organization nor clarified the roles and responsibilities of various entities. For example, CMA reports to two different offices with responsibilities for different phases of the program, and the reorganization left the management of CSEPP divided between the Army and FEMA. The ACWA program continues to be managed outside of the Army as directed by Congress. Finally, the lack of an overarching, comprehensive strategy has left the Chem-Demil Program without a top-level road map to guide and monitor the program’s activities. The absence of effective leadership, streamlined organization, and important management tools, such as strategic planning, creates a barrier to the program accomplishing the safe destruction of the chemical stockpile and staying within schedule milestones, thereby raising program costs. The Chem-Demil Program has experienced frequent shifts in leadership providing oversight, both between DOD and the Army and within the Army, and frequent turnover in key program positions. These shifts have led to confusion among participants and stakeholders about the program’s decision making and have obscured accountability. For example, program officials were not consistent in following through on promised initiatives, and some initiatives were begun but not completed. Also, when leadership responsibilities changed, new initiatives were often introduced and old initiatives were abandoned, obscuring accountability for program actions. The program has lacked sustained leadership above the program level, as demonstrated by the multiple shifts of oversight responsibility between DOD and the Army, which have undermined consistent decision making. The leadership responsible for oversight has shifted between the Army and DOD three times during the past two decades, with the most recent change occurring in 2001. Table 4 summarizes these changes. 
As different offices took over major decision authority, program emphasis frequently shifted: initiatives were pursued but not completed, consistency among initiatives was not maintained, and responsibility for decisions changed hands. For example, we reported in August 2001 that the Army and FEMA had addressed some management problems in how they coordinated emergency preparedness activities after they had established a memorandum of understanding to clarify roles and responsibilities related to CSEPP. However, according to FEMA officials, DOD did not follow the protocols for coordination as agreed upon with the Army when making decisions about emergency preparedness late in 2001. This led to emergency preparedness items being funded without adequate plans for distribution, which delayed the process. These changes in oversight responsibilities also left the stakeholders in the states and local communities uncertain as to the credibility of federal officials. Leadership responsibilities for the program within the Army have also transferred three times from one assistant secretary to another (see table 5). During this time, there were numerous CSEPP issues on which the Army took positions with which FEMA did not concur. For example, in August 2002, Assistant Secretary of the Army (Installations and Environment) officials committed to funding nearly $1 million to study building an emergency operations center for a community near Umatilla, with additional funds to be provided later. Since the program shifted to the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) in 2003, program officials have been reconsidering this commitment. The problem of the Army and FEMA not speaking with one voice led to confusion among state and local communities. 
Further, dual or overlapping authority by the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) and the Assistant Secretary of the Army (Installations and Environment) in 2001 was not clarified. Without clear lines of authority, one office took initiatives without consulting the other. As a result, stakeholders were unclear whether initiatives were valid. In addition to these program shifts, the Deputy Assistant Secretary of the Army (Chemical Demilitarization)—an oversight office moved from DOD to the Army in 1998—reported to the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) from 1998 until 2001, then to the Assistant Secretary of the Army (Installations and Environment) until 2003, and now again to the Assistant Secretary of the Army (Acquisition, Logistics, and Technology). These repeated shifts in the oversight office responsible for programmatic decisions left stakeholders confused about the office’s oversight role and about the necessity of the funding requests it made. As a result, the accumulation of excess funding ultimately caused Congress to cut the program’s budget. The Chem-Demil Program has experienced a number of changes and vacancies in key program leadership positions, which has obscured accountability. This issue is further compounded, as discussed later, by the lack of a strategic plan to provide an agreed upon road map for officials to follow. Within the Army, three different officials have held senior leadership positions since December 2001. In addition, five officials have served as the Deputy Assistant Secretary of the Army (Chem-Demil) during that time. The program manager’s position remained vacant for nearly 1 year, from April 2002 to February 2003, before being filled. However, in June, after only 4 months, the program manager resigned and the Army named a replacement. 
Frequent shifts in key leadership positions led to several instances where this lack of continuity affected decision making and obscured accountability. For example, in June 2002, a program official promised to support future funding requests for emergency preparedness equipment from a community, but his successor did not fulfill this promise. The unfulfilled promise led communities to submit several funding requests that were not supported. The lack of leadership continuity makes it unclear who is accountable when commitments are made but not implemented. Moreover, when key leaders do not remain in their positions long enough to develop the needed long-term perspective on program issues or to effectively follow through on program initiatives, it is easy for them to deny responsibility for previous decisions and avoid current accountability. The recent reorganization by the Army has not streamlined the program’s complex organization or clarified roles and responsibilities. For example, the Director of CMA will now report to two different senior Army organizations, which is one more than under the previous structure. This divided reporting approach is still not fully developed, but it may adversely affect program coordination and accountability. The reorganization has also divided the responsibility for various program phases between two offices within CMA. One organization, the Program Manager for the Elimination of Chemical Weapons, will manage the first three phases for each site, and a newly created organization, the Director of Operations, will manage the final two phases. This reorganization changes the cradle-to-grave management approach that was used to manage sites in the past and has blurred responsibilities for officials who previously provided support in areas such as quality assurance and safety. Moreover, the reorganization did not address two program components—community-related CSEPP and ACWA. CSEPP will continue to be jointly managed with FEMA. 
ACWA, as congressionally directed, will continue to be managed separately from the Army by DOD. During the transition process, no implementation plan was promulgated when the new organization was first announced in January 2003. As of June 2003, the migration of roles and responsibilities formerly assigned to the office of the Program Manager for Chemical Demilitarization into the new CMA had not been articulated. For example, several key CMA officials who had been part of the former program office told us that they were unsure of their new roles within CMA and the status of ongoing program initiatives. Furthermore, past relationships and responsibilities among former program offices and site activities have been disrupted. Although the establishment of CMA with a new directorate responsible for operations at Tooele and Aberdeen is underway, former program office staff told us they did not know how this new organization would manage the sites in the future. While DOD and the Army have issued numerous policies and guidance documents for the Chem-Demil Program, they have not developed an overarching, comprehensive strategy or an implementation plan to guide the program and monitor its progress. Leading organizations embrace principles for effectively implementing and managing programs. Some key aspects of this approach include promulgating a comprehensive strategy that includes the mission, long-term goals, and methods to accomplish these goals, along with an implementation plan that includes annual performance goals, measurable performance indicators, and evaluation and corrective action plans. According to DOD and Army officials, the Chem-Demil Program relies primarily on guidance and planning documents related to the acquisition process. 
For example, the former program manager drafted several documents, such as the Program Manager for Chemical Demilitarization’s Management Plan and Acquisition Strategy for the Chemical Demilitarization Program, as the cornerstone of his management approach. Our review of these and other key documents showed that they did not encompass all components of the program or other nonacquisition activities. Some documents had various elements, such as a mission statement, but they were not consistently written. None contained all of the essential elements expected in a comprehensive strategy nor contained aspects needed for an implementation plan, such as an evaluation and corrective action plan. Further, all documents were out of date and did not reflect recent changes to the program. DOD and Army officials stated that the program’s strategy would be articulated in the updated program’s acquisition strategy to be completed by the new Director of CMA. According to the draft acquisition strategy, the focus is to acquire services, systems, and equipment. Again, this approach does not address all components of the Chem-Demil Program, such as CSEPP and ACWA. More importantly, a strategic plan would ensure that all actions support overall program goals as developed by the appropriate senior-level office with oversight responsibility for the program. An implementation plan would define the steps the program would take to accomplish its mission. Further, a strategy document, coupled with an implementation plan, would clarify roles and responsibilities and establish program performance measurements. Together, these documents would provide the foundation for a well-managed program to provide continuity of operations for program officials to follow. The program continues to miss most milestones, following a decade long trend. 
Nearly all of the incineration sites will miss the 2001 schedule milestones because of substantial delays during their systemization (equipment testing) or operations (agent destruction) phases. Delays at sites using incineration stem primarily from a number of problems that DOD and the Army have not been able to anticipate or control, such as concerns involving plant safety, difficulties in meeting environmental permitting requirements, public concerns about emergency preparedness plans, and budget shortfalls. The neutralization sites have not missed milestones yet but have experienced delays as well. DOD and the Army have not developed an approach to anticipate and address potential problems that could adversely affect program schedules, costs, and safety. Neither DOD nor the Army has adopted a comprehensive risk management approach to mitigate potential problems. As a result, the Chem-Demil Program will have a higher level of risk of missing its schedule milestones and CWC deadlines, incurring rising costs, and unnecessarily prolonging the potential risk to the public associated with the storage of the chemical stockpile. Most incineration sites will miss important milestones established in 2001 due to schedule delays. For example, delays at Anniston, Umatilla, and Pine Bluff have already resulted, or will result, in their missing the 2001 schedule milestones to begin chemical agent destruction operations (operation phase). Johnston Atoll will miss its schedule milestone for shutting down the facility (closure phase). The Tooele site has not missed any milestones since the 2001 schedule was issued; however, the site has undergone substantial delays in destroying its stockpile, primarily due to a safety-related incident in July 2002. If additional delays occur at the Tooele site, it could miss its next milestone as well. Table 6 shows the status of the incineration sites that will miss 2001 schedule milestones. 
The delays at the incineration sites have resulted from various long-standing issues, which the Army has not been able to effectively anticipate or control because it does not have a process to identify and mitigate them. An effectively managed program would have an approach, such as a lessons learned process, to identify and mitigate such issues. Although the program now has extensive experience with destroying agents at two sites, the Chem-Demil Programmatic Lessons Learned Program has been shifted to individual contractors from a centralized headquarters effort. In September 2002, we reported on the effectiveness of the centralized lessons learned program and found it to be generally effective but in need of improvement and expansion. With the program decentralized, it is uncertain how knowledge will be leveraged between sites to avoid or lessen potential delays due to issues that have previously occurred. In addition, program officials told us that they were concerned that lessons from the closure at Johnston Atoll were not being captured and saved for future use at other sites. Many delays have resulted from incidents during operations, environmental permitting, community protection, and funding issues. This continues a trend we identified in previous reports on the program. The following examples illustrate some of the issues that have caused delays at incineration sites since 2001: Incidents during operations: Agent destruction operations at Tooele were suspended from July 2002 to March 2003 because of a chemical incident involving a plant worker who came into contact with a nerve agent while performing routine maintenance. Subsequent investigations determined that this event occurred because some procedures related to worker safety were either inadequate or not followed. A corrective action plan, which required the implementation of an improved safety plan, was instituted before operations resumed. 
Since it resumed operations in March 2003, Tooele has experienced several temporary shutdowns. (These shutdowns are discussed further in app. II.) Environmental permitting: The start of agent destruction operations at the Umatilla and Anniston sites has been delayed because of several environmental permitting issues. Delays at the Umatilla site have resulted from several unanticipated engineering changes related to reprogramming software and design changes that required permit modifications. An additional delay occurred at the Umatilla site when the facility was temporarily shut down in October 2002 by state regulators because furnaces were producing unexpectedly high amounts of heavy metals during surrogate agent testing. The testing was suspended until a correction could be implemented. Delays at the Anniston site occurred because state environmental regulators did not accept test results for one of the furnaces because the subcontractor did not follow state permit-specified protocols. Community protection: Destruction operations at the Anniston site have been delayed because of concerns about emergency preparedness for the surrounding communities. These concerns included the inadequacy of protection plans for area schools and for special needs residents. Although we reported on this issue in July 1996 and again in August 2001, and a senior DOD official identified it as a key concern in September 2001, the Army was unable to come to a satisfactory resolution with key state stakeholders prior to the planned January 2003 start date. As of June 2003, negotiations were still ongoing between the Army and key public officials to determine when destruction operations could begin. Funding: Systemization and closure activities were delayed at the Pine Bluff and Johnston Atoll sites, respectively, because program funds planned for demilitarization were redirected in fiscal year 2002 by DOD to pay $40.5 million for additional community protection equipment for Anniston. 
This was an unfunded budget expense, and the Army reduced funds for the Pine Bluff site by $14.9 million, contributing to construction and systemization milestones slipping 1 year. The Pine Bluff site was selected because the loss of funding would not delay the projected start of operations during that fiscal year. Program officials told us that the total program cost of this schedule slip would ultimately be $90 million. Additionally, funds were reduced for the Johnston Atoll site by $25.1 million because it was in closure. According to an Army official, delays increase program costs by approximately $250,000 to $300,000 a day, or about $10 million per month. Since 2001, delays have caused cost increases of $256 million at the incineration sites shown in table 7. Due to the delays, the Army is in the process of developing new milestones that would extend beyond those adopted in 2001. According to an Army official, the program will use events that have occurred since 2001 to present new cost estimates to DOD in preparation for the fiscal year 2005 budget submission. Program officials told us that they estimate costs have already increased $1.2 billion. This estimated increase is likely to rise further as additional factors are considered. The two bulk-agent-only sites, Aberdeen and Newport, have experienced delays but have not breached their milestones. The schedules were revised in response to concerns about the continued storage of the chemical stockpile after the events of September 11, 2001. In 2002, DOD approved the use of a modified process that will accelerate the rate of destruction at these two sites. For example, the Army estimates that the modified process will reduce the length of time needed to complete destruction of the blister agent stockpile at Aberdeen from 20 months to 6 months.
The Army estimates that this reduction, along with other changes, such as the off-site shipping of a waste byproduct, will reduce the scheduled end of operations by 5 years, from 2008 to 2003. Similarly, projections for agent destruction operations at Newport were reduced from 20 months to 7 months, and the destruction end date moved up from 2009 to 2004. While the Aberdeen site did begin destruction operations, as of June 2003, it had achieved a peak rate of only 2 containers per day, far less than the projected peak daily rate of 12, and had experienced unanticipated problems removing residual agent from the containers. After 2 months of processing, Army officials said the site had initially processed 57 of the 1,815 containers in Aberdeen's stockpile and will have to do additional processing of these containers because of a higher than anticipated amount of hardened agent. Even if the peak daily rate of 12 is achieved, the site will not meet the October 2003 Army estimate. At the Newport site, construction problems will delay the start of operations, missing the program manager's October 2003 estimate for starting agent destruction operations. Another possible impediment to starting operations involves the program's efforts to treat the waste byproduct at a potential off-site disposal facility in Ohio. These efforts have met resistance from some community leaders and residents near the potential disposal site. If the Army is unable to use an off-site facility, the disposal may have to be done on site, requiring the construction of a waste byproduct treatment facility, further delaying operations and increasing costs. Schedule milestones were not adopted for the Pueblo and Blue Grass sites in the 2001 schedule because DOD had not selected a destruction technology. Subsequently, DOD selected destruction technologies for these sites; however, these decisions were made several months beyond the dates estimated in 2001.
For example, while program officials indicated that the technology decision for the Kentucky site would be made by September 2002, the decision was not made until February 2003. Significantly, DOD announced initial schedule milestones for these two sites that extend beyond the CWC's extended deadline of April 2012. According to DOD officials, these schedules are preliminary and will be reevaluated after the selected contractors complete their initial design of the facilities. Plans for these sites are immature, and changes are likely to occur as they move closer to the operations phase, which is still at least several years away. DOD and the Army have not implemented a comprehensive risk management approach that would proactively anticipate and influence issues that could adversely affect the program's progress. The program manager's office drafted a risk management plan in June 2000, but the plan has not been formally approved or implemented. According to program officials, a prior program official drafted the plan and subsequent officials did not approve or further develop it. The draft plan noted that DOD's acquisition rules require program managers to establish a risk management plan to identify and control risk related to performance, cost, and schedule. Such a plan would allow managers to systematically identify, analyze, and influence risk factors and could help keep the program within its schedule and cost estimates. DOD and Army officials have given several reasons for not having an overall risk management plan. A DOD official indicated that the approach used to address program problems has been crisis management, which has forced DOD to react to issues rather than control them. The deputy program manager stated that the program's focus has been on managing individual sites by implementing initiatives to improve contractor performance as it relates to safety, schedule, and cost.
The official also said that establishing a formal, integrated risk management plan has not been a priority. An official from the program manager's office said, however, that the infrastructure is in place to finalize an integrated risk management plan by October 2003, which coincides with the date CMA takes over leadership of the program. Given the transition the organization is undergoing, the status of this effort is uncertain. The Army defines its risk management approach as a process for identifying and addressing internal and external issues that may have a negative impact on the program's progress. A risk management approach has five basic steps, which assist program leaders in making effective decisions for better program outcomes. Simply stated, the first step is to identify those issues that pose a risk to the program; for example, a problem in environmental permitting can significantly delay the program schedule. The second step is to analyze the risks identified and prioritize them using established criteria. The third step is to create a plan of action to mitigate the prioritized risks in order of importance. The fourth step is to track and validate the actions taken. The last step is to review and monitor the outcomes of the actions taken to ensure their effectiveness. Additional remedies may be needed if actions are not successful or the risks have changed. Risk management is a continuous, dynamic process and must become a regular part of the leadership decision process. Without such an approach, the Chem-Demil Program will continue to manage by addressing issues as they arise rather than by developing strategies or contingency plans to meet program issues. As the program's complexity increases with new technologies and more active sites, a comprehensive risk management approach, as the acquisition regulations require, would facilitate program success and help control costs.
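The five-step cycle described above can be sketched as a minimal risk register. The step names follow the report; the data structure, field names, and 1-to-5 scoring scheme are illustrative assumptions of ours, not the Army's actual criteria.

```python
# Minimal sketch of the five-step risk management cycle: identify,
# analyze/prioritize, plan mitigation, track actions, review.
# Field names and the likelihood-times-impact score are assumptions.
from dataclasses import dataclass, field

@dataclass
class Risk:
    issue: str                 # step 1: identified issue
    likelihood: int            # step 2 inputs (1-5 scales assumed)
    impact: int
    mitigation: str = ""       # step 3: planned mitigating action
    actions: list = field(default_factory=list)  # step 4: actions tracked
    resolved: bool = False     # step 5: review outcome

    @property
    def score(self) -> int:
        # step 2: a simple priority score; the scheme is our assumption
        return self.likelihood * self.impact

def prioritize(risks):
    """Step 2 (continued): order unresolved risks, highest score first."""
    return sorted((r for r in risks if not r.resolved),
                  key=lambda r: r.score, reverse=True)

# Example entries drawn from delay causes named in this report
register = [
    Risk("Environmental permit modifications delay systemization", 4, 5,
         mitigation="Engage state regulators early on design changes"),
    Risk("Community emergency-preparedness concerns delay start-up", 3, 4,
         mitigation="Negotiate protection plans with state stakeholders"),
]
top = prioritize(register)[0]
print(top.issue, top.score)  # the permitting risk ranks first (score 20)
```

In practice, steps 4 and 5 would update `actions` and `resolved` as mitigations are executed and reviewed, feeding back into re-prioritization; the point of the sketch is only that the cycle is a repeatable data-driven loop rather than ad hoc crisis response.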
Such a proactive approach would allow the program to systematically identify, analyze, and manage the risk factors that could hamper its efforts to destroy the chemical stockpile and help keep it within its schedule and cost estimates. For more than a decade, the Chem-Demil Program has struggled to meet schedule milestones—and control the enormous costs—for destroying the nation’s chemical weapons stockpile. The program will also miss future CWC deadlines. Despite several reorganizations of its complex structure, the program continues to flounder. Program leadership at both the oversight and the program manager levels has shifted frequently, contributing to the program’s continued instability, ineffective decision making, and weak accountability. The repeated realignments of the program have done little to resolve its awkward, hydra-like structure in which roles and responsibilities continue to be poorly defined, multiple lines of authority exist, and coordination between various entities is poor. These shifts and realignments have taken place without the benefit of a comprehensive strategy and an implementation plan that could help the program clearly define its mission and begin working toward its goals effectively. If the program had these key pillars, such as a strategy to guide it from its inception and an implementation plan to track performance, it would be in a better position to achieve desired outcomes. The program will have a low probability of achieving its principal goal of destroying the nation’s chemical weapons stockpile in a safe manner within the 2001 schedule unless DOD and Army leadership take immediate action to clearly define roles and responsibilities throughout the program and implement an overarching strategic plan. The Chem-Demil Program is entering a crucial period as more of its sites move into the operations phase. 
As this occurs, the program faces potentially greater challenges than it has already encountered, including the possibilities of growing community resistance, unanticipated technical problems, and serious site incidents. Unless program leadership is proactive in identifying potential internal and external issues and preparing for them, or in reducing the chances that they will occur, the program remains at great risk of failing to meet its scheduled milestones and the deadlines set by the CWC. These problems, and subsequent delays, are likely to continue plaguing the program unless it is able to incorporate a comprehensive risk management system into its daily routine. Such a proactive approach would allow the program to systematically identify, analyze, and manage the risk factors that could hamper its efforts to destroy the chemical stockpile and help keep it within its schedule and cost estimates. Without the advantage of a risk management tool, the program will continue to be paralyzed by delays caused by unanticipated issues, resulting in spiraling program costs and missed deadlines that prolong the dangers of the chemical weapons stockpile to the American public. We recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics, in conjunction with the Secretary of the Army, to develop an overall strategy and implementation plan for the chemical demilitarization program that would: articulate a program mission statement; identify the program's long-term goals and objectives; delineate the roles and responsibilities of all DOD and Army offices; establish near-term performance measures; and implement a risk management approach that anticipates and influences internal and external factors that could adversely impact program performance. In written comments on a draft of this report, DOD concurred with our recommendations.
In concurring with our recommendation to develop an overall strategy and implementation plan, DOD stated that it is in the initial stages of developing such a plan and estimates that it will be completed in fiscal year 2004. In concurring with our recommendation to implement a risk management approach, DOD stated that the CMA will review the progress of an evaluation of several components of its risk management approach within the next 120 days. At that time, DOD will evaluate the outcome of this review and determine any appropriate action. We believe these actions should improve program performance, provided that DOD's plan incorporates a clearly articulated mission statement, long-term goals, a well-delineated assignment of roles and responsibilities, and near-term performance measures, and that the Army's review of its risk management approach focuses on anticipating and influencing internal and external factors that could adversely impact the Chem-Demil Program. DOD's comments are printed in appendix III. DOD also provided technical comments that we incorporated where appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Secretary of the Army; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. For any questions regarding this report, please contact me at (512) 512-6020. Key contributors to this report were Donald Snyder, Rodell Anderson, Bonita Oden, John Buehler, Pam Valentine, Steve Boyles, Nancy Benco, and Charles Perdue. This report focuses on the Chemical Demilitarization (Chem-Demil) Stockpile Program, one of the components of the Chem-Demil program.
Other components, such as the Chemical Stockpile Emergency Preparedness Program, were only discussed to determine their effects on the destruction schedule. To determine whether recent changes in the stockpile program's management and oversight have been successful in improving program progress, we interviewed numerous officials and reviewed various documents. Through a review of previous and current organizational charts, we noted a number of changes in the program from 1986 to the present. We interviewed Department of Defense (DOD) and Army officials to determine what effect organizational changes and management initiatives had on the program and to determine if a strategic plan had been developed to manage the program. We identified organizational changes between DOD and the Army, determined the rationale for changes, and ascertained the effect of these changes on program performance. We reviewed Defense Acquisition System directives to determine the roles and responsibilities of DOD and the Army in managing the Chemical Demilitarization Program. We assessed the Chem-Demil Program's Acquisition Strategy and Management and Program Performance plans to identify elements of a strategic plan and evaluated and compared them to the general tenets and management principles embraced by the Government Performance and Results Act. Additionally, we interviewed Office of Management and Budget officials to discuss their assessment of the program's performance and its adherence to a results-oriented management approach, and we reviewed DOD directives and regulations to determine the criteria for strategic planning. To determine the progress that DOD and the Army have made in meeting revised 2001 cost and schedule estimates and Chemical Weapons Convention (CWC) deadlines, we interviewed relevant program officials and reviewed a number of documents.
We reviewed the Army’s current program office estimates to destroy the chemical weapons stockpile and weekly and monthly destruction schedules to understand how sites will perform and synchronize activities to meet milestones. We interviewed DOD’s Cost Analysis Improvement Group to determine how DOD developed estimates for the 2001 milestone schedules for each site. However, we did not independently evaluate the reliability of the methodology the Cost Analysis Improvement Group used to develop its estimate. Further, we interviewed program officials to determine the status of the destruction process at incineration and neutralization sites and the impact of delays on schedule and cost. We reviewed Selected Acquisition Reports and Acquisition Program Baselines to identify the increase in program cost estimates in 1998 and 2001 and to determine the relationship between changes to schedule milestones and increased program cost. Our analysis identified the effect that schedule delays would have on milestones at incineration and neutralization sites, as well as the types of schedule delays and their impact on program cost. Through interviews with program officials, we discussed the status of factors that increase program life-cycle cost estimates. We examined the Chem-Demil Program’s draft risk management plans to determine if the Army had developed a comprehensive risk management approach to address potential problems that could adversely affect program schedules, cost, and safety. Through an analysis of other risk management plans, we identified elements of a risk management process. We reviewed CWC documents to determine deadlines for the destruction of the chemical weapons stockpile. We interviewed program officials to discuss the potential implications of not meeting interim milestones and CWC deadlines.
During the review, we visited and obtained information from the Office of the Secretary of Defense; the Assistant Secretaries of the Army (Installations and Environment) and (Acquisition, Logistics, and Technology); the Office of Management and Budget; the Department of State; the Federal Emergency Management Agency; and the DOD Inspector General in Washington, D.C., and met with the Director of the Chemical Materials Agency and the Program Managers for Chemical Demilitarization and Assembled Chemical Weapons Assessment in Edgewood, Maryland. We also met with project managers, site project managers, state environmental offices, and contractors associated with disposal sites in Aberdeen, Maryland; Anniston, Alabama; Umatilla, Oregon; and Pine Bluff, Arkansas. We also interviewed Federal Emergency Management Agency officials concerning funding of emergency preparedness program activities. We conducted our review from August 2002 to June 2003 in accordance with generally accepted government auditing standards. When developing schedules, the Army divides the demilitarization process into five major phases: facility design, construction, systemization, operations, and closure. Some activities of one phase may overlap the preceding phase. The nine sites are at different phases of the process. During the design phase, the Army obtains the required environmental permits. The permits are required to comply with federal, state, and local environmental laws and regulations to build and operate chemical disposal facilities. The permits specify construction parameters and establish operations guidelines and emission limitations. Subsequent engineering changes to the facility are incorporated into the permits through formal permit modification procedures. During this phase, the Army originally solicited contract proposals from systems contractors to build and operate the chemical demilitarization facility and selected a systems contractor.
Now, the Army uses a design/build approach, whereby the contractor completes both phases. The Army originally provided the systems contractors with the design for the incineration facilities; however, systems contractors developed the facility design for the neutralization facilities. During the construction phase, the Army, with the contractor’s input, develops a master project schedule that identifies all major project tasks and milestones associated with site design, construction, systemization, operations, and closure. For each phase in the master project schedule, the contractor develops detailed weekly schedules to identify and sequence the activities necessary to meet contract milestones. Army site project managers review and approve the detailed schedules to monitor the systems contractor’s performance. After developing the schedules, the contractor builds a disposal site and acquires, installs, and integrates the necessary equipment to destroy the stockpile and begins hiring, training, and certifying operations staff. During systemization, the systems contractor also prepares and executes a systemization implementation plan, which describes how the contractor will ensure the site is prepared to conduct agent operations. The contractor begins executing the implementation plan by testing system components. The contractor then tests individual systems to identify and correct any equipment flaws. After systems testing, the contractor conducts integrated operations tests. For example, the contractor uses simulated munitions to test the rocket processing line from receipt of the munitions through incineration. Army staff observe and approve key elements of each integrated operations test, which allows the contractor to continue the systemization process. Once the Army approves the integrated operations test, the contractor tests the system by conducting mini and surrogate trial burns. 
During minitrial burns, the contractor adds measured amounts of metals to a surrogate material to demonstrate the system’s emissions will not exceed allowable rates. In conducting surrogate trial burns, the contractor destroys nonagent compounds similar in makeup to the agents to be destroyed at the site. By using surrogate agents, the contractor tests destruction techniques without threatening people or the environment. Both the minitrial burn test results and the surrogate trial burn test results are submitted to environmental regulators for review and approval. When the environmental regulators approve the surrogate trial burns, the contractor conducts an Operational Readiness Review to validate standard operating procedures and to verify the proficiency of the workforce. During the Operational Readiness Review, the workforce demonstrates knowledge of operating policies and procedures by destroying simulated munitions. After systemization, the contractor begins the operations phase; that is, the destruction of chemical munitions. The operations phase is when weapons and agents are destroyed. Weapons are destroyed by campaign, which is the complete destruction of like chemical weapons at a given site. Operations for incineration and alternative technologies differ. The following examples pertain to an incineration site. In its first campaign, Umatilla plans to destroy its stockpile of M55 rockets filled with one type of nerve agent. Then a second campaign is planned to destroy its stockpile of M55 rockets filled with another type of nerve agent. After each campaign, the site must be reconfigured. The Army refers to this process as an agent changeover. During the changeover, the contractor decontaminates the site of any prior nerve agent residue. The contractor then adjusts the monitoring, sampling, and laboratory equipment to test for the next nerve agent. The contractor also validates the operating procedures for the second agent destruction process. 
Some operating procedures may be rewritten because the processing rates among chemical agents differ. Although the operations staff have been trained and certified on specific equipment, the staff are retrained on the operating parameters of processing VX agent. In the third and fourth campaigns at Umatilla, the contractor plans to destroy 8-inch VX projectiles and 155-millimeter projectiles, respectively. Because the third campaign involves a different weapon than the second (i.e., projectiles rather than rockets), the contractor will replace equipment during the changeover. For example, the machine that disassembles rockets will be replaced with a machine that disassembles projectiles. Additionally, a changeover may require certain processes to be bypassed. For instance, if a changeover involved switching from weapons with explosives to weapons without explosives, the explosives removal equipment and deactivation furnace would be bypassed. For the changeover to the fourth campaign at Umatilla, the contractor will adjust equipment to handle differences in weapon size. For example, the contractor will adjust the conveyor system to accommodate the 155-millimeter projectiles. The contractor also will change the location of monitoring equipment. After destruction of the stockpile, the systems contractor begins closing the site. During the closure phase, the contractor decontaminates and disassembles the remaining systems, structures, and components used during the demilitarization effort, and the contractor performs any other procedures required by state environmental regulations or permits. The contractor removes, disassembles, decontaminates, and destroys the equipment, including ancillary equipment such as pipes, valves, and switches. The contractor also decontaminates buildings by washing and scrubbing concrete surfaces.
Additionally, the contractor removes and destroys the surface concrete from the walls, ceilings, and floors. With the exception of the Umatilla site, the structures will remain standing. Any waste generated during the decontamination process is destroyed. The Army’s nine chemical demilitarization sites are in different phases of the demilitarization process. The Johnston Atoll site completed the destruction of its stockpile, and closure is almost complete. The sites at Tooele, Utah, and Aberdeen, Maryland, are in the operations phase, each using a different technology to destroy chemical agent and munitions. The remaining six facilities are in systems design, construction, and/or systemization. Table 8 provides details on the status of each of the nine chemical demilitarization sites.

Related GAO Products:

Chemical Weapons: Lessons Learned Program Generally Effective but Could Be Improved and Expanded. GAO-02-890. Washington, D.C.: September 10, 2002.
Chemical Weapons: FEMA and Army Must Be Proactive in Preparing States for Emergencies. GAO-01-850. Washington, D.C.: August 13, 2001.
Chemical Weapons Disposal: Improvements Needed in Program Accountability and Financial Management. GAO/NSIAD-00-80. Washington, D.C.: May 8, 2000.
Chemical Weapons: DOD Does Not Have a Strategy to Address Low-Level Exposures. GAO/NSIAD-98-228. Washington, D.C.: September 23, 1998.
Chemical Weapons Stockpile: Changes Needed in the Management of the Emergency Preparedness Program. GAO/NSIAD-97-91. Washington, D.C.: June 11, 1997.
Chemical Weapons and Materiel: Key Factors Affecting Disposal Costs and Schedule. GAO/T-NSIAD-97-118. Washington, D.C.: March 11, 1997.
Chemical Weapons Stockpile: Emergency Preparedness in Alabama Is Hampered by Management Weaknesses. GAO/NSIAD-96-150. Washington, D.C.: July 23, 1996.
Chemical Weapons Disposal: Issues Related to DOD’s Management. GAO/T-NSIAD-95-185. Washington, D.C.: July 13, 1995.
Chemical Weapons: Army’s Emergency Preparedness Program Has Financial Management Weaknesses. GAO/NSIAD-95-94. Washington, D.C.: March 15, 1995.
Chemical Stockpile Disposal Program Review. GAO/NSIAD-95-66R. Washington, D.C.: January 12, 1995.
Chemical Weapons: Stability of the U.S. Stockpile. GAO/NSIAD-95-67. Washington, D.C.: December 22, 1994.
Chemical Weapons Disposal: Plans for Nonstockpile Chemical Warfare Materiel Can Be Improved. GAO/NSIAD-95-55. Washington, D.C.: December 20, 1994.
Chemical Weapons: Issues Involving Destruction Technologies. GAO/T-NSIAD-94-159. Washington, D.C.: April 26, 1994.
Chemical Weapons Destruction: Advantages and Disadvantages of Alternatives to Destruction. GAO/NSIAD-94-123. Washington, D.C.: March 18, 1994.
Arms Control: Status of U.S.-Russian Agreements and the Chemical Weapons Convention. GAO/NSIAD-94-136. Washington, D.C.: March 15, 1994.
Chemical Weapon Stockpile: Army’s Emergency Preparedness Program Has Been Slow to Achieve Results. GAO/NSIAD-94-91. Washington, D.C.: February 22, 1994.
Chemical Weapons Storage: Communities Are Not Prepared to Respond to Emergencies. GAO/T-NSIAD-93-18. Washington, D.C.: July 16, 1993.
Chemical Weapons Destruction: Issues Affecting Program Cost, Schedule, and Performance. GAO/NSIAD-93-50. Washington, D.C.: January 21, 1993.
Chemical Weapons Destruction: Issues Related to Environmental Permitting and Testing Experience. GAO/T-NSIAD-92-43. Washington, D.C.: June 16, 1992.
Chemical Weapons Disposal. GAO/NSIAD-92-219R. Washington, D.C.: May 14, 1992.
Chemical Weapons: Stockpile Destruction Cost Growth and Schedule Slippages Are Likely to Continue. GAO/NSIAD-92-18. Washington, D.C.: November 20, 1991.
Chemical Weapons: Physical Security for the U.S. Chemical Stockpile. GAO/NSIAD-91-200. Washington, D.C.: May 15, 1991.
Chemical Warfare: DOD’s Effort to Remove U.S. Chemical Weapons From Germany. GAO/NSIAD-91-105. Washington, D.C.: February 13, 1991.
Chemical Weapons: Status of the Army’s M687 Binary Program. GAO/NSIAD-90-295. Washington, D.C.: September 28, 1990.
Chemical Weapons: Stockpile Destruction Delayed at the Army’s Prototype Disposal Facility. GAO/NSIAD-90-222. Washington, D.C.: July 30, 1990.
Chemical Weapons: Obstacles to the Army’s Plan to Destroy Obsolete U.S. Stockpile. GAO/NSIAD-90-155. Washington, D.C.: May 24, 1990.
Congress expressed concerns about the Chemical Demilitarization Program’s cost, schedule, and management structure. In 2001, the program underwent a major reorganization. Following a decade-long trend of missed schedule milestones, in September 2001 the Department of Defense (DOD) revised the schedule, extending planned milestones and increasing the program cost estimate from the 1998 estimate of $15 billion to $24 billion. GAO was asked to (1) examine the effect that recent organizational changes have had on program performance and (2) assess the progress DOD and the Army have made in meeting the revised 2001 cost and schedule estimates and Chemical Weapons Convention (CWC) deadlines. The Chemical Demilitarization Program remains in turmoil because a number of long-standing leadership, organizational, and strategic planning issues remain unresolved. The program lacks stable leadership at the upper management levels. For example, the program has had frequent turnover in the leadership providing oversight. Further, recent reorganizations have done little to reduce the complex and fragmented organization of the program. As a result, roles and responsibilities are often unclear and program actions are not always coordinated. Finally, the absence of a comprehensive strategy leaves the program without a clear road map and methods to monitor program performance. Without these key elements, DOD and the Army have no assurance of meeting their goal to destroy the chemical stockpile in a safe and timely manner, and within cost estimates. DOD and the Army have already missed several 2001 milestones and exceeded cost estimates; the Army has raised the program cost estimates by $1.2 billion, with other factors still to be considered. Almost all of the incineration sites will miss the 2001 milestones because of schedule delays due to environmental, safety, community relations, and funding issues. Although neutralization sites have not missed milestones, they have had delays.
DOD and the Army have not developed an approach to anticipate and influence issues that could adversely impact program schedules, cost, and safety. Unless DOD and the Army adopt a risk management approach, the program remains at great risk of missing milestones and CWC deadlines. It will also likely incur rising costs and prolong the public's exposure to the chemical stockpile.
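The cost consequences summarized above can be related to the per-day delay cost an Army official cited earlier in this report (roughly $250,000 to $300,000 a day, or about $10 million per month) with simple arithmetic. The calculation below is our back-of-the-envelope illustration; only the daily rate comes from the report, and the 30-day month is a simplifying assumption.

```python
# Compounding the report's cited per-day delay cost ($250,000-$300,000/day).
# The 30-day month and the one-year figure are our simplifying illustration.
DAILY_LOW, DAILY_HIGH = 250_000, 300_000

def delay_cost(days: int) -> tuple:
    """Dollar range (low, high) for a delay of the given number of days."""
    return (DAILY_LOW * days, DAILY_HIGH * days)

month = delay_cost(30)   # (7_500_000, 9_000_000): consistent with ~$10M/month
year = delay_cost(365)   # (91_250_000, 109_500_000): roughly $100M per year
print(month, year)
```

At these rates, a single year of slippage at one site approaches $100 million, which is in line with the $90 million program officials attributed to the 1-year Pine Bluff slip.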
Army maintenance depots and arsenals were established to support Army fighting units by providing repair and manufacturing capability, in concert with the private sector, to meet peacetime and contingency operational requirements. In recent years, the Army has taken steps to operate these facilities in a more business-like manner, including generating revenues from their output to support their operations. The number of these facilities has been reduced, and their workloads and staffs have declined significantly. This reflects the downsizing that began in the late 1980s following the end of the Cold War and the trend toward greater reliance on the private sector to meet many of the Army’s needs. The Army relies on both the public and the private sectors to meet its maintenance, overhaul, repair, and ordnance manufacturing needs. Army depots and arsenals have a long history of service, and they are subject to various legislative provisions that affect the work they do as well as how it is allocated between the public and private sectors. Army maintenance depots were established between 1941 and 1961 to support overhauls, repairs, and upgrades to nearly all of the Army’s ground and air combat systems. Before the depots were established, some maintenance and repair work was performed at the Army’s supply depots and arsenals and some was performed by the private sector. However, before 1941 much of the equipment in use was either repaired in the field or discarded. Depot workload can be classified into two major categories: end items and reparable secondary items. End items are the Army’s ground combat systems, communications systems, and helicopters. Secondary items include various assemblies and subassemblies of major end items, including helicopter rotor blades, circuit cards, pumps, and transmissions.
Several depots, particularly Tobyhanna, also do some manufacturing, but generally for small quantities of individual items needed in support of depot overhaul and repair programs. In 1976, 10 depots performed depot maintenance in the continental United States. By 1988 that number had been reduced to eight as a result of downsizing following the Vietnam War. Between 1989 and 1995, Base Realignment and Closure (BRAC) Commission decisions resulted in the closure of three more depots and the ongoing realignment of two others. At the end of fiscal year 1998, the five Army depots employed about 11,200 civilians, a 48-percent reduction from the 21,500 in fiscal year 1989. In fiscal year 1998, the depots received revenues of about $1.4 billion. Since the mid-1980s, depots have generally not been able to hire new government civilian employees because of personnel ceilings and, therefore, have used contractor personnel to supplement their workforce as necessary to meet workload requirements. As in the other services, operations of the Army depots are guided by legislative requirements. Section 2464 of title 10 provides for a Department of Defense (DOD)-maintained core logistics capability that is to be government-owned and -operated and that is sufficient to ensure the technical competence and resources necessary for an effective and timely response to a mobilization or other national emergency. Section 2466 prohibits the use of more than 50 percent of the funds made available in a fiscal year for depot-level maintenance and repair to contract for the performance of the work by nonfederal personnel. Section 2460 defines depot-level maintenance and repair. Section 2469 provides that DOD-performed depot-level maintenance and repair workloads valued at $3 million or more cannot be changed to contractor performance without the use of competitive procedures for competitions among public and private sector entities.
A related provision in section 2470 provides that depot-level activities are eligible to compete for depot-level maintenance and repair workloads. The Army’s two remaining manufacturing arsenals were established in the 1800s to provide a primary manufacturing source for the military’s guns and other war-fighting equipment. Subsequently, in 1920, the Congress enacted the Arsenal Act, codified in its current form at 10 U.S.C. 4532. It requires that the Army have its supplies made in U.S. factories or arsenals provided they can produce the supplies on an economic basis. It also provides that the Secretary of the Army may abolish an arsenal considered unnecessary. It appears that the act was intended to keep government-owned arsenals from becoming idle and to preserve their existing capabilities to the extent the capabilities are considered necessary for the national defense. The Army implements the act by determining, prior to issuing a solicitation to industry, whether it is more economical to make a particular item using the manufacturing capacity of a U.S. factory or arsenal or to buy the item from a private sector source. Only if the Army decides to acquire the item from the private sector is a solicitation issued. As the domestic arms industry has developed, the Army has acquired from industry a greater portion of the supplies that in earlier years had been furnished by arsenals. Following World War II, the Army operated six major manufacturing arsenals. Since 1977, only two remain in operation. Table 1.1 provides information on the six post-World War II arsenals, including operating periods and major product lines. Today the two arsenals manufacture or remanufacture a variety of weapons and weapon component parts, including towed howitzers, gun mounts, and gun tubes.
At the end of fiscal year 1998, the Rock Island and Watervliet facilities employed a total of about 2,430 civilians, a 46-percent reduction from a total of about 4,500 employees at the end of fiscal year 1989. In fiscal year 1998, the two arsenals received about $199 million in revenues. Funding for day-to-day operations of Army depots and arsenals is provided primarily through the Army Working Capital Fund. The services reimburse the working capital fund with revenues earned by the depots and arsenals for completed work based on hourly labor rates that are intended to recover operating costs, including material, labor, and overhead expenses. While Army depots and arsenals are primarily focused on providing the fighting forces with required equipment to support readiness objectives, the industrial fund was intended to optimize productivity and operational efficiencies. Army industrial activities are supposed to operate in a business-like manner, but they are expected to break even and to generate neither profits nor losses. Nonetheless, these military facilities may sometimes find it difficult to follow business-like practices. For example, Army requirements may make it necessary to maintain capability to perform certain industrial operations even though it would not seem economical—from a business perspective—to do so. Systems with older technology must be maintained even though acquiring repair parts becomes more difficult and expensive. If military customers need products that are inefficient to produce, the depots and arsenals must produce them anyway. To compensate the depots and arsenals for the cost of maintaining underutilized capacity that might be needed in the future, these activities receive supplemental funding in the operations and maintenance appropriation under an account entitled “underutilized plant capacity.” As shown in table 1.2, funding of this account has been reduced in recent years.
Army officials stated that the reduction was made to fund other higher priority programs; however, they stated that in future years, this trend would likely be reversed. The Army Materiel Command (AMC) and its subordinate commands hold semiannual workload conferences to review, analyze, document, and assign work to the five depots. In contrast, the arsenals actively market their capabilities to DOD program management offices to identify potential customers. Despite differences in how they obtain their work, depots and arsenals are alike in how they set rates for their work. The process they use begins about 18 months prior to the start of the fiscal year in which maintenance and manufacturing will be performed. Depot and arsenal managers propose hourly rates to recover operating costs based on the anticipated level of future workload requirements, but rates are ultimately determined at the Department of Army and DOD levels. Rate setting is an iterative process that begins with the industrial activities and the Industrial Operations Command (IOC), a subordinate command under AMC. After they reach agreement, the proposed rates, which are included in consolidated depot and arsenal budgets, are forwarded for review up the chain of command. These commands frequently revise the rates initially requested by IOC based on past performance and other evolving workload and staffing information. When rates are reduced, the industrial activities must find ways to cut costs or increase workload to end the year with the desired financial outcome, which is usually to have a cumulative zero net operating result. However, even if the proposed rates are approved without modification, the performing industrial activity can end the year in better or worse financial shape than originally anticipated, depending on whether or not actual costs and workload are as anticipated. 
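The break-even mechanics described above can be illustrated with a minimal sketch. All dollar and hour figures below are hypothetical; the report does not publish the actual rate-setting formulas, so this only shows the general principle that a prior-year loss pushes the next year's rate up and a prior-year profit pushes it down.

```python
# Hypothetical sketch of working-capital-fund rate setting: the hourly rate
# must recover projected operating costs plus any prior-year net operating
# result, so that the fund returns to a cumulative zero net operating result.

def breakeven_rate(projected_costs, prior_year_net_result, projected_direct_hours):
    """Hourly rate needed to break even.

    A negative prior_year_net_result (a loss) raises the rate;
    a positive one (a profit) lowers it.
    """
    return (projected_costs - prior_year_net_result) / projected_direct_hours

# Illustrative figures only -- not taken from the report.
costs = 120_000_000   # projected material, labor, and overhead ($)
hours = 1_200_000     # projected billable direct labor hours

print(breakeven_rate(costs, 0, hours))           # → 100.0 (break-even year)
print(breakeven_rate(costs, -6_000_000, hours))  # → 105.0 (after a $6M loss)
```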
This can necessitate a rate increase in a subsequent year to offset the losses of a prior year, or a rate reduction to offset profits. Depots and arsenals employ direct labor workers who charge time to finite job taskings, earning revenue for the business. In addition, they employ a number of indirect workers, such as shop supervisors and parts expediters, whose time cannot be related to a finite job order but nevertheless support the depot maintenance and arsenal manufacturing process. Likewise, the industrial facilities also employ a variety of general and administrative overhead personnel such as production managers, technical specialists, financial managers, personnel officers, logisticians, contracting officers, computer programmers, and computer operators.While the time spent by these two categories of overhead personnel is difficult to relate to a finite job order, their costs are nevertheless reflected in the overall rates charged by the industrial activities. AMC is responsible for management control and oversight of the Army’s industrial facilities. The Army’s IOC—a subordinate command under AMC—had management responsibility for both arsenals and depots. That began to change in November 1997, when under a pilot program, management responsibility for workloading and overseeing work at the Tobyhanna Army Depot was transferred to the Communications-Electronics Command, the depot’s major customer. The Army completed the transfer of operational command and control for the Tobyhanna depot in October 1998 and plans to complete transfer of management responsibilities for the other depots in October 1999. Each depot will be aligned with its major customer, which is also the coordinating inventory control point for the depot’s products. Table 1.3 summarizes the upcoming management relationship for each Army depot and lists its principal workloads. 
Upon completion of the transfers of management responsibilities for depots, the IOC workforce will be reduced by about 280 positions out of a current staff level of about 1,400 personnel at the end of fiscal year 1998. The gaining commands will not get additional manpower positions. AMC is assuming that the gaining commands will be able to take on these added responsibilities with no increase in staff. These reductions are in addition to the 1,720 personnel reductions that IOC previously planned to make within the individual depots during fiscal years 1998 and 1999—many of which were put on hold because of section 364 of the National Defense Authorization Act for Fiscal Year 1998. Section 364 prohibits the Army from initiating a reduction in force at five Army depots participating in the demonstration and testing of the Army Workload and Performance System (AWPS) until after the Secretary of the Army certifies to the Congress that AWPS is fully operational. It exempts reductions undertaken to implement 1995 BRAC decisions. Current plans call for the arsenals to remain under the management and control of IOC. Also, the arsenals are not currently precluded by section 364 from reducing their workforce. Accordingly, to adjust the workforce to more accurately reflect the current workload, the two arsenals are in the process of reducing their workforce by a total of over 300 positions out of 2,700. Figure 1.1 shows the locations of the Army’s industrial facilities and each major command to be responsible for management control and oversight. In recent years, several audit reports have highlighted the Army’s inability to support its personnel requirements on the basis of analytically based workload forecasts. For example, the Army Audit Agency reported in 1992 and 1994 that the Army did not know its workload and thus could not justify personnel needs or budgets. 
In several more recent audits, the Army Audit Agency recommended declaration of a material weakness in relating personnel requirements to workload and budget. In DOD’s fiscal year 1997 Annual Statement of Assurance on Management Controls, DOD noted a material weakness in its manpower requirements determination system. It noted that the current system for manpower requirements determination lacked the ability to link workload, manpower requirements, and dollars. Thus, the Army was not capable of rationally predicting future civilian manpower requirements based on workload. As a result, managers at all levels did not have the information they needed to improve work performance, improve organizational efficiency, and determine and support staffing needs, manpower budgets, and personnel reductions. In response to concerns about its workforce planning, the Army has sought to implement a two-pronged approach to evaluating its workforce requirements. This includes implementing a 12-step methodology analysis and developing an automated system for depots, arsenals, and ammunition plants that is referred to as AWPS. In February 1998, we reported that the Army had developed this corrective action plan to resolve its material weakness but that it might have difficulty achieving the expected completion date. The 12-step methodology, adopted by the Army in April 1996, is a largely manual process that provides a snapshot of personnel requirements designed to link personnel requirements to workload at various headquarters commands and organizations. The methodology includes analyses of missions and functions, opportunities to improve processes, workload drivers, workforce options (including using civilian versus military personnel and contracting out versus using in-house personnel), and organizational structure. It also looks for ways to consolidate and create more effective use of indirect and overhead personnel assigned to Army industrial activities.
Figure 1.2 shows the components of the 12-step method. The development of AWPS resulted from an Army effort initiated in July 1995 to have a contractor survey leading edge commercial and public sector entities to identify their “best practices” for determining personnel requirements based on a detailed analysis of work to be performed. The contractor concluded that a computer-based system developed by the Naval Sea Logistics Center, Pacific, for use in naval shipyards provided the greatest potential for documenting personnel requirements at Army industrial activities. Consequently, in March 1996, the Army provided funding to a support contractor and the Navy to develop and implement a modified version of the Navy’s computer-based process at Army maintenance depots to support the maintenance function. The AWPS system is designed to facilitate evaluation of what-if questions, including workload and personnel requirements analyses. The evolving system currently consists of three modules—performance measurement control, workload forecasting, and workforce forecasting—to integrate workload and workforce information to determine personnel requirements for various levels of work. The system provides two primary management information products—information concerning the production status on specific project orders and information concerning workload forecasts and related workforce requirements. The Chairman of the Subcommittee on Readiness, House Committee on National Security, asked us to examine selected workforce issues pertaining to the Army’s depots, focusing particularly on the Corpus Christi depot, where significant difficulties were encountered in implementing a planned personnel reduction during 1997. Subsequently, Congressman Lane Evans requested that we examine workforce issues at the Army’s manufacturing arsenals. 
Accordingly, this report focuses on (1) whether the Army had a sound basis for personnel reductions planned at its depots during a 2-year period ending in fiscal year 1999; (2) progress the Army has made in developing an automated system for making depot staffing decisions based on workload estimates; (3) other factors that may adversely impact the Army’s ability to improve the cost-effectiveness of its depot maintenance programs and operations; and (4) workload trends, staffing, and productivity issues at the Army’s manufacturing arsenals. This is one of a series of reports (see related GAO products at the end of this report) addressing DOD’s industrial policies, outsourcing plans, activity closures, and the allocation of industrial work between the public and private sectors. To determine whether the Army had a sound basis for personnel reductions, we reviewed the rationale, support, status, and resulting impact of the Army’s proposal to reduce staffing at its depots. We interviewed resource management personnel at Army headquarters, Army Materiel Command, the Army Industrial Operations Command, the Army Aviation and Missile Command, and the Corpus Christi Army Depot where we obtained information on the Army’s reasons for proposed staffing reductions, and reviewed documentation supporting the Army’s proposed staff reduction plan. We discussed staff reduction and related issues with Army Audit Agency officials. To ascertain the Army’s progress in developing workload-based staffing estimates, we met with officials from the Naval Sea Logistics Center, Pacific, which is modifying previously existing Navy programs to fit the Army depot and arsenal scenarios. We also interviewed key Norfolk Naval Shipyard and Navy headquarters personnel who have used the Navy’s automated workforce planning system. We visited Corpus Christi, Letterkenny, and Tobyhanna depots to obtain information on the implementation of AWPS and to observe depot employees’ use of AWPS-generated data. 
We also reviewed the results of the Army Audit Agency’s audit work regarding the implementation of personnel downsizing and regarding the development and testing of the AWPS system. To identify factors that may adversely impact the Army’s ability to improve the cost-effectiveness of its depot maintenance operations, we analyzed financial and productivity data for each of the depots and discussed emerging issues with Headquarters IOC, depot, and commodity command officials. We also visited the Corpus Christi, Letterkenny, and Tobyhanna depots to obtain information on various aspects of their operation and management. We visited the Naval Air Systems Command, Patuxent River, Maryland, to follow up on Corpus Christi Army Depot problems associated with performing Navy workload. During subsequent depot and arsenal visits, we asked questions about the scheduling of work, parts availability, overtime, movement of personnel, and related topics. We also visited selected Army repair facilities that perform depot-level tasks but are not recognized as traditional depot-level maintenance providers. We also conducted literature and internet searches on appropriate topics. To review workload, staffing, and productivity issues at Army arsenals, we interviewed personnel at the Army Industrial Operations Command, which provides management control and oversight for the manufacturing arsenals. We reviewed back-up documentation supporting proposed staffing reductions and assessed the reasonableness of the assumptions on which staff reduction proposals were based. We visited the two arsenals and met with a variety of key management personnel to discuss and obtain their views on various workload and staffing issues. We performed work at the following activities:
Department of Army Headquarters, Washington, D.C.
Army Materiel Command, Alexandria, Va.
Army Industrial Operations Command, Rock Island, Ill.
Army Aviation and Missile Command, Huntsville, Ala.
Corpus Christi Army Depot, Corpus Christi, Tex.
Letterkenny Army Depot, Chambersburg, Pa.
Tobyhanna Army Depot, Tobyhanna, Pa.
Rock Island Arsenal, Rock Island, Ill.
Watervliet Arsenal, Watervliet, N.Y.
Aviation Classification Repair Activities Depot (Army National Guard), Groton, Conn.
Fort Campbell, Fort Campbell, Ky.
Fort Hood, Killeen, Tex.
Management Engineering Activity, Chambersburg, Pa.
Naval Air Systems Command, Patuxent River, Md.
Naval Sea Systems Command, Arlington, Va.
Commander in Chief, U.S. Atlantic Fleet, Norfolk, Va.
Norfolk Naval Shipyard, Portsmouth, Va.
Army Audit Agency, Alexandria, Va.
We conducted our work between September 1997 and August 1998 in accordance with generally accepted government auditing standards and generally relied upon Army-provided data. While reviewing AWPS-generated data, we noted significant errors, particularly early in the audit, and did not utilize that information other than to note its occurrence. A variety of weaknesses were contained in IOC’s analysis supporting its plan to eliminate about 1,720 depot jobs over a 2-year period ending in fiscal year 1999. Those weaknesses accentuated previously existing concerns about the adequacy of the Army’s workforce planning. The lack of an effective manpower requirements determination process has been an Army declared internal control weakness, for which several corrective actions are in process, including the development and implementation of an automated workload and workforce planning system. An initial attempt to implement the planned reductions at the Corpus Christi Army Depot proved chaotic and resulted in unintended consequences from the termination of direct labor employees who were needed to support depot maintenance production requirements. While the Army was proceeding with efforts to strengthen its workforce planning capabilities during this time, those capabilities were not sufficiently developed to be used to support the IOC’s analysis.
The Army has made progress in establishing AWPS—its means for analyzing and documenting personnel requirements for the maintenance function—and is approaching the point of certifying its operational status to the Congress. However, while the current version of the system addresses direct labor requirements, it does not address requirements for overhead personnel—an important issue in the ill-planned 1997 reduction of personnel at the depot in Corpus Christi, Texas. The Army’s plan for reducing the workforce at its depots had a number of weaknesses and did not appear to be consistent with its own policy guidance. Army Regulation 570-4 (Manpower Management: Manpower and Equipment Control) states that staffing levels are to be based on workloads to be performed. However, our work indicates that the Army’s plan for reducing staff levels at its depots was developed primarily in response to affordability concerns and was intended to lower the hourly rates depots charge their customers. The plan was not supported by a detailed comparison of planned workload and related personnel requirements. Army officials stated that incorporation of the 12-step process into AWPS will help the Army address affordability while directly linking manpower to funded workload, assuming that the Army ensures accuracy and reliability of AWPS data input, both by the planners and via the shop floor. In July 1996, as part of its review of proposed rates, AMC headquarters determined that the hourly rates proposed by the Army depots for maintenance work in fiscal year 1998 were generally unaffordable. It concluded that depot customers could not afford to purchase the work they needed. The Army’s depot composite rate for fiscal year 1998 was over 11 percent higher than the composite rate for fiscal year 1996. 
Table 2.1 provides a comparison of the initial rates requested by each Army depot for fiscal year 1998, the final rates approved for that year by Headquarters AMC and the Army staff, and the percentage difference. AMC Headquarters officials stated that in recent years the depot rates had increased to the point that, in some cases, they were not affordable. IOC officials stated that since they had to reduce the rates quickly, they had little choice but to require staff reductions. In fiscal year 1997, reported personnel costs made up about 46 percent of depots’ operating costs, material and supplies about 29 percent, and other miscellaneous costs about 25 percent. As shown in table 2.1, the rate reduction varied by depot. Unlike the other depots where IOC set a lower rate, IOC set the rate at the Red River depot higher than depot officials requested. However, the rate set was still not high enough to cover estimated costs at that depot. An IOC official stated that if the Red River depot had charged its customers based on the estimated costs of operations at that facility, including recovery of previous operating losses, the composite rate would have been over $174 per hour in fiscal year 1998. Having decided to reduce the rates through staffing cuts, IOC then needed to develop a depot staff reduction plan. The initial plan developed by IOC headquarters personnel eliminated about 1,720 depot jobs. The proposal would have affected personnel at three of the five maintenance depots—Corpus Christi, Letterkenny, and Red River. To determine the staff reduction plan, IOC headquarters used a methodology that considered direct labor requirements, overhead requirements, and employee overtime estimates.
We analyzed these factors and determined that (1) the direct labor requirements were based on unproven productivity assumptions, (2) the overhead personnel requirements were based on an imprecise ratio analysis, and (3) unrealistic quantities of overtime were factored into the analysis. Table 2.2 shows the number of positions originally scheduled for elimination at each depot for fiscal years 1998 and 1999. To determine and justify the number of required direct labor employees, IOC divided the total anticipated workload (measured in direct labor hours) by a productive workyear factor. This factor represents the amount of work a direct labor employee is estimated to be able to accomplish in 1 fiscal year. IOC used a variety of assumptions to support its position that the number of depot personnel could be reduced. IOC’s analysis used productive factors that are substantially higher than either the DOD productive workyear standard or the historical average achieved in the recent past by Army depots. For example, IOC’s analysis assumed that each Corpus Christi depot direct labor employee would accomplish 1,694 hours of billable time, not including paid overtime hours, in a workyear. However, while DOD’s productive workyear standard for direct labor depot maintenance employees is 1,615 hours per person, the Corpus Christi depot direct labor employees averaged a reported 1,460 hours of billable time in fiscal year 1997 and 1,528 hours in fiscal year 1996. By using the higher productivity level, the IOC analysis showed the Corpus Christi depot would need 14 percent fewer employees, based on the change in this factor. Table 2.3 provides a comparison of IOC’s worker productivity assumptions for each depot and the actual reported productivity levels for fiscal year 1997. While the DOD productive workyear standard assumes that each direct labor worker will achieve 1,615 hours of billable time each year, the depots have been unable to achieve this goal. 
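The staffing arithmetic described above can be sketched as follows. The 1,694-hour IOC assumption and the 1,460-hour fiscal year 1997 actual for the Corpus Christi depot come from the report; the annual workload figure is hypothetical and serves only to show how the choice of productivity factor drives the staffing result.

```python
# Required direct labor staffing = anticipated workload (in direct labor
# hours) divided by the productive workyear factor (billable hours that one
# direct labor employee is assumed to accomplish per year).

def required_direct_staff(workload_hours, productive_workyear_hours):
    return workload_hours / productive_workyear_hours

workload = 1_000_000  # hypothetical annual workload, direct labor hours

staff_ioc = required_direct_staff(workload, 1_694)     # IOC's assumed factor
staff_actual = required_direct_staff(workload, 1_460)  # FY 1997 actual factor

# Using the higher factor shows roughly 14 percent fewer employees are
# needed, matching the report's figure for the Corpus Christi depot.
reduction = 1 - staff_ioc / staff_actual
print(f"{reduction:.1%}")  # → 13.8%
```

Note that the percentage depends only on the ratio of the two factors, not on the workload figure chosen.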
Several factors affect this productivity level. First, due to workforce seniority, Corpus Christi depot workers have recently reported using an average of 196 hours of paid annual leave per year. This is higher than the reported 175 hours of annual leave used on average at all Army activities as well as the reported 167-hour average annual leave used at other government agencies. In addition, Corpus Christi depot employees used a reported average of about 112 hours of sick leave per year—more sick leave than they earn in a given year and about 50 percent higher than other Army, DOD, and government activities. The reported Army-wide average sick leave use was 73 hours; the DOD average, 78 hours; and the governmentwide average, 74 hours. Several depot management officials commented that while they monitor sick leave usage, it has increased partly as a result of the older workforce and partly as a result of the Federal Employees Family Friendly Leave Act, Public Law 103-338, October 22, 1994, which allows the use of sick leave to care for family members and for bereavement purposes. Second, because most depot employees at the Corpus Christi and Red River depots are working a compressed work schedule of four 10-hour workdays, they receive 100 hours of paid holiday leave per year. In contrast, a government employee who works a 5-day 8-hour workweek, receives 80 hours of paid holiday leave per year. Third, the depots’ direct labor workers charge varying amounts of overhead (nonbillable) time for training, shop cleanup, job administration, temporary supervision, certain union activities, and other indirect activities. In fiscal year 1997, direct labor workers’ charges to overhead job orders ranged from a reported average of 125 hours at the Letterkenny depot to 205 hours at the Corpus Christi depot. 
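A rough reconciliation of the leave and overhead figures above, under the assumption of a standard 2,080-hour paid year, comes close to the Corpus Christi depot's reported 1,460 billable hours for fiscal year 1997:

```python
# Rough reconciliation of billable hours (the 2,080-hour paid year is an
# assumption; the leave and overhead figures are the Corpus Christi depot
# averages reported for fiscal year 1997).
paid_year = 2_080        # 52 weeks x 40 hours (assumption)
annual_leave = 196       # reported average annual leave used
sick_leave = 112         # reported average sick leave used
holiday_leave = 100      # paid holiday leave under the compressed 4x10 schedule
overhead_charges = 205   # training, shop cleanup, job administration, etc.

billable = paid_year - annual_leave - sick_leave - holiday_leave - overhead_charges
print(billable)  # → 1467, close to the reported 1,460 billable hours
```

The small residual suggests these four categories account for nearly all of the gap between paid time and billable time at that depot.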
To determine and justify the number of required overhead employees, IOC used a ratio analysis that essentially allowed a specified percentage of overhead employees for each direct labor worker. IOC officials told us that they believed the depots had too many overhead personnel and they had developed a methodology to base overhead personnel requirements on predetermined ratios of direct to overhead employees. IOC developed its methodology and the ratios based on actual direct and overhead employee ratios for a private-sector firm tasked with operating a government-owned, contractor-operated Army ammunition plant. Different ratios were assigned based on the number of functions each depot organization performs—such as maintenance, ammunition storage, or base operation support. The IOC ratio analysis assumed that for every 100 direct labor employees, a single-function depot organization could have no more than 40 overhead personnel, a dual-function depot organization no more than 50 overhead personnel, and a three-function depot organization no more than 60 overhead personnel. Table 2.4 provides a summary of ratios IOC used to determine the number of overhead employees. A number of concerns have been raised about the use of these ratios. For example, in 1997 the then Deputy Under Secretary of Defense (Logistics) stated that the use of such ratios may provide only marginal utility in identifying potentially excess employees and inefficient depot operations. He noted that ratio analysis may not consider the value of productivity enhancements that result from the acquisition of increasingly sophisticated technology to accomplish depot missions, which in turn causes direct labor requirements to decrease, while the overhead labor requirements increase. 
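The ratio caps in the IOC analysis can be sketched as follows. The per-100 overhead allowances are the ones stated above; the staffing figures in the example are hypothetical.

```python
# IOC overhead cap: per 100 direct labor employees, a depot organization may
# have at most 40 (single-function), 50 (dual-function), or 60
# (three-function) overhead personnel.
OVERHEAD_PER_100_DIRECT = {1: 40, 2: 50, 3: 60}

def max_overhead(direct_employees, num_functions):
    """Maximum overhead personnel allowed under the IOC ratio analysis."""
    return direct_employees * OVERHEAD_PER_100_DIRECT[num_functions] / 100

# Hypothetical example: a dual-function depot with 1,000 direct workers
# would be capped at 500 overhead personnel under this analysis.
print(max_overhead(1_000, 2))  # → 500.0
```

The sketch also makes the criticism in the text concrete: the cap scales only with direct headcount, so labor-saving technology that cuts direct positions automatically tightens the allowed overhead, even when the technology itself adds overhead work.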
Depot officials similarly noted that technology enhancements over the past few years have significantly reduced direct labor requirements, while sometimes increasing overhead in the depots, particularly when training and maintenance costs increase. They noted that IOC’s methodology did not consider the impact of various efficiency enhancements that eliminated substantial numbers of direct labor positions and added a smaller number of overhead positions. These enhancements include the replacement of conventional labor-intensive lathes with state-of-the-art numerically controlled devices, hundreds of conventional draftsmen with a few technicians having computer-aided design skills, and numerous circuit card repair technicians with multimillion-dollar devices that make and repair circuit cards. Our discussions with depot officials and a support contractor raised similar concerns, including the failure to consider and analyze (1) differences in the complexity of work being performed in different depots, (2) requirements for government organizations to maintain certain overhead activities that are not required in the private sector, (3) differing policies in the way depots classify direct and overhead labor, (4) allowances for private sector contractors that perform supplemental labor, (5) the extent to which direct personnel work overtime, and (6) the extent to which contractors perform overhead functions. Army officials stated that the ratios were not developed using a sound analytical basis, but said that determining overhead requirements is not, by its very nature, a precise science. While we recognize the challenge that this presents, we have stated in the past that until a costing system, computer-based methodology, and 12-step methodology are fully developed and integrated, the Army cannot be sure that it has the most efficient and cost-effective workforce.
Although the 12-step process also calls for the use of ratios in some cases, these ratios are based on methodologies that produce finer degrees of precision. The process also calls for the use of more appropriate mixes of fixed and variable overhead personnel. Nonetheless, we share IOC officials’ concerns that the Army depots have too much overhead. We have reported that this is in part a consequence of having underutilized depot facilities. Thus, personnel reductions alone, without addressing excess infrastructure issues, cannot resolve the Army’s problem of increasing maintenance costs reflected in its depot rate structure. In commenting on a draft of this report, DOD acknowledged that the methodology the Army used to project workload requirements lacked the precision that would have been available if AWPS had been fully implemented and workload projections were more realistic. DOD also stated that the personnel reduction process received intense scrutiny and that implementation of its plan achieved its main objective: reducing the indirect personnel costs that it believed would otherwise lead to unaffordable rates. IOC’s staff reduction plan was developed using the assumption that when the suggested personnel restructuring was completed, the remaining direct labor employees would be expected to work varying amounts of overtime to accomplish their planned maintenance workloads. In fiscal years 1998 and 1999, Corpus Christi Army Depot direct employees would be expected to work overtime that averaged about 16 and 12 percent, respectively, of their regular time hours. IOC personnel stated that it is less expensive to pay overtime rates than to have more employees charging an equivalent number of straight time hours, particularly given the uncertainties regarding the amount of forecasted workload that might not materialize. Historically, Army depot employees have performed varying amounts of overtime.
For example, in fiscal year 1996, the Army maintenance depots reportedly averaged 13-percent overtime, with individual depot overtime rates ranging from a low of about 4 percent at the Tobyhanna depot to a high of about 19 percent at the Corpus Christi depot. Although Corpus Christi originally planned for about 6-percent overtime for direct personnel during fiscal year 1998, the plan was revised to its current 15.8-percent overtime plan, and unplanned requirements caused average reported overtime by direct employees to approach 30 percent in some months, with individual rates ranging from 0 to over 50 percent. Using overtime could provide a cushion against workload shortages, as opposed to a short-term alternative of hiring people to cover unanticipated increases in workloads; however, planning for average overtime rates of up to 15.8 percent appears to be beyond the norm for such types of activities, particularly when unplanned requirements could drive overtime usage substantially above planned levels. For example, we compared the 1997 Bureau of Labor Statistics durable goods manufacturing work week, including overtime, which averaged about 42.8 hours, with comparable data for Corpus Christi and noted that a 15.8-percent overtime figure corresponds to a 46.3-hour workweek, while 30 and 50 percent overtime figures correspond to workweeks of 52 and 60 hours, respectively. AMC efforts to implement its planned reductions at its Corpus Christi depot proved to be extremely chaotic and resulted in unintended consequences. The enactment of section 364 of the 1998 Defense Authorization Act restricted further personnel reductions, except those that are BRAC-related. Army officials stated that when it became apparent that the incentives being offered to indirect personnel in exchange for voluntary employment terminations would not achieve the desired reduction of 336 employees, similar offers were extended to include direct personnel.
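The workweek figures cited above follow from applying each overtime rate to a standard 40-hour week. A minimal check of that arithmetic (our own illustration, not a GAO worksheet):

```python
# Convert a planned overtime rate into an implied average workweek,
# assuming a standard 40-hour base week (the basis implied by the figures cited).

def implied_workweek(overtime_rate: float, base_hours: float = 40.0) -> float:
    """Average weekly hours when overtime is expressed as a fraction of regular time."""
    return base_hours * (1 + overtime_rate)

for rate in (0.158, 0.30, 0.50):
    print(f"{rate:.1%} overtime -> {implied_workweek(rate):.1f}-hour workweek")
# e.g., a 15.8-percent overtime rate implies a 46.3-hour workweek.
```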
These officials stated that incentive offers were made to direct labor employees only when the position held by the terminated direct laborer could be filled by an indirect labor person, who otherwise would face involuntary separation. Notwithstanding that requirement, any depot employee—indirect or direct—was allowed to separate until the desired goal of eliminating 336 employees was reached. Consequently, some direct employees separated, which further exacerbated an existing productivity problem. The congressional action followed and postponed completion of the staff reduction plan until AWPS was certified as operational. According to headquarters AMC officials, command industrial activities had too many overhead personnel and the depots could eliminate some of these positions without adversely affecting productivity. To avoid an involuntary reduction in force targeting overhead positions, they developed a plan to encourage voluntary separations. AMC authorized the use of financial incentives, including cash payments and early retirement benefits, and authorized the extension of this offer to direct personnel. At the Corpus Christi depot, 336 personnel voluntarily terminated their employment in 1997 under the Army’s staff restructuring plan—55 personnel left through normal attrition and 281 personnel were offered financial incentives to encourage their terminations. In June and July 1997, this latter group was tentatively approved for various financial incentives in return for voluntary termination of employment. By the end of June 1997, paperwork authorizing voluntary retirements with cash incentives was approved for some employees while still pending for others. Some left the Corpus Christi area thinking they had been granted authorization to leave and receive cash incentive payments.
However, at this same time, headquarters AMC was addressing numerous questions regarding the appropriateness of the staff reduction effort, given the size of the depot’s scheduled workload. As a result of these questions, Headquarters, AMC, asked the Army Audit Agency to review and comment on the documentation supporting the recommended staff cuts. Army Audit Agency personnel compared the IOC’s assessment of personnel requirements against computer-generated forecasts from the AWPS, which was still under development. The auditors, using AWPS-generated products as their primary support, concluded on June 27, 1997, that personnel cuts were not necessary. Furthermore, the auditors concluded that, based on AWPS calculations, rather than lose personnel, Corpus Christi depot would need to hire 44 additional personnel. On July 1, 1997, in response to the Army Audit Agency findings, the Army directed its personnel offices to stop processing paperwork for voluntary separations and financial incentives. On July 2, 1997, Corpus Christi personnel officers were directed to recall the more than 190 employees whose applications had not been fully approved. This event caused a great deal of concern, both among the affected personnel and the workforce in general. According to cognizant Corpus Christi depot personnel officials, some of the employees had taken separation leave, others had sold their residences, and still others had moved out of state and bought new homes. Subsequently, the Army organized a task force including representatives from AMC, IOC, the Army Audit Agency, and depot management to review and validate information contained in the AWPS computational database. The team found that one major Corpus Christi customer had incorrectly coded unfunded workload requirements totaling $70 million as if they were funded, having the effect of overstating personnel requirements. 
This process left unclear the precise number of employees that were needed to support the approved depot workload. Nevertheless, after 3 to 4 weeks of what depot officials described as zero productivity, the Army declared that documentation supporting IOC’s recommended reductions was accurate and employees were given permission to depart. In offering financial separation incentives at the Corpus Christi depot during fiscal year 1997, AMC did not limit the separation opportunities to overhead personnel. AMC officials did not think the desired number of workers would volunteer if the incentives were restricted to overhead personnel only. Further, headquarters personnel did not want to require involuntary separations. Of the 281 personnel separating with incentives from the Corpus Christi depot, 147 were classified as direct labor and 134 as overhead personnel. Including those separating without incentives, 187 direct labor employees were separated from Corpus Christi. Given the potential imbalances in the workforce caused by the planned personnel separations, Corpus Christi management and union personnel jointly developed a plan to transfer indirect employees to fill vacated direct labor jobs. These procedures were adopted before any incentive offers were made and were designed to avoid the involuntary separation of indirect personnel by retraining them to assume direct labor jobs vacated by senior personnel accepting incentive offers. The plan required that 49 overhead employees complete various training programs before they could assume the targeted direct labor positions. However, progress toward achieving these objectives has been slower than expected.
The depot initially expected to backfill vacant direct labor jobs by January 1998. However, in May 1998, when we visited the depot, only one-third of the 49 overhead personnel scheduled to be retrained had moved to their newly assigned jobs and begun their conversion training; by mid-July, depot officials advised that 80 percent had moved to new positions. In commenting on a draft of this report, Army officials stated that these conversions were scheduled to be completed in November 1998. However, depot officials also told us that it takes between 3 and 4 years to retrain a typical indirect employee as a direct employee. According to depot personnel, the loss of 187 experienced direct labor employees exacerbated the existing productivity problem at the Corpus Christi depot. To fill the need for direct labor, employees worked a reported average of 19 percent overtime, and the depot had to use 113 contractor field team personnel in addition to the 70 contractor personnel already working in the depot. Nonetheless, the depot has had major problems meeting its production schedule and, as discussed further in the next chapter, may lose repair work from the Navy, except for crash damage work. Subsequently, the Congress enacted the section 364 legislation, which was effective November 18, 1997, postponing involuntary reductions until the Army had certified it had an operational automated system for determining workload and personnel staffing. As a result, the balance of IOC’s proposed staff reductions planned for fiscal year 1999 was deferred. Army efforts to develop AWPS have proceeded to the point that required certification to the Congress of its operational capability is expected soon. Even so, efforts will be required to ensure that accurate and consistent workload forecasting information is input to the system as it is used over time.
The Army recently completed development and prototype testing of a system enhancement to provide automated support for determining indirect and overhead personnel requirements. Based on our draft report recommendations, the Army plans to postpone AWPS certification until this system improvement is operational at all five maintenance depots. In May 1996, the Army completed installation and prototype testing of the AWPS at the Corpus Christi Army Depot. In June 1997, it announced plans to extend the AWPS process to other Army industrial facilities, including manufacturing arsenals and ammunition storage sites. At the same time, the Army expected that implementation of AWPS at the five maintenance depots would be completed in August 1997. Congressional certification as required by section 364 of the 1998 Defense Authorization Act has not yet occurred. In March and April 1998, a team of representatives from various AMC activities, in consultation with the Army Audit Agency, developed AWPS acceptance criteria that were later accepted by the Assistant Secretary of the Army for Manpower and Reserve Affairs. Army auditors compared acceptance criteria to actual demonstrated experience and reported that the system is operational at all five depots, system programming logic is reasonably sound, and AWPS performance experience satisfies the Army’s acceptance criteria. In August 1998, Army officials stated that the Secretary of the Army could make the mandated certification of successful implementation of computer-based workload and personnel forecasting procedures at Army maintenance depots within the next few months. Army officials stated that several planned system enhancements have not yet been implemented, but they do not believe these items would preclude the Secretary from certifying successful completion of AWPS implementation.
However, in its written comments to our draft report, DOD stated the Army now plans to postpone AWPS certification until an automated support module for determining indirect and overhead personnel requirements is fully operational at each of the five maintenance depots. Assuming successful system implementation, future reliability of the system will depend upon the availability and entry of accurate and consistent data imported to AWPS and used to generate system products. The AWPS system provides three primary management information products: information concerning production performance on specific project orders, workload forecasts, and related workforce requirements. The AWPS system receives and processes data from several computerized Army support systems, including the Standard Depot System, Automated Time Attendance and Production System, Headquarters Application System, and Maintenance Data Management System. The Standard Depot System and the Automated Time Attendance and Production System input project status and expense information from the depot perspective. The Headquarters Application System provides status and planned workload data from the IOC perspective, and the Maintenance Data Management System provides workload data from the Army commodity command (major customer) perspective. Army leadership, in 1997, asked the Army Audit Agency to review and validate the proposed depot personnel reductions. Although the system was still being developed, this early experience demonstrated the vulnerability of personnel requirement statements if the computational database contains errors and inconsistencies. The Army Audit Agency identified problems that resulted because AWPS-generated staffing estimates were based on inaccurate workload forecasts imported to the AWPS computational database.
During the implementation period, the Army periodically compared AWPS data with similar information contained in the other computerized support systems and found numerous inconsistencies. Other data inaccuracies stemmed from employees’ not correctly charging time to job codes on which they were working and the reporting of job codes that were not recognized by the AWPS system. In July 1998, the Army Audit Agency reported that comparisons of data contained in AWPS and several support systems have improved to the point that system managers believe the system logic and AWPS-processed data are reasonably sound. As of August 10, 1998, the Army had not updated and entered several critical items into the automated workforce forecasting subsystems. These items included (1) updating personnel requirements for overhead personnel based on the approved 12-step process and (2) developing a database of employee skills and a breakdown of depot workload tasks by required job skills. However, as noted in its comments to a draft of this report, DOD stated that the Army planned to postpone certifying this system as operational until it incorporates automated procedures for determining indirect personnel requirements. This should enhance the effectiveness of the AWPS system. AWPS was initially envisioned only as a tool for documenting requirements for direct labor. However, in May 1998 the Army determined that it would integrate an automated version of the 12-step process into the AWPS system. For each maintenance shop and support function, the model estimates the fixed and variable overhead personnel required to support the direct workload. Because the model is customized to meet individual depot needs, a 50-person sheet metal shop may have overhead requirements different from a similarly sized electronics shop.
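The fixed-plus-variable structure of the overhead model can be expressed as a simple linear estimate. The sketch below is hypothetical; the function name and all coefficients are illustrative assumptions, not the Army's actual values:

```python
# Hypothetical sketch of a fixed-plus-variable overhead estimate for one shop:
#   overhead = fixed component + variable rate * direct headcount.
# The coefficients below are illustrative assumptions, not Army data.

def shop_overhead(direct_workers: int, fixed: int, variable_rate: float) -> int:
    """Estimated overhead personnel needed to support a shop's direct workload."""
    return fixed + round(variable_rate * direct_workers)

# Two similarly sized shops can carry different overhead requirements because
# the model is customized per shop (illustrative coefficients only).
sheet_metal = shop_overhead(50, fixed=3, variable_rate=0.20)  # 13
electronics = shop_overhead(50, fixed=5, variable_rate=0.30)  # 20
```

Unlike the flat IOC ratios, this form lets a small shop retain a baseline of fixed overhead while variable overhead scales with direct workload.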
In October 1998, Army officials stated that the Army had installed an automated 12-step process for predicting overhead personnel requirements at each of the five maintenance depots and that the depots were developing input data required by the system’s computational database. The Army also plans to enhance the current AWPS system by adding an automated database reflecting the specific skills of each depot employee. Work on this system enhancement is expected to be completed in January 1999. The Army anticipates that the automated database will enable the depots to estimate personnel requirements for each specific job specialty and facilitate identification and movement of skilled workers between shops to offset short-term labor imbalances. The Army did not have a sound methodology for projecting workforce requirements; this led to a highly undesirable set of events that resulted in the voluntary separation of direct labor employees, which negatively impacted employees and depot productivity. Also, given the need to use contract labor and the plan to have depot employees consistently work substantial amounts of overtime, it is questionable whether all of the reductions of direct labor personnel were appropriate. This situation also illustrates the challenge of targeting reductions at the depots in areas where there are excess personnel and providing the required training to workers when skill imbalances occur as a result of transfers. We believe the Army’s inability to deal with the perceived need for reducing overhead requirements prompted the chaotic staff reduction effort at the Corpus Christi depot. Further, incorporation of the capability to address overhead requirements is an essential element of an effective AWPS system. The Army’s current plan to postpone certifying the AWPS system as operational until it incorporates procedures for determining indirect personnel requirements should enhance the overall effectiveness of the system.
We recommend that the Secretary of Defense require the Secretary of the Army, in making future personnel reductions in Army depots, to more clearly target specific functional areas, activities, or skill areas where reductions are needed, based on workload required to be performed. We also recommend that the Secretary of the Army complete incorporating an analysis of overhead requirements into AWPS prior to certifying the system, pursuant to section 364. DOD concurred with the recommendations. It stated that the development and testing of an automated process for predicting indirect and overhead personnel requirements would be completed before the system is certified as operational at maintenance depots. We modified our conclusions and recommendations to reflect the actions being taken by the Army in response to our draft report. Specifically, we now recommend that the Army complete ongoing actions that it initiated in response to our draft report recommendations. We also incorporated technical comments that were provided by DOD where appropriate. While the Army has made progress in establishing an automated process for analyzing and documenting personnel requirements, it is still faced with larger issues and factors that overshadow efforts to improve workload forecasting and efficient depot operations. First, workload estimates have been subject to frequent fluctuation and uncertainty to such an extent that it is difficult to use these projections as a basis for analyzing workforce requirements. Second, DOD and Army policies have resulted in the transfer of Army depot workloads to other government-owned repair facilities and private sector contractors without corresponding reductions in depot facilities and capacity. It is uncertain to what extent workloads will be assigned to Army depots in the future. 
Third, depot efficiency has been impacted by other factors—lower than anticipated worker productivity, inefficient use of personnel resources, and the timely availability of certain necessary repair parts. Workload estimates for Army maintenance depots vary substantially over time due to the reprogramming of operations and maintenance appropriation funding and unanticipated changes in customer requirements. The Army’s personnel budgets and staffing authorizations are generally based on workload estimates established 18 to 24 months before new personnel are hired or excess employees are terminated. Therefore, if actual workload is less than previously estimated, the depot is left with excess staff. Conversely, if actual workload is greater than previously estimated, the depot would have fewer staff than it needs to accomplish assigned work. Our work shows that workload estimates are subject to such extensive changes that they hamper Army depot planners’ ability to accurately forecast the number of required depot maintenance personnel. In discussing similar issues with Navy shipyard personnel, we noted that in April 1996, the Navy issued guidance to encourage shipyard customers to adhere to the workload plans established during the budget process. Navy leadership found that past weaknesses in workload forecasting contributed to inefficient use of depot resources, which led to higher future operating rates to compensate for previously underutilized shipyard personnel and facilities. After implementing a guaranteed workload program to stabilize work being assigned to naval shipyards, these activities report having 3 years of positive net operating results, after operating at a loss for over 5 years. 
Appropriated operations and maintenance funding for the depot-level maintenance business area—a key source of depot maintenance funding—is reprogrammed by the Army to a much greater extent than funds for other operations and maintenance appropriation business areas, creating challenging fluctuations in workload execution. Table 3.1 shows the amount of depot maintenance funding the Congress appropriated for fiscal years 1996, 1997, and 1998 and the amounts later reprogrammed to cover funding shortfalls in other programs. For comparison purposes, table 3.1 provides the same information for the balance of the Army’s operations and maintenance funding. As indicated, funds for depot maintenance were reprogrammed at a much higher rate than funds for the other operations and maintenance business areas. The non-depot maintenance business areas provide funding for civilian salaries and private sector contractor support—funds that the Army generally has considered must be paid. In contrast, depot maintenance programs for in-house overhaul and repair can be terminated easily and without cost to the government. Army officials explained that when depot orders are terminated, financial losses are recovered by charging higher rates to future customers. However, if contracted work is terminated for the convenience of the government, the government often has to pay for expenses incurred by the contractor. While Army officials stated that previous practices resulted in an inequitable distribution of funding transfers, they stated that they planned to conduct future reprogramming actions on a more equitable basis. Unanticipated funding transfers as a result of reprogramming actions have impacted depot staffing and contributed to inefficient depot operations. For example, we estimate Army reprogramming actions moved funding that might have supported about 1,400 direct labor positions and 750 overhead positions in fiscal year 1996.
Similarly, reprogramming actions in fiscal year 1997 moved funding that might have supported about 1,125 direct labor positions and 650 overhead positions. These reprogramming actions contributed to net operating losses in the years cited and higher rates in subsequent years. AMC holds semiannual workload conferences to review, analyze, and document depot workload estimates. Our work shows that the command’s estimates can differ significantly from reported spending, limiting their value in documenting personnel budgets and requirements. For example, in September 1994 the predecessor organization to the current Aviation and Missile Command estimated that in fiscal year 1997 it would generate workload requirements and provide funding to the Corpus Christi depot valued at about $161 million for the repair of aviation components. At the beginning of fiscal year 1997, the projected workload value for that year decreased to $141 million—a 12-percent reduction. Moreover, the funded workload for that year was less than $94 million—a decline of 42 percent from the amount projected almost 3 years earlier. It is important to note that the rates for fiscal year 1997 were developed using the workload estimates projected in 1994. Partially as a result of the decreased workload, Corpus Christi did not receive the revenues it needed to break even. Losses for that year contributed to the need for increased rates in subsequent years. Army officials attributed the decline in forecasted workload to reduced workload requirements resulting from slower-than-expected customer revenues from the sales of repaired items and cash shortages in the Army’s working capital fund. Reduction of work typically results in underutilized personnel and can result in orders being placed for long lead-time parts that are not needed as expected.
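The percentage declines cited for the Aviation and Missile Command estimates can be verified with simple arithmetic (our own check of the figures in the text):

```python
# Check of the workload decline percentages cited in the text.
# Baseline: about $161 million projected in September 1994 for fiscal year 1997.

def pct_decline(baseline: float, later: float) -> float:
    """Percentage decline from a baseline estimate to a later figure."""
    return 100.0 * (baseline - later) / baseline

print(round(pct_decline(161, 141)))  # decline to the start-of-year estimate
print(round(pct_decline(161, 94)))   # decline to the funded workload
```

The first figure rounds to the 12-percent reduction cited, and the second to the 42-percent decline cited.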
The workload expected from the Aviation and Missile Command, but not received, might have provided work for about 250 direct labor employees and 150 overhead employees for a year. Workload estimates for overhaul and repair requirements generated by the other military services have also been inconsistent. For example, in September 1995, the Navy estimated that it would provide fiscal year 1998 funding for the overhaul of 38 helicopters at the Corpus Christi depot. In May 1997, the Navy estimated that in fiscal year 1998 it would fund the overhaul of 22, but in October 1997, it estimated the funded workload that would likely materialize during fiscal year 1998 would support the overhaul of only 12. Navy officials told us the estimated helicopter overhaul requirements were reduced, in part, because the Army was unable to complete prior year funded repair programs within agreed time frames. Additionally, the Navy is exploring ways to have future overhaul and repair work done incrementally by either contractor or government employee field teams working at Navy bases. The Navy believes the incremental overhaul and repair process can be done more expeditiously. At this point, it is unclear what role the Corpus Christi depot will play in providing future overhaul and maintenance support for Navy helicopters. Figures 3.1 and 3.2 depict the fiscal years 1997 and 1998 funding estimates for the Corpus Christi Army Depot at various points in time. For example, at the start of fiscal year 1995, the Army anticipated that the Corpus Christi depot would receive fiscal year 1997 funding for workloads valued at $349 million. Two years later, at the start of fiscal year 1997, the estimate increased to $355 million, compared to actual funding of $326 million. On the other hand, at the beginning of fiscal year 1996, the anticipated workload for the depot was valued at about $302.5 million.
At the beginning of fiscal year 1998, the anticipated total had risen to about $333.5 million, and in June 1998, estimates of revenues for the year were about $360.5 million. Depot officials pointed out that with these variances in workload, it is almost impossible to set accurate rates or to project with precision the number of employees needed to perform the required work. This experience at Corpus Christi illustrates the challenge depot planners face in projecting personnel requirements when the workload estimates change considerably over the 30 months between the time rate-setting is initiated and the end of the fiscal year for which rates have been set. Similarly, under these conditions it is also difficult for budget personnel to set labor-hour rates that will generate the desired net operating result. As part of its overall depot maintenance strategy, the Army has established policies and procedures for assigning potential depot workloads to other government-owned repair facilities and the private sector. These practices have significant cost effectiveness and efficiency implications for the depots, given the amount of excess industrial capacity that exists. First, AMC has authorized performance of depot-level workloads at government-owned repair sites located on and near active Army installations and at National Guard facilities. Second, Army policies and strategic plans emphasize the use of the private sector for depot-level maintenance workloads, within existing legislative requirements. In recent years the Army’s Forces Command and its Training and Doctrine Command have operated an increasing number of regional repair activities at active Army installations. Additionally, the Army National Guard operates regional repair activities at state-owned National Guard sites. Collectively, these repair activities are categorized as integrated sustainment maintenance (ISM) facilities.
Sustainment maintenance includes repair work on Army equipment above the direct support level, including general support and depot-level support tasks. Accordingly, Army headquarters has allowed some ISM sites to perform depot-level workloads under special repair authorities. ISM repair sites are staffed by a mixture of military and civilian federal employees, state employees, and contractors. AMC officials stated that ISM repair sites can perform depot-level work to save transportation costs, expand employee skills and capabilities, and shorten repair cycle times. We noted that many of the items requiring depot repair are being shipped to other bases’ ISM repair sites, under a center of excellence program that is designed to assign work to the most cost effective repair source. We did work at Army ISM facilities located at Fort Campbell, Kentucky, and Fort Hood, Texas, and an Aviation Classification Repair Activity Depot operated by the Connecticut National Guard. We noted that each facility was performing depot-level work that was similar, and sometimes identical, to work currently being conducted at the Corpus Christi Army Depot. For example, each repair site operated environmentally-approved painting facilities large enough to strip and repaint an entire helicopter—a task also being conducted at the Corpus Christi depot. Further, the National Guard facility was refurbishing Blackhawk helicopters—a task identical to work currently assigned to the Corpus Christi depot. Additionally, each facility will undergo or has recently undergone expansion and modernization. For example, the Fort Hood repair facility, which was constructed in 1994 at a reported cost of about $60 million, is scheduled for further expansion, and the National Guard facility was recently doubled in size at an estimated cost of $20 million. ISM repair sites are not working capital fund activities. 
Repair work at these sites is financed through direct appropriations to the operational units, which obligate a level of funding at the beginning of the year. Field-level personnel believe they get a better value for repair work that is performed at the unit level than at the depots and prefer to use field-level repair whenever they can. Continuing reliance on, and expanded use of, regional repair facilities for depot-level workloads could have a substantial impact on the future viability and efficiency of operations at the Army’s public sector depots. While the overall impact on the depots’ workloads has not been estimated, an AMC report shows that in fiscal year 1996, ISM and similar repair facilities received at least $51 million for depot-level tasks. AMC personnel told us they believe the actual amount of depot-level work is much higher because not all depot-level tasks and related work is reported. Further, DOD’s 1998 logistics strategic plan envisions the eventual elimination of the public depot infrastructure by expanding the use of regionalized repair activities across all levels of maintenance and contracting more workloads. Lastly, an AMC reorganization proposal suggests that the current Corpus Christi Army Depot functions could be transferred to the four National Guard Aviation Classification Repair Activity Depots. In commenting on a draft of this report, DOD stated that the Army approves Special Repair Authorities to enable regional repair facilities to conduct specific depot-level maintenance tasks for a specified number of items, after it evaluates the impact on depot workloads and core capabilities. However, our work shows that some Special Repair Authorities were granted for varying numbers of items to be repaired over prolonged time frames, creating some uncertainty over how well the long-term impact on depot workloads and core competencies may have been assessed.
Some Army officials told us that Army reviewers have historically had little incentive to recommend disapproval of proposed Special Repair Authorities since they would likely be overruled by higher headquarters. More recently, Army headquarters officials told us they have begun to reject a number of proposed Special Repair Authorities and are undertaking a study to reevaluate the Special Repair Authorities process. DOD strategic plans and policies express a preference for assigning depot-level workloads to the private sector rather than to public sector depots. Recent DOD policies and plans show that DOD expects to increasingly outsource depot maintenance activities within the existing legislative framework. For example, the DOD logistics strategic plan for fiscal years 1996 and 1997 envisions developing plans to transition to a depot-level maintenance and repair system relying substantially on private sector support, to the extent permitted under the current legislative framework. The 1998 plan states that DOD will pursue opportunities for eliminating public sector depot maintenance infrastructure through the increased use of competitive outsourcing. Further, in March 1998 we reported that, overall, DOD is moving to a greater reliance on the private sector for depot support of new weapon systems and major upgrades, reflecting a shift from past policies and practices, which generally preferred the public sector. In that regard, the Secretary of the Army has announced plans to pursue several pilot programs that would make the private sector responsible for total life-cycle logistics support, including depot-level maintenance and repairs. DOD policy also emphasizes the use of private sector contractors for modifications and conversions of weapon systems. For example, in August 1996 the Army awarded a multiyear contract for the upgrade of Apache Longbow helicopters.
While it is difficult to predict the number of depot maintenance jobs affected by this policy, the Army Audit Agency reported in June 1998 that the Apache Longbow modification, conversion, and depot maintenance workload will likely involve from 2,063 to 2,998 personnel. In June 1998, the Secretary of the Army identified two weapon systems—the Apache helicopter and the M109 combat vehicle—as potential pilot tests of prime vendor support concepts. Under this concept, private sector firms would provide total life-cycle supply and maintenance support. It is uncertain if or when these prime vendor contracts will be awarded, or what impact they would have on the future workload and staffing of Army depots. We identified several factors contributing to depot inefficiency: (1) less-than-expected productivity, (2) excess depot capacity, (3) a lack of flexibility to shift workers among different functions, and (4) the nonavailability of parts. Additionally, we have previously reported that the Army’s current repair pipeline is slow and inefficient and could be improved by implementing various private sector best practices, several of which are being considered at the Corpus Christi depot. Although DOD’s productive workyear standard for depots was 1,615 hours, for fiscal year 1997 each of the Army depots reported productivity levels below the standard (see table 2.3). Additionally, at the Corpus Christi depot, we noted that the hours required to complete depot maintenance projects exceeded the standard, which serves as the basis for payment, resulting in significant losses for that fiscal year. The most significant productivity problem at the Anniston depot appeared to be that the expected levels of work that had been programmed did not materialize, including work that was expected to transition from the Red River and Letterkenny depots as a result of BRAC decisions.
Anniston officials said they were reluctant to eliminate positions since the additional work was expected to materialize during 1998. Thus, in the short term, the workforce did not have enough work to keep it fully employed. At Corpus Christi, the inability to complete work within scheduled time frames was a problem. As previously discussed, the use of large amounts of sick and annual leave, and more holiday leave than at other depots, contributed to this problem. At the same time, we noted that this depot used premium pay in the form of overtime to a much greater extent than other Army depots. We also noted that specific projects at Corpus Christi had consumed significantly more hours than projected, resulting in financial losses and schedule delays. For example, on average, depot employees charged 22,422 direct labor hours for each Seahawk helicopter repaired, compared to the projected goal of 12,975 hours per aircraft. In commenting on a draft of this report, DOD officials stated that this situation was caused by a variety of factors, including lack of access to Navy-managed parts, lack of experience with some Navy-unique systems, and the fact that Navy helicopters were in worse physical condition than most comparable Army helicopters being inducted for overhaul work. Cumulative financial losses on the completed overhaul and repair of 29 Navy Seahawk helicopters are estimated at about $40.1 million, and total reported losses on completed Navy helicopters exceed $80 million. Recognizing these problems, the Army has implemented a process reengineering plan to reduce the average repair cycle from the current 515 days to 300 days. As previously noted, the Navy is considering shifting repair work to field teams at Navy units. Since Navy work is about 30 percent of Corpus Christi’s workload, the depot could lose 400 to 500 direct labor positions and see its estimated future operating rates increase by about $20 per hour.
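The scale of the Seahawk overrun can be restated arithmetically; this minimal sketch uses only figures reported in this chapter:

```python
# Figures reported above: actual vs. projected direct labor hours per
# Seahawk helicopter, and cumulative losses on 29 completed Navy aircraft.
actual_hours = 22_422
projected_hours = 12_975
completed_aircraft = 29
cumulative_loss = 40_100_000  # dollars

overrun_hours = actual_hours - projected_hours       # hours over goal per aircraft
overrun_pct = overrun_hours / projected_hours * 100  # roughly 73 percent over goal
loss_per_aircraft = cumulative_loss / completed_aircraft

print(f"Overrun per aircraft: {overrun_hours:,} hours ({overrun_pct:.0f}% over goal)")
print(f"Average loss per completed Seahawk: ${loss_per_aircraft:,.0f}")
```

Each completed Seahawk thus consumed roughly 9,400 more hours than the standard that served as the basis for payment, consistent with the losses the depot reported.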
Similarly, the time charged against the overhaul of the T-53 engines used on the Huey helicopter was about 52,000 direct labor hours for 60 engines, compared to the projected goal of about 23,000 hours. The Army is considering plans to contract with the private sector for the performance of this work. At this time, it is uncertain what role, if any, the depot will have in future T-53 engine repair programs. While the Army has not clearly articulated its long-range plans for its five depots, in the past it has stated that only three are needed, and more recent actions suggest that number may be even smaller. As discussed in a 1996 report, each of the five remaining depots has large amounts of underutilized production capacity, which requires substantial financial resources to support. For example, the Army recently reported that its depots have the capability to produce about 16 million hours of direct labor output, given the current plant layout and available personnel. The report also states that in fiscal year 1998 the depots will produce an estimated 11 million hours of direct labor output, meaning that 68 percent of the available plant equipment and personnel are fully utilized on a single-shift, 40-hour week. Further, the depots are capable of producing even greater amounts of work. Until recently, no attempt had been made to look at maintenance capability from a total Army perspective, including capability at the field level and in the National Guard. In commenting on a draft of this report, DOD cited several examples of efforts it is starting in order to analyze maintenance requirements from a total Army perspective. For example, the ISM concept is designed to integrate and coordinate maintenance provided by active Army units, Army reserve activities, and Army National Guard installations. In addition, DOD stated that the Army will establish a Board of Directors to manage and coordinate depot-level maintenance from a total Army perspective.
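As an arithmetic check, the single-shift utilization rate cited above follows directly from the reported hour totals (the report's figures are rounded):

```python
# Reported depot-wide capability and planned fiscal year 1998 output,
# both in direct labor hours.
capacity_hours = 16_000_000  # capability given current plant layout and personnel
planned_hours = 11_000_000   # estimated fiscal year 1998 direct labor output

utilization = planned_hours / capacity_hours
print(f"Single-shift utilization: {utilization * 100:.2f}%")  # 68.75%, cited as 68 percent
```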
Improved systems and procedures for shifting workers between different organizational units and skill areas would offer better opportunities to effectively use limited numbers of maintenance personnel. Depot officials noted that prior practices made it difficult to transfer workers between organizational units and skill areas to adjust for unanticipated work stoppages caused by changes in work priorities, parts shortages, technical problems, or temporary labor imbalances. For example, in late 1997 work was suspended on repair of the T-53 engine at Corpus Christi due to a safety of flight issue, but personnel in that shop were not reassigned to other areas whose work was behind schedule. Depot workers are trained in specific technical areas and perform work within their specific specialty code and organizational units. Agreements between the unions and the depots generally require that workers be assigned work only in their specialty areas; therefore, depot managers have limited capability to move workers to other areas. Depot managers noted that, in some cases, a worker could work in another area under the direction of a qualified specialist in the second skill area. Union officials at one depot stated that members understand the benefits of more flexible work agreements, but in the past have been reluctant to adopt them. Depot managers cited a number of ongoing efforts that should, in the future, lead to more effective use of skilled depot workers. For example, depot managers said they were encouraging their workers to take courses during their off-duty time to develop multiple skills. Further, depot officials said completion of an ongoing AWPS system enhancement project will provide an automated database reflecting the specific skills of each depot employee to facilitate identification of workers with the skills that are needed to meet short-term labor imbalances. 
Lastly, depot managers are considering changes to organizational structures to better facilitate movement of skilled workers between shops. In discussing this issue with Navy officials, we were told that when the Navy transferred civilians from the Pearl Harbor Shipyard to an intermediate activity at the same location, it implemented a program known as multi-crafting or multi-skilling, through which workers trained in a second, complementary skill area so that they were qualified to do more tasks. Workers in seven different workload combination areas were involved in the program and received training in multiple skill areas. In the rubber and plastics forming skill area, cross-trained workers received a pay raise in addition to the satisfaction of knowing they were multi-skilled and more valuable employees. Maintenance facility managers said that the added flexibility of multi-skilling allowed them to use a limited number of workers more cost-effectively and to be more responsive to emerging requirements. While we have not evaluated the extent to which the use of multi-crafting and multi-skilling has improved the efficiency of the Navy’s combined operations, in concept it is in line with best practices employed by the private sector and appears to have merit. In commenting on a draft of this report, DOD stated that the Army’s direct labor personnel can become multi-skilled with the support of the labor unions. They noted that while depot managers have the right to assign employees to specific work areas, they need to work with labor organizations to adopt more flexible work arrangements through collective bargaining or other partnering arrangements. Parts shortages have also contributed to inefficient depot operations. For example, we previously reported on the length of time it took to repair and ship parts, and an Army consultant recently reported that repair technicians spend as much as 40 percent of their time looking for required parts.
Army depots obtain parts from a variety of sources, including the Defense Logistics Agency, inventory control points operated by the military services, the private sector through local purchases, and limited depot manufacturing. Since Army procedures give higher priority in processing orders for parts to operational units and field-level repair activities, parts shortages are more likely to occur at the depot level. Further, parts shortage problems could increase as a result of a recent AMC headquarters decision to eliminate parts inventories that have been procured for future depot use. For example, Corpus Christi maintains an inventory with a reported value of about $37 million for emergent work. AMC plans to have the depots turn in the material without receiving a financial credit, a process that could cause the depots to report a financial loss equaling the inventory’s value. Officials at the Corpus Christi depot expressed concern that, without this inventory, their access to aviation parts, especially those that have long lead times to order, will deteriorate even further, as will their ability to complete their work in a timely manner. According to a Corpus Christi official, depot workers waited an average of 144 days from the time they placed requisitions with the Defense Logistics Agency until orders were received. Additionally, a large number of requisitions placed by the Corpus Christi Army Depot for parts managed by a Navy-operated inventory control point were initially rejected because the automated requisition processing system had not been modified to recognize the Army depot as a valid customer. Although depot supply support depends largely on external sources, the Corpus Christi Army Depot has taken actions to address the inefficiencies in the portions of the process it controls.
For example, a recent study by an Army consultant concluded that the material management process costs the depot an estimated $19 million per year and that a large percentage of these costs represents non-value-added time spent handling, sorting, retrieving, inspecting, testing, and transporting parts between various local storage locations. A depot official estimated that the process reengineering plan, initiated in May 1997, will reduce these administrative costs by $10 million. Some of the initiatives include reducing (1) the average time required to obtain parts from the local automated storage and retrieval system from 12 to 4 days, (2) the time required to complete local purchase actions from 121 to 35 days, and (3) the number of days to complete local credit card purchases from 49 to 10 days. Even though the Army has made progress in building an automated and more rigorous process for analyzing and documenting personnel requirements, important enhancements remain to be completed. Moreover, other severe problems—including significant fluctuations in funding, rising costs, and continued losses in the Army’s military depots—create much instability and uncertainty about the effectiveness and efficiency of future depot operations. Some reductions in the amount of work assigned to the military depots have occurred, while such work performed by private sector contractors has increased. Further, by adding to its maintenance infrastructure at Army operational units in the active and guard forces and performing depot-level and associated maintenance at those locations, the Army has been adding to the excess capacity, underutilization, and inefficiency of its depots. The extent and financial impact of this situation are unknown. However, the Army is clearly suboptimizing use of its limited support dollars, and efforts are needed to minimize the duplication and reduce excess infrastructure.
The Army needs to adopt reengineering and productivity improvement initiatives to help address critical problems in existing depot maintenance programs, processes, and facilities. We recommend that the Secretary of Defense require the Secretary of the Army to establish policy guidance to encourage AMC customers to adhere to workloading plans, to the extent practicable, once they are established and used as a basis for the development of depot maintenance rates; require reevaluation of special repair authority approvals to accomplish depot maintenance at field activities to determine the appropriateness of prior approvals, taking into consideration the total cost to the Army of underutilized capacity in Army depots; encourage depot managers to pursue worker agreements to facilitate multi-skilling or multi-crafting in industrial facilities; and direct the depot commanders to develop specific milestones and goals for improving worker productivity and reducing employee overtime rates. DOD concurred with our recommendations and described several steps being taken to address them. For example, AMC recently reemphasized the importance of realistic and stabilized workload estimates to optimize depot capacity utilization, stabilize operating rates, and support future personnel requirements determinations. DOD stated that it recently initiated “A Study of the Proliferation of Depot Maintenance Capabilities,” which will include an examination of the current approval process for Special Repair Authority requests. DOD stated its intention to work in concert with the Army and other Services to pursue efforts to eliminate excess industrial capacity through future BRAC rounds and facilities consolidation. DOD concurred with our recommendation to pursue multi-skilling or multi-crafting but stated that such arrangements require implementation by individual depot managers. We have revised our recommendation accordingly.
While DOD agreed with our recommendation for developing milestones and goals for improving the efficiency of its depot operations, including reductions in employee overtime rates, it did not specify what actions were planned. We also incorporated technical comments where appropriate. The Army plans to begin installing the new AWPS in its manufacturing arsenals in December 1998. However, it is not clear how effective the system will be at identifying the arsenals’ personnel requirements—given the uncertainty surrounding their future workload requirements. The arsenals are also confronted with larger problems and uncertainties that could diminish the effectiveness of the Army’s efforts to automate the process of determining workforce requirements, stabilize its workforce, and increase productivity. At these facilities there have been significant workload reductions as a result of defense downsizing and increased reliance on the private sector. However, commensurate reductions have not been made to arsenal facilities. The arsenals have sought to diversify to improve the usage of available capacity and reduce their overhead costs, but limitations exist on their ability to do so. The Army is considering converting its two arsenals to government-owned, contractor-operated facilities. However, key questions, such as the cost-effectiveness and efficiency of this option, remain unanswered. The Army plans to begin installing AWPS in its two weapons manufacturing arsenals in December 1998 and to complete that installation by September 1999. In June 1998, the Army began installing a prototype AWPS at one of its eight ammunition storage and surveillance facilities. Upon completion of the prototype testing, the Army plans to extend the system to the two weapons manufacturing arsenals.
Since the end of the Cold War, workloads and employment at the two remaining arsenals have declined substantially; however, operating costs have continued to escalate as fixed costs have been spread among increasingly smaller amounts of workload. Additionally, personnel reductions have not kept up with workload reductions. At Rock Island, the workload dropped a reported 36.9 percent between 1988 and 1997 while the staffing dropped 30.8 percent. At Watervliet the reported workload dropped 64 percent during the same period while staffing dropped 51.8 percent. As workloads continue to decline, the arsenals have been left with relatively fixed overhead costs, including the salary expenses for an increasing percentage of overhead employees. For example, as of fiscal year 1998, the Watervliet Arsenal reported employing 409 direct labor “revenue producers” and 473 overhead employees compared with 1,089 direct labor workers and 924 overhead employees reported 10 years ago. Table 4.1 compares the arsenals’ workloads in direct labor hours and employment levels at the end of fiscal years 1988 through 1997 and projections for fiscal year 1998. Currently, the arsenals are using only a small portion of their available manufacturing capacity in the more than 3.3 million square feet of reported industrial manufacturing space. An arsenal official estimated that as of April 1998 the Watervliet facility was utilizing about 17 percent of its total manufacturing capacity—based on a single 8-hour shift, 5-day workweek—compared with about 46 percent 5 years ago and about 100 percent 10 years ago. Similarly, as of July 1998, officials at the Rock Island Arsenal estimated the facility was utilizing about 24 percent of its total manufacturing capacity compared with about 70 percent 5 years ago and about 81 percent 10 years ago. Underutilized industrial capacity contributes to higher hourly operating rates. 
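One driver of these higher operating rates is the growing share of overhead positions; the shift at Watervliet can be computed from the headcounts reported above:

```python
def overhead_share(direct, overhead):
    """Fraction of total arsenal employment held by overhead employees."""
    return overhead / (direct + overhead)

# Watervliet Arsenal headcounts reported in the text.
share_then = overhead_share(direct=1_089, overhead=924)  # 10 years ago, about 46%
share_now = overhead_share(direct=409, overhead=473)     # fiscal year 1998, about 54%

print(f"Overhead share: {share_then:.0%} then, {share_now:.0%} now")
```

Overhead employees, once a minority of the workforce, now outnumber the direct labor "revenue producers" whose hours must absorb their cost.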
Over the last 10 years, the hourly rates charged to customers increased by about 88 percent at Watervliet and about 41 percent at Rock Island. The Arsenal Act (10 U.S.C. 4532) was enacted in 1920 and provides that the Army is to have its supplies made in U.S. factories or arsenals provided they can do so on an economical basis. The act further provides that the Secretary of the Army may abolish any arsenal considered unnecessary. The importance of the arsenals as a manufacturing source has declined over time. The declining workload noted in table 4.1 is a reflection both of defense downsizing in recent years as well as increased reliance on the private sector to meet the government’s needs. In recent years, the Army has pursued a policy of contracting out as much manufacturing work as possible to the private sector. When work was plentiful for both the arsenals and the private sector during the Cold War years, the allocation of work in accordance with the Arsenal Act was not an issue. However, the overall decline in defense requirements since the end of the Cold War has substantially reduced the amount of work needed. When making decisions based on the Arsenal Act, the Army compares public and private sector manufacturing costs to determine whether supplies can be economically obtained from government-owned facilities—a process referred to as “make or buy”. The comparison is based on the arsenals’ marginal or additional out-of-pocket costs associated with assuming additional work. However, the arsenals report little use of the “make or buy” process. For example, Watervliet reported that it has not participated in a “make or buy” decision since 1989 and has not received any new work through the Arsenal Act since at least then. Rock Island officials could identify only one item for which it received new work through the Arsenal Act in recent years. Officials at both arsenals said they do not expect to receive any future work as a result of “make or buy” analyses. 
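The "make or buy" comparison described above reduces to a simple decision rule; this sketch is illustrative only, and the costs shown are hypothetical rather than figures from the report:

```python
def make_or_buy(arsenal_marginal_cost, private_sector_price):
    """Apply the Arsenal Act comparison: produce in-house ('make') when the
    arsenal's marginal, out-of-pocket cost of taking on the additional work
    does not exceed the private sector price; otherwise contract out ('buy')."""
    return "make" if arsenal_marginal_cost <= private_sector_price else "buy"

# Hypothetical unit costs, for illustration only.
print(make_or_buy(arsenal_marginal_cost=1_200, private_sector_price=1_500))  # make
print(make_or_buy(arsenal_marginal_cost=1_800, private_sector_price=1_500))  # buy
```

Because the comparison uses only marginal costs, fixed overhead at an underutilized arsenal is excluded; even so, the arsenals report that the process has generated almost no new work.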
As their workloads have declined, the arsenals have become less efficient, because each remaining direct labor job must absorb a greater portion of the arsenals’ fixed costs. As noted earlier, rates charged to customers have increased significantly in recent years at both arsenals. Some efforts have been made to diversify into other manufacturing areas to better use excess capacity and reduce costs, but limitations exist. AMC headquarters has proposed converting the two arsenals to GOCO facilities. However, key questions—such as how much of this type of capacity is needed, and the cost-effectiveness of the various alternatives—remain unanswered. Unlike maintenance depots, where workload is largely centrally allocated by Army headquarters, arsenal managers market their capabilities to identify potential military customers and workloads. Similar to private sector business, arsenal managers recover operating expenses through sales of products that produce revenues. However, as their volume of work declines, the arsenals must either reduce costs or increase prices to customers. If prices are increased, customers may go elsewhere to satisfy their needs, further exacerbating the declining workload problem. Recent proposals by the Watervliet Arsenal to balance workload and staffing were disapproved by Army headquarters in anticipation of new workloads. However, Watervliet officials stated that, as of October 1998, no new work had materialized and none was expected. This lack of new work could result in greater losses than planned at that facility. Each year arsenal personnel estimate the amount of work they expect to receive and then use this information as a basis for projecting personnel requirements. The expected workload is divided into various categories based on the estimated probability of workload actually materializing. Work that is already funded is categorized as 100 percent certain. 
Unfunded work is categorized based on its assessed probability of becoming firm. Watervliet, for example, uses three probability categories for unfunded workloads: 90, 60, and 30 percent. Staffing is then matched to the workload probability. Staffing needs for fully funded work and work with a 90-percent probability are allocated at 100 percent of the direct labor hour requirements. Staffing requirements for the remaining work are allocated in accordance with the workload probabilities. In October 1997, AMC headquarters gave Watervliet approval to eliminate 98 positions by the end of fiscal year 1998. Also, on the basis of an expected decline in workload in fiscal year 1998, AMC headquarters gave the Rock Island Arsenal approval in May 1998 to eliminate 237 positions, for a total arsenal workforce reduction of 335 positions. Employees who voluntarily retire or resign will receive incentive payments, based on a varying scale with a maximum payment of $25,000. These incentives were intended to reduce the number of employees facing involuntary separations. By the end of September 1998, 54 Watervliet and 146 Rock Island employees had accepted incentive offers. As an additional incentive to encourage voluntary separations, the arsenals, in August 1998, received authority to offer early retirements to eligible employees. Both arsenals have tried to develop new areas of work because their traditional weapon-making roles no longer provide enough work to allow them to operate efficiently. For a number of years, Rock Island has been fabricating and assembling tool kits, maintenance trucks, and portable maintenance sheds for the Army, other military services, and civilian agencies. Rock Island personnel involved in this work made up about 22 percent of the arsenal’s total employment in fiscal year 1998.
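The workload-probability staffing rules described earlier (funded and 90-percent work staffed in full; 60- and 30-percent work staffed in proportion to its probability) can be expressed as a single formula; the hour figures below are hypothetical, for illustration only:

```python
def staffing_hours(funded, p90, p60, p30):
    """Direct labor hours to staff under Watervliet's probability categories:
    funded and 90-percent-probability work at 100 percent; lower-probability
    work in proportion to its probability of becoming firm."""
    return funded + p90 + 0.60 * p60 + 0.30 * p30

# Hypothetical workload estimates in direct labor hours, for illustration only.
hours = staffing_hours(funded=500_000, p90=100_000, p60=50_000, p30=40_000)
print(f"Staff to support {hours:,.0f} direct labor hours")  # 642,000
```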
Watervliet has tried to branch out into making propulsion shafts for Navy ships and has done contract work for private industry, making such things as ventilator housings and other metal fabrication items. The Rock Island facility is still selling exclusively to government customers. 10 U.S.C. 4543 provides that the arsenals cannot sell items to commercial firms unless a determination is made that the requirement cannot be satisfied from a commercial source located in the United States. However, section 141 of the 1998 Defense Authorization Act provides for a pilot program enabling industrial facilities, including arsenals, during fiscal years 1998 and 1999, to sell articles to private sector firms that are ultimately incorporated into a weapon system being procured by DOD without first determining that the manufactured items are not available from commercial U.S. sources. As part of the Army’s plan to reduce personnel positions under the Quadrennial Defense Review, the Army plans to study the cost benefits of converting the arsenals to GOCO facilities. AMC plans to initiate commercial activity studies for converting arsenal operations in fiscal year 1999. These studies will be conducted under the guidelines specified by OMB Circular A-76. According to an AMC official, the Army has determined that the government should retain ownership of the arsenals; however, operational responsibility could be assigned to a private sector contractor. As a first step in the process, the arsenals are to develop proposed staff structures documenting the government’s most efficient operating strategy, and commercial offerors will be asked to submit proposals for operating the government-owned facility.
According to an AMC official, if the source-selection panel determines that a private sector offeror would provide the most cost-effective solution, nearly all remaining government employees at the arsenals would be terminated by 2002. If recent workload declines and the consequent workforce reductions at the Rock Island and Watervliet arsenals continue, the long-term viability of these facilities is uncertain. Arsenal workloads have declined to the point that, even with significant personnel losses, their capabilities are significantly underutilized and greatly inefficient. An important part of the future decision-making process will be analyzing the cost efficiency of government-owned and -operated facilities compared to that of GOCO facilities. If retention of a government-owned and -operated facility is found to be the most cost-effective option, then decisions will be needed to adjust capacity to better match projected future workload requirements. We recommend that the Secretary of Defense require the Secretary of the Army to (1) assess the potential for improving capacity utilization and reducing excess arsenal capacity and (2) evaluate options for reducing costs and improving the productivity of the remaining arsenal capacity. DOD concurred with each of our recommendations. It agreed that the Watervliet and Rock Island Arsenals currently support considerable amounts of excess manufacturing capability and stated that both facilities are included in current AMC plans to conduct a complete installation A-76 review to identify the most cost-effective option for future operations, including an evaluation of options for reducing costs and improving productivity. Taken together, the issues discussed in this report convey a broader and more complex message about the effect of unresolved problems on the future of industrial operations currently performed within the Army.
These unresolved problems also affect the cost-effectiveness of support programs for current and future weapon systems. The problems include the need to (1) clearly identify the workload requirements if capabilities are to be maintained in-house, (2) consolidate and reengineer functions and activities to enhance productivity and operating efficiencies, and (3) reduce excess capacity. Resolution of these problems requires that they be considered within the legislative framework pertaining to industrial operations. We have previously cited the need for improved strategic planning to deal with logistics operations and infrastructure issues, such as those affecting the Army’s industrial facilities. The Army faces difficult challenges in deciding what, if any, depot-level maintenance and weapons manufacturing workloads need to be retained in-house to support national security requirements. The 1998 DOD Logistics Strategic Plan states that, in the future, DOD will advocate the repeal of legislative restrictions on outsourcing depot maintenance functions by developing a new policy to obtain the best value for DOD’s depot maintenance funds while still satisfying core capability requirements. Until DOD and the Congress agree on a future course of action, it will be difficult to plan effectively for dealing with other issues and problems facing DOD and the Army’s maintenance programs and systems. If the decision is made to retain certain amounts of in-house depot and arsenal capabilities, it will be important to look at overall maintenance infrastructure, including below-depot as well as depot-level maintenance requirements in active as well as reserve forces, to ensure that the minimum level is retained that meets overall military requirements. Consolidation of existing activities, to the extent practicable, within the constraints of operational requirements, will be essential for developing a more efficient and cost-effective support operation.
Further, improvement initiatives to address long-standing productivity issues are key to providing required maintenance capability for the least cost. Finally, the elimination of excess capacity—both in the public and the private sectors—is another critical area that, if not addressed, will continue to adversely affect the cost of Army programs and systems. A number of statutes govern the operations of Army depots and arsenals. For example: 10 U.S.C. 2464 provides for a DOD-maintained core logistics capability that is to be government-owned and government-operated and that is sufficient to ensure the technical competence and resources necessary for an effective and timely response to a mobilization or other national emergency; 10 U.S.C. 2466 prohibits the use of more than 50 percent of funds made available in a fiscal year for depot-level maintenance and repair work to contract for the performance of the work by nonfederal personnel (the definition of depot-level maintenance and repair is set forth in 10 U.S.C. 2460); 10 U.S.C. 2469 provides that DOD-performed depot-level maintenance and repair workloads valued at $3 million or more cannot be changed to contractor performance without the use of competitive procedures for competitions among public and private sector sources; 10 U.S.C. 2470 provides that depot-level activities are eligible to compete for depot-level maintenance and repair workloads; and 10 U.S.C. 4532 requires that the Army have its supplies made in factories and arsenals of the United States, provided that they can produce the supplies on an economic basis. DOD has stated that its depot maintenance initiatives would continue to operate within the framework of existing legislation. On the other hand, it has, in the past, sought repeal of these and other statutes and has stated in the DOD Logistics Strategic Plan that it will continue to pursue this option.
For several years, we have stated that DOD should develop a detailed industrial facilities plan and present it to the Congress in much the same way that it presented its force structure reductions in the Base Force Plan and Bottom-Up Review. Our observations regarding the need for a long-term plan for Army industrial facilities parallel observations we made in our February 1997 high-risk report on infrastructure. In that report, we credited DOD for having programs to identify potential infrastructure reductions in many areas. However, we noted that the Secretary of Defense and the service secretaries needed to give greater structure to these efforts by developing a more definitive facility infrastructure plan. We said the plan needed to establish milestones and time frames and identify organizations and personnel responsible for accomplishing fiscal and operational goals. Presenting the plan to the Congress would provide a basis for the Congress to oversee DOD's plan for infrastructure reductions and allow the affected parties to see what is going to happen and when. The need for such a plan is even more important given that the issue of eliminating excess capacity in the industrial facility area is likely to raise questions about DOD's ability to accomplish this objective absent authority from the Congress for additional BRAC rounds. While the Congress has not approved additional BRAC rounds mainly due to concerns about the cost and savings, timing of new rounds, and other issues, it has asked DOD to provide it with information concerning the amount of excess capacity on its military installations and information on the types of military installations that would be recommended for closure or realignment in the event of one or more additional BRAC rounds. DOD's report to the Congress on this subject provided most, but not all, of the information requested by the Congress.
While this report indicates that significant excess capacity remains in the Army’s industrial facilities, more needs to be done to fully identify the extent of excess facilities before any future BRAC round. In particular, the services must identify opportunities to share assets, consolidate workloads, and reduce excess capacity in common support functions so that up-front decisions can be made about which service(s) will be responsible for which functions. We noted that resolution of these issues would require strong, decisive leadership by the Secretary of Defense. In another 1997 report, we recommended that the Secretary of Defense require the development of a detailed implementation plan for improving the efficiency and effectiveness of DOD logistics infrastructure, including reengineering, consolidating, and outsourcing logistics activities where appropriate and reducing excess infrastructure. In response, the Secretary of Defense stated that DOD was preparing a detailed plan that addressed these issues. In November 1997, the Secretary issued the Defense Reform Initiative Report, which contained the results of the task force on defense reform established as a result of the Quadrennial Defense Review. While this report was a step in the right direction and set forth certain strategic goals and direction, it did not provide comprehensive guidance. Further, the report did not resolve long-standing questions concerning what work in the depots and arsenals is of such importance that it should be performed in-house. Sorting out this issue becomes even more complicated when one introduces the prospect of moving toward GOCO facilities, which seem to fall somewhere between a pure in-house and a total contracted-out operation. 
Also, for the depots, existing policies do not address the situation involving the proliferation of depot-like facilities at regional repair sites, within both the active and reserve components, and the impact that this proliferation has on excess capacity and increased costs to the government for its total maintenance activities and infrastructure. Uncertainties exist about the future economy and efficiency of depot and arsenal operations and the extent to which the functions they perform need to be performed by the government. In this context, recent experiences at the Army's maintenance depots and arsenals indicate that the Army is facing multiple, difficult challenges and uncertainties in determining staffing requirements, and in improving the efficiency and effectiveness of its industrial activities. Further, the Army's industrial facilities currently have significant amounts of excess capacity, and that problem is aggravated because of the proliferation of maintenance activities below the depot level that overlap with work being done in the depots. Increased use of contractor capabilities without reducing excess capacity also affects this situation. Productivity limitations suggest the need to reengineer operations retained in-house to enable Army industrial activities to operate more economically and efficiently. The Army lacks an adequate long-range plan to deal with issues such as those currently affecting its industrial facilities. Such a plan would need to be developed in consultation with the Congress and within the applicable legislative framework in an effort to reach consensus on a strategy and implementation plan. We continue to believe such an effort is needed if significant progress is to be made in addressing the complex, systemic problems discussed in this report.
We recommend that the Secretaries of Defense and the Army determine (1) the extent to which the Army's logistics and manufacturing capabilities are of such importance that they need to be retained in-house and (2) the extent to which depot maintenance work is to be done at regular depots, rather than at lower-level maintenance facilities. We recommend that the Secretary of the Army develop and issue a clear and concise statement describing a long-range plan for maximizing the efficient use of the remaining depots and arsenals. At a minimum, the plan should include requirements and milestones for effectively downsizing the remaining depot infrastructure, as needed, and an assessment of the overall impact from competing plans and initiatives that advocate increased use of private sector firms and regional repair facilities for depot-level workloads. If a decision is made to retain in-house capabilities, we also recommend that the Secretary of the Army develop a long-term strategy, with shorter-term milestones for improving the efficiency and effectiveness of Army industrial facilities, that would, at a minimum, include those recommendations stated in chapters 2 through 4 of this report. DOD concurred with each of our recommendations and discussed actions it has completed, underway, or planned as appropriate for each recommendation. Among the key actions that DOD identified are: a study to assess the Army's overall maintenance support infrastructure to determine what functions need to be retained in-house, including its five depot-level repair activities and the recently expanded regional repair facilities; establishment of a board of directors to oversee and manage the Army's total maintenance requirements process, including the allocation of work to in-house and contractor repair facilities; and development of a 5-year strategic plan for maximizing the efficient use of remaining maintenance depots and manufacturing arsenals.
Fully implemented, these actions should lead to substantial improvements in the economy and efficiency of Army depot and arsenal operations.
Pursuant to a congressional request, GAO reviewed: (1) the Army's basis for personnel reductions planned at its depots during fiscal years 1998-1999; (2) the Army's progress in developing an automated system for making maintenance depot staffing decisions based on workload estimates; (3) factors that may impact the Army's ability to improve the cost-effectiveness of its maintenance depots' programs and operations; and (4) workload trends, staffing, and productivity issues at the Army's manufacturing arsenals. GAO noted that: (1) the Army did not have a sound basis for identifying the number of positions to be eliminated from the Corpus Christi Depot; (2) this was particularly the case in determining the number of direct labor personnel needed to support depot workload requirements; (3) Army efforts to develop an automated workload and performance system for use in its depots have proceeded to the point that required certification to Congress of the system's operational capability is expected soon; (4) however, system improvements that are under way would enhance the system's capabilities for determining indirect and overhead personnel requirements in Army depots; (5) other issues and factors affecting the Army's basis for workload forecasting or the cost-effectiveness of its depot maintenance programs and activities are: (a) an increased reliance on the use of regional repair activities and contractors for work that otherwise might be done in maintenance depots; (b) declining productivity; (c) difficulties in effectively using depot personnel; and (d) nonavailability of repair parts; (6) use of the arsenals has declined significantly over the years as the private sector has assumed an increasingly larger share of their work; (7) according to Army officials, as of mid-1998, the Army's two weapons manufacturing arsenals used less than 24 percent of their industrial capacity, compared to more than 80 percent 10 years ago; and (8) the Army's depots and arsenals face
multiple challenges and uncertainties, and the Army has inadequate long-range plans to guide its actions regarding its industrial infrastructure.
The Results Act is the centerpiece of a statutory framework provided by recent legislation to bring needed improvements to federal agencies' management activities. (Other parts of the framework include the 1990 Chief Financial Officers Act, the 1995 Paperwork Reduction Act, and the 1996 Clinger-Cohen Act.) Under the Results Act, every major federal agency must now ask itself some basic questions: What is our mission? What are our goals and how will we achieve them? How can we measure our performance? How will we use that information to make improvements? The act forces federal agencies to shift their focus away from such traditional concerns as staffing and activity levels and toward the results of those activities. VBA's annual performance plan is included in VBA's business plan, which is also included in VA's fiscal year 1999 budget submission. In previous testimony before this Subcommittee, we noted that VBA's planning process has been evolving. VBA first developed a strategic plan in December 1994, which covered fiscal years 1996 through 2001. The plan laid out VBA's mission, strategic vision, and goals. For example, the vocational rehabilitation and counseling (VR) goal was to enable veterans with service-connected disabilities to become employable and to obtain and maintain suitable employment. In addition, a program goal was to treat beneficiaries in a courteous, responsive, and timely manner. However, as VA's Inspector General noted, VBA's plan did not include specific program objectives and performance measures that could be used to measure VBA's progress in achieving its goals. In fiscal year 1995, VBA established a new Results Act strategic planning process that included business process reengineering (BPR).
VBA began developing five “business-line” plans that corresponded with its major program areas: compensation and pension, educational assistance, loan guaranty, vocational rehabilitation and counseling, and insurance. Each business-line plan supplemented the overall VBA strategic plan—which VBA refers to as its business plan—by specifying program goals that are tied to VBA's overall goals. Also, each business-line plan identified performance measures that VBA intended to use to track its progress in meeting each plan's goals. In VBA's fiscal year 1998 budget submission, VBA set forth its business goals and measures, most of which were focused on the process of providing benefits and services, such as timeliness and accuracy in processing benefit claims. As with last year's business plan, VBA's fiscal year 1999 business plan continues to focus primarily on process-oriented goals and performance measures. VBA is, however, developing more results-oriented goals and measures for its five benefit programs. VBA officials consider this initial effort, which VBA hopes to complete by this summer, to be an interim step; final results-oriented goals and measures will be developed following program evaluations and other analyses, which VBA plans to conduct over the next 3 to 5 years. To help achieve its program goals, VBA has efforts under way to coordinate with other agencies that support veterans' benefit programs; these efforts will need to be sustained to ensure quality service to veterans. VBA also faces significant challenges in setting clear strategies for achieving the goals it has established and in measuring program performance. For example, VBA considers its BPR efforts to be essential to the success of key performance goals, such as reducing the number of days it takes VBA to process a veteran's disability compensation claim.
VBA is, however, in the process of reexamining BPR implementation; at this point, it is unclear exactly how VBA expects reengineered processes to improve claims processing timeliness. VBA is also in the process of identifying and developing key data it needs to measure its progress in achieving specific goals. At the same time, VBA recognizes, and is working to correct, data accuracy and reliability problems with its existing management reporting systems. In its fiscal year 1999 business plan, VBA has realigned its goals and measures to better link with VA’s departmentwide strategic and performance plans. In keeping with the overall structure of VA’s strategic and performance plans, each business-line plan has been organized into two sections. The first section—entitled “Honor, Care, and Compensate Veterans in Recognition of Their Sacrifices for America”—is intended to incorporate VBA’s results-oriented goals in support of VA’s efforts to do just that. The second section, entitled “Management Strategies,” incorporates goals related to customer satisfaction, timeliness, accuracy, costs, and employee development and satisfaction. This structure more clearly highlights the need to focus on program results as well as on process-oriented goals. satisfaction with VBA’s efforts. VBA has also made some progress in developing results-oriented goals and measures for two of its five programs—VR and housing. In our assessments of VA’s strategic planning efforts, we determined that perhaps the most significant challenge for VA is to develop results-oriented goals for its major programs, particularly for benefit programs. As VBA notes in its business plan, the objective of the VR program is to increase the number of disabled veterans who acquire and maintain suitable employment and are considered to be rehabilitated. 
To measure the effectiveness of vocational rehabilitation program efforts to help veterans find and maintain suitable jobs, VBA has developed an “outcome success rate,” which it defines as the percentage of veterans who have terminated their program and who have met accepted criteria for program success. One major goal of VBA's loan guaranty—or housing—program is to improve the abilities of veterans to obtain financing for purchasing a home. The outcome measure VBA established for this goal is the percentage of veterans who say they would not have been able to purchase any home, or would have had to purchase a less expensive home, without a VA-guaranteed loan. While the results-oriented goals and measures VBA has developed to date are a positive first step, they do not allow VBA to fully assess these programs' results. The VR outcome success rate, for example, focuses only on those veterans who have left the program, rather than on all applicants who are eligible for program services. This success rate also does not consider how long it takes program participants to complete the program. In addition, by relying on self-reported data from beneficiaries, the housing outcome measure does not provide objective, verifiable information on the extent to which veterans are able to obtain housing as a result of VBA's housing program. In contrast, VBA's goals and measures for its education program focus on the extent to which veterans are using their earned education benefit, rather than on program results. One of the purposes of this program is to extend the benefits of a higher education to qualifying men and women who might not otherwise be able to afford such an education. A results-oriented goal would focus on issues such as whether the program indeed provided the education that the veteran could not otherwise have obtained. One measure VBA could use to assess its progress in achieving this goal would be the extent to which veterans have obtained a college degree or otherwise completed their education.
In the past, VA has cited the lack of formal program evaluations as a reason for not providing results-oriented goals for many of its programs. Evaluations can be an important source of information for helping the Congress and others ensure that agency goals are valid and reasonable, providing baselines for agencies to use in developing performance goals and measures, and identifying factors likely to affect agency performance. VBA officials told us they now plan to develop results-oriented goals and measures for VBA's three other programs—disability compensation and pensions, education benefits, and insurance coverage—by this summer. They consider these goals and measures—as well as those already developed for the VR and housing programs—to be interim, with final goals and measures to be developed following the completion of evaluations and analyses, which they plan to conduct over the next 3 to 5 years. In focusing on program results, VBA will need to tackle difficult questions in consultation with the Congress. For example, the purpose of the disability compensation program is to compensate veterans for the average loss in earning capacity in civilian occupations that results from injuries or conditions incurred or aggravated during military service. Given this program purpose, results-oriented goals would focus on issues such as whether disabled veterans are indeed being compensated for average loss in earning capacity and whether VBA is providing compensation to all those who should be compensated. However, we have reported that the disability rating schedule, which has served as a basis for distributing compensation among disabled veterans since 1945, does not reflect the many effects that changes in medical and socioeconomic conditions may have had on veterans' earning capacity over the last 53 years. Thus, the ratings may not accurately reflect the levels of economic loss that veterans currently experience as a result of their disabilities.
Issues such as whether veterans are being compensated to an extent commensurate with their economic losses are particularly sensitive, according to VBA officials, and for that reason, they plan to consult with key stakeholders—including the Congress and veterans’ service organizations—over the next few months about the interim goals and measures VBA is developing. This will continue the consultative process, which VA officials, including those from VBA, began last year as part of VA’s efforts to develop a departmentwide strategic plan. As VBA develops more results-oriented goals and measures, it also needs to ensure that it is coordinating efforts with other parts of VA as well as federal and state agencies that support veterans’ benefits programs. For example, our work has shown that state vocational rehabilitation agencies, the Department of Labor, and private employment agencies also help veterans find employment once they have acquired all of the skills to become employable; VA has contracted for quality reviews of higher education and training institutions that have already been reviewed by the Department of Education; VBA relies on the Department of Defense for information about veterans’ military service, including their medical conditions, to help determine eligibility for disability compensation, vocational rehabilitation, and educational assistance programs; and in determining the eligibility of a veteran for disability compensation, VBA usually requires the veteran to undergo a medical examination, which is generally performed by a VHA physician. letter outlining their benefits and the requirements for maintaining their eligibility. VBA also is working with VHA to improve the quality of the disability exams VHA physicians conduct; the lack of adequate exams has been the primary reason why appealed disability decisions are remanded to VBA. 
VBA will need to continue to coordinate with the organizations that are critical to veterans’ benefits programs to ensure overall high-quality service to veterans. In addition to requiring an agency to identify performance goals and measures, the Results Act also requires that an agency highlight in its annual performance plan the strategies needed to achieve its performance goals. Without a clear description of the strategies an agency plans to use, it will be difficult to assess the likelihood of the agency’s success in achieving its intended results. A clear strategy would identify specific actions, including implementation schedules, that the agency was taking or planned to take and how these actions would achieve intended results. VBA is in the early stages of developing clear and specific strategies. While it has identified numerous functions and activities as its strategies, VBA has not clearly demonstrated how these efforts will lead to intended results. For example, in its current business plan, VBA consistently refers to BPR as the key to achieving its performance goals. VBA states that with the implementation of BPR, it will reduce the time it takes to complete an original claim for compensation to an average of 53 days from the current estimate of 106 days. However, VBA does not describe the specific actions needed, set a timetable for implementing needed changes, or show a clear link between BPR initiatives and reduced processing times. According to VBA officials, efforts to implement BPR are still under way and are now being reassessed. A major challenge VBA faces in developing clear and specific strategies for achieving performance goals will be effectively using BPR to identify what actions are needed to achieve performance goals and explain how these actions will lead to the intended results. 
Under the Results Act, agencies are expected to use the performance and cost data they collect to continuously improve their operations, identify gaps between their performance and their performance goals, and develop plans for closing performance gaps. However, in developing its performance measures, VBA has identified numerous data gaps and problems that, if not addressed, will hinder VBA and others' ability to assess VBA's performance and determine the extent to which it is achieving its stated goals. For example, one goal is to ensure that VBA is providing the best value for the taxpayers' dollar; however, VBA currently is unable to calculate the full cost of providing benefits and services to veterans. VBA's ability to develop complete cost information for its program activities hinges on the successful implementation of its new cost accounting system, Activity Based Costing, currently under development. In addition, VBA plans to measure and assess veterans' satisfaction with the programs and services VBA provides. The data VBA needs to make this assessment, however, will not be available until VBA implements planned customer satisfaction surveys for two of its five programs—VR and educational assistance. In addition, VBA's recently appointed Under Secretary for Benefits has raised concerns about the accuracy of data contained in VBA's existing management reporting systems. Moreover, completed and ongoing IG audits have identified data system internal control weaknesses and data integrity problems, which if not corrected will undermine VBA's ability to reliably measure its performance. In its fiscal year 1996 audit of VA's financial statements, for example, the Inspector General reported that the accounting system supporting the housing program does not efficiently and reliably accumulate financial information.
The Inspector General believes the system’s deficiencies have the potential to adversely affect VBA’s ability to accurately and completely produce reliable financial information and to effectively audit system data. Also, an ongoing IG audit appears to have identified data integrity problems with certain performance data, according to VBA officials. Specifically, in assessing whether key claims processing timeliness data are valid, reliable, and accurate, IG auditors found instances where VBA regional office staff were manipulating data to make their performance appear better than it in fact was. VBA officials told us they are in the process of assessing the data system’s vulnerabilities so they can take steps to correct the problems identified. Mr. Chairman, this completes my testimony this morning. I would be pleased to respond to any questions you or Members of the Subcommittee may have. Agencies’ Annual Performance Plans Under the Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking (GAO/GGD/AIMD-10.1.18, Feb. 1998). Vocational Rehabilitation: Opportunities to Improve Program Effectiveness (GAO/T-HEHS-98-87, Feb. 4, 1998). Managing for Results: Agencies’ Annual Performance Plans Can Help Address Strategic Planning Challenges (GAO/GGD-98-44, Jan. 30, 1998). The Results Act: Observations on VA’s August 1997 Draft Strategic Plan (GAO/T-HEHS-97-215, Sept. 18, 1997). The Results Act: Observations on VA’s June 1997 Draft Strategic Plan (GAO/HEHS-97-174R, July 11, 1997). Veterans Benefits Administration: Focusing on Results in Vocational Rehabilitation and Education Programs (GAO/T-HEHS-97-148, June 5, 1997). The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997). Veterans’ Affairs: Veterans Benefits Administration’s Progress and Challenges in Implementing GPRA (GAO/T-HEHS-97-131, May 14, 1997). 
Veterans' Employment and Training Service: Focusing on Program Results to Improve Agency Performance (GAO/T-HEHS-97-129, May 7, 1997). Agencies' Strategic Plans Under GPRA: Key Questions to Facilitate Congressional Review (GAO/GGD-10.1.16, ver. 1, May 1997). Managing for Results: Using GPRA to Assist Congressional and Executive Branch Decisionmaking (GAO/T-GGD-97-43, Feb. 12, 1997). VA Disability Compensation: Disability Ratings May Not Reflect Veterans' Economic Losses (GAO/HEHS-97-9, Jan. 7, 1997).
GAO discussed the Veterans Benefits Administration's (VBA) implementation of the Government Performance and Results Act of 1993. GAO noted that: (1) VBA continues to make progress in setting goals and measuring its programs' performance but faces significant challenges in its efforts to successfully implement the Results Act; (2) VBA has efforts under way to address these challenges, which if continued will help ensure success; (3) for example, VBA is in the process of developing results-oriented goals and measures for each of its programs in response to concerns that GAO and others have raised; (4) developing more results-oriented goals and measures will require VBA to address difficult and sensitive questions regarding specific benefit programs, such as whether disabled veterans are being compensated appropriately under the existing disability program structure; (5) to address these questions, VBA is continuing its consultations with Congress, begun last year in conjunction with the Department of Veterans Affairs (VA) strategic planning efforts; (6) VBA also has efforts under way to coordinate with agencies that support veterans' benefits programs, such as the Department of Defense, in achieving specific goals; (7) to successfully implement the Results Act, VBA must also develop effective strategies for achieving its performance goals and ensure that it has accurate, reliable data to measure its progress in achieving these goals; (8) VBA is in the early stages of developing clear and specific strategies but has not yet clearly demonstrated how these strategies will help it achieve the intended results; (9) moreover, VBA does not yet have the data needed to effectively measure its performance in several key areas; (10) for example, one goal is to ensure that VBA is providing the best value for the taxpayer dollar; however, VBA currently is unable to calculate the full cost of providing benefits and services to veterans; (11) in addition, VBA officials and VA's
Inspector General (IG) have raised concerns about the accuracy of data VBA is currently collecting; (12) for example, completed and ongoing IG audits have identified data integrity problems with VBA's claims processing timeliness data; and (13) VBA is currently determining how best to address these concerns.
Created in 1961 to counter hijackers, the organization that is now FAMS was expanded in response to the September 11, 2001, terrorist attacks. On September 11, 2001, 33 air marshals were operating on U.S. flights. In accordance with the Aviation and Transportation Security Act (ATSA), enacted in November 2001, TSA is authorized to deploy federal air marshals on every passenger flight of a U.S. air carrier and is required to deploy federal air marshals on every flight determined by the Secretary of Homeland Security to present high security risks—with nonstop, long distance flights, such as those targeted on September 11, 2001, considered a priority. Since the enactment of ATSA, FAMS staff grew significantly and, as of July 2016, FAMS employed thousands of air marshals. FAMS received an increase in appropriations each fiscal year from 2002 through 2012—peaking at an appropriation of approximately $966 million in fiscal year 2012. However, since 2012, FAMS has experienced a reduction in amounts appropriated. Specifically, FAMS received appropriations amounting to approximately $908 million in fiscal year 2013, $819 million in fiscal year 2014, and $790 million in fiscal year 2015. Of these appropriations, TSA expenditures for FAMS training were about $1.7 million, $4.4 million, $6 million, and $4.8 million in fiscal years 2012, 2013, 2014 and 2015, respectively. According to FAMS officials, due in part to reductions in its appropriations, FAMS hired no new air marshals during fiscal years 2012 through 2015. However, FAMS received appropriations amounting to $805 million for fiscal year 2016 (an increase of about $15 million over fiscal year 2015) and hired new air marshals in fiscal year 2016. FAMS and TSA's OTD share responsibility for providing training to federal air marshals.
OTD is primarily responsible for designing, developing, and evaluating all the training courses that air marshals receive. In addition, OTD delivers the training programs that are offered at TSATC and oversees training instructors assigned there. These training programs include, among others, FAMTP, discussed later in this report, and the field office training instructor program that is taught at TSATC. FAMS, in collaboration with OTD, develops training requirements for air marshal candidates and incumbent air marshals, serves as a subject matter expert to OTD in developing and evaluating new or proposed training courses, and operates and oversees the FAMS recurrent training program, which is taught by training instructors within each FAMS field office. To ensure that air marshals are fully trained and can effectively carry out FAMS's mission, TSA established FAMTP. Air marshal candidates are required to successfully complete 16 and one-half weeks of training. After an initial one-week orientation at TSATC, air marshal candidates complete FAMTP in two phases. FAMTP-I is a seven-week course in which new hires learn basic law enforcement skills at the Federal Law Enforcement Training Center in Artesia, New Mexico. On completing FAMTP-I, FAMS candidates complete FAMTP-II—an eight-and-one-half-week course at TSATC that is intended to teach air marshal candidates the knowledge, skills, and abilities necessary to prepare them for their roles as federal air marshals. Once air marshal candidates graduate from FAMTP-II, they report for duty at their assigned field office. As incumbent air marshals, they are required to complete 160 hours of recurrent training courses annually. FAMTP-II courses serve as the core of the recurrent training courses and incumbent air marshals receive these courses from training instructors in training facilities in or near their respective field offices.
These recurrent training courses are intended to ensure air marshals maintain and enhance perishable tactical skills that are deemed critical to the success of FAMS's mission. FAMS recurrent training includes both mandatory refresher courses that all air marshals must complete every year as well as a broad set of courses within several disciplines, as shown in figure 1, that field offices must ensure are incorporated into their annual or quarterly training plans. The mandatory courses include use of force, off-range safety, fire extinguisher use, and baton use. The remainder of air marshals' annual recurrent training hours must include courses within each of the FAMS training disciplines, such as defensive measures, firearms, mission tactics, and physical fitness. FAMS also requires air marshals to pass quarterly firearms qualifications, complete biannual fitness assessments, and pass periodic medical exams, which are discussed later in this report. In October 2010, DHS issued its Learning Evaluation Guide to help the department's learning and development community evaluate the effectiveness of its training activities. Among other things, the guidance identifies the Kirkpatrick model—a commonly accepted training evaluation model that is endorsed by the Office of Personnel Management in its training evaluation guidance—as a best practice. This model is commonly used in the federal government. The Kirkpatrick model consists of a four-level approach for soliciting feedback from training course participants and evaluating the impact the training had on individual development, among other things. The following is a description of what each level within the Kirkpatrick model is to accomplish: Level 1: The first level measures the training participants' reaction to, and satisfaction with, the training program. A level 1 evaluation could take the form of a course survey that a participant fills out immediately after completing the training.
Level 2: The second level measures the extent to which learning has occurred because of the training effort. A level 2 evaluation could take the form of a written exam that a participant takes during the course. Level 3: The third level measures how training affects changes in behavior on the job. Such an evaluation could take the form of a survey sent to participants several months after they have completed the training to follow up on the impact of the training on the job. Level 4: The fourth level measures the impact of the training program on the agency’s mission or organizational results. Such an evaluation could take the form of comparing operational data before and after a training modification was made. TSA’s primary method for assessing air marshals’ training needs is by holding Curriculum Development Conferences (CDC) and Curriculum Review Conferences (CRC). Specifically, OTD holds CDCs to determine whether to approve proposals to develop new training courses, and convenes CRCs to evaluate the effectiveness of existing FAMTP courses and, if appropriate, to make recommendations to address any identified shortcomings. These conferences are composed of OTD officials responsible for developing and implementing FAMTP courses and relevant subject matter experts, such as training instructors, field office Supervisory Air Marshals-in-Charge (SACs), SFAMs, and air marshals. According to OTD guidance and consistent with Federal Law Enforcement Training Accreditation Board standards, CDCs are to be held prior to the development of new training programs and CRCs are to be held no less than every three years or sooner if directed by FAMS management. CDCs can also be held in response to directives from OTD or FAMS management and to requests for additional training from FAMS personnel. 
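The four levels described above can be sketched as a simple lookup that pairs each Kirkpatrick level with the kind of evaluation instrument it calls for. This is an illustrative sketch only; the function and instrument names are hypothetical, not TSA's actual evaluation tooling.

```python
# Minimal sketch of the Kirkpatrick four-level training-evaluation model.
# Level names follow the model; the example instruments are illustrative
# only, not TSA's or OTD's actual evaluation tools.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "post-course satisfaction survey"),
    2: ("Learning", "written exam or job simulation during the course"),
    3: ("Behavior", "follow-up survey months after the training"),
    4: ("Results", "before/after comparison of operational data"),
}

def describe_level(level: int) -> str:
    """Return a one-line description of a Kirkpatrick level."""
    name, instrument = KIRKPATRICK_LEVELS[level]
    return f"Level {level} ({name}): e.g., {instrument}"

for lvl in sorted(KIRKPATRICK_LEVELS):
    print(describe_level(lvl))
```

As the report notes, OTD's surveys of candidates and graduates map to levels 1 and 3, while FAMTP examinations and simulations map to level 2.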
As part of the CDCs and CRCs, OTD conducts assessments to determine the extent to which existing FAMTP courses are current with existing or planned FAMS policy, procedures, and new equipment or technology, and address both the known threat environment and air marshals' training needs. When doing so, OTD considers various sources of information including, among others, its job task analysis; training-related concerns raised by field office focus groups; feedback from air marshal candidates, training instructors and other subject matter experts; and intelligence. According to OTD guidance, this information is primarily to be gathered as described below. TSATC Training Evaluation Surveys: OTD's student critique review program evaluates FAMTP training courses delivered at TSATC consistent with Kirkpatrick levels 1 and 3. Under this program, OTD solicits and reviews feedback from air marshal candidates on the quality of the FAMTP courses that they complete at TSATC and from newly graduated air marshals on the extent that these courses effectively prepare them for their duties. Specifically, consistent with Kirkpatrick level 1, OTD requires air marshal candidates to complete a course evaluation on the effectiveness of the course and the quality of the instructor and facility immediately after completing the course. Further, consistent with Kirkpatrick level 3, TSATC surveys newly graduated air marshals 10 to 12 months after they have graduated from FAMTP-II, and their supervisors within 12 months of their graduation, to obtain their feedback on the extent that the training adequately prepared the FAMTP graduates to successfully perform their mission. In addition, the program provides OTD with feedback from air marshal candidates and newly graduated air marshals on the effectiveness of FAMTP curriculum, instructor performance, and TSATC facility or safety, or other related issues.
This feedback is used by CDCs and CRCs to identify training gaps and determine how to appropriately address them. However, as described later in this report, response rates by air marshals on these surveys have been low. TSATC Examinations and Simulations: Consistent with Kirkpatrick level 2, OTD requires air marshal candidates to pass written exams or job simulations in order to advance through FAMTP. Specifically, air marshal candidates must demonstrate that they possess the knowledge, cognitive, or physical skills that classroom courses are intended to impart by passing examinations. OTD has developed evaluation tools, such as checklists, that TSATC training instructors must use to objectively determine air marshal candidates' proficiency in law enforcement tactics and techniques such as marksmanship, defensive tactics, arrest procedures, and decision-making. OTD collects and analyzes the data on newly hired air marshals' performance to determine the extent to which air marshal candidates have mastered the learning objectives of each FAMTP course and to identify any areas in the curriculum that may need revision. For example, OTD officials stated that they may revise examination questions in response to a relatively high number of air marshal candidates failing a question or series of questions due to poor wording. Furthermore, OTD uses these data to identify and address any training needs not met by the existing FAMTP curriculum when carrying out CRCs and CDCs. In addition to using surveys and examinations when evaluating FAMTP curriculum provided at TSATC, OTD officials noted additional information sources they use when evaluating FAMTP curriculum, including field office training assessment teams and quarterly training teleconferences. Field Office Training Assessment Teams: OTD established field office assessment teams, which consist of TSATC instructors, to assess field office training programs and their instructors.
As described earlier, field office training programs primarily provide the recurrent training that incumbent air marshals are required to fulfill each year. In advance of the assessment team’s visit, TSATC sends surveys to supervisors and air marshals in the field with questions on the effectiveness of the field office’s training program, including its training instructors and facilities, as well as FAMS’s training curriculum. According to OTD officials, when conducting assessments at the field offices, team members are to observe field office trainers in class to ensure that FAMTP courses are taught uniformly across all FAMS field offices. They also are to review the field office’s training records and policies and procedures to ensure the field office’s training program is in compliance with OTD and FAMS policies, and when necessary, to make recommendations for improvement. For example, OTD officials told us that an assessment team discovered a field office whose training staff were using unapproved “dynamic fighting” tactics to teach air marshals how to fend off multiple attackers when cornered, which had resulted in many severe injuries. In this case, the assessment team halted use of the unapproved scenarios and provided approved lesson plans that taught air marshals to counter multiple attackers. OTD officials stated that training assessment team visits also provide opportunities for TSATC trainers to engage directly with field office trainers and air marshals to share new best practices and identify any unmet training needs. However, as we discuss later in this report, OTD has not sent assessment teams to evaluate field office training programs since March 2013. Quarterly Training Teleconferences: OTD holds quarterly conference calls between TSATC staff, FAMS headquarters training staff, and field office training staff to discuss service-wide training issues. 
According to OTD officials, these teleconferences provide opportunities to elicit feedback from trainers on unmet training needs and any challenges in delivering training, and to share best practices among the field offices. OTD conducts surveys to obtain feedback from air marshal candidates and newly graduated air marshals on the effectiveness of FAMTP courses they complete at TSATC and the quality of TSATC trainers and facilities, consistent with Kirkpatrick levels 1 and 3. However, OTD does not also obtain such feedback from incumbent air marshals after they complete their recurrent training courses at their respective field offices. Our previous work on federal training programs, as well as DHS's Learning Evaluation Guide, has found that implementing a balanced multi-level systematic approach to evaluate and develop training, such as the Kirkpatrick model, can provide agencies with varied data and perspectives on the effectiveness of training efforts necessary to identify problems and improve training and development programs as needed. In addition, our work has also shown that agencies should ensure that they incorporate a wide variety of stakeholder perspectives in assessing the impact of training on employee and agency performance. OTD officials stated that conducting level 1 and 3 evaluations for air marshal candidates and newly graduated air marshals has provided sufficient feedback to reliably identify all air marshals' training needs because the agency has taken steps to ensure that the content and quality of training for air marshal candidates is identical to that of recurrent training for incumbent air marshals. However, FAMS did not hire any new air marshals from fiscal years 2012 through 2015. As a result, TSA has not systematically gathered feedback on the effectiveness of FAMTP training curriculum from air marshals for approximately four years.
Over this time period, OTD has revised the training curriculum, such as adding a course on personal security when overseas and expanding the number of courses within the legal and investigative discipline to cover all transportation modes. Moreover, while the minimum skill requirements may be the same for both air marshal candidates and incumbent air marshals in the field, the training needs for both groups may not necessarily be identical. With greater experience in carrying out missions, incumbent air marshals may have a better idea of their training needs than air marshal candidates or newly graduated air marshals, which could result in more experienced incumbent air marshals providing different feedback on the quality of the training. Further, although incumbent air marshals take many of the same training courses as air marshal candidates, they do so at different facilities and with different instructors. OTD officials also stated that field office training assessments and quarterly training teleconferences provide additional opportunities to both ensure that the training all air marshals receive is standardized across the service and to obtain incumbent air marshal feedback. However, OTD has not sent assessment teams to evaluate field office training programs since March 2013 due, in part, to a lack of resources. OTD officials reported that they plan to resume field office training assessments during fiscal year 2017 and conduct assessments at 10 FAMS field offices per year if sufficient funding is available. These officials also reported that OTD plans to increase the frequency of training teleconferences between TSATC and field office training programs from a quarterly to monthly basis and invite field office leadership—SACs and Assistant Supervisory Air Marshals-in-Charge (ASACs)—to participate in these meetings. Nevertheless, our review suggests that OTD could benefit from broadening its efforts to gather feedback on recurrent training courses. 
First, field office staff we interviewed at the seven field offices we visited stated that improvements to training could better prepare them for their roles. For example, SFAMs and training staff in four of the seven field offices we visited stated that the training curriculum is overly focused on the training needs of air marshal candidates and newly graduated air marshals. Staff from five of the seven field offices also identified advanced training courses beyond those currently provided that they believed should be offered to incumbent air marshals, in areas such as firearms, defensive, or medical training. Second, field office staff at all seven field offices we visited identified training that should be revised, expanded, or added, to include topics such as active shooter response, counter surveillance and behavior detection techniques, training on improvised explosive devices and other explosives, and expanded legal and investigative training, among others. These sources also told us that the curriculum did not adequately address changes in their responsibilities over time, which include a broader set of current threats such as improvised explosive devices or FAMS-specific training on active shooters. OTD officials stated that they believed the current FAMTP curriculum adequately addresses the types of additional training that field office staff identified and that the curriculum has been designed to meet the needs of air marshals at all experience levels and may be consistently and safely delivered to the entire workforce. However, without a mechanism to systematically collect and incorporate feedback on field-based training for incumbent air marshals, consistent with Kirkpatrick level 1 and 3, OTD could miss important opportunities to identify problems and improve overall training and development.
When OTD administered surveys to obtain feedback on the FAMTP-II and field-based training, the response rates were substantially lower than the 80 percent rate OMB encourages for federal surveys that require its approval. Specifically, about 19 to 38 percent of air marshals that graduated from FAMTP-II and their supervisors responded to the surveys that TSATC administered from 2009 through 2011—the last 3 full years in which FAMS hired air marshals. Additionally, according to OTD officials, the combined response rates for the surveys that training assessment teams conducted from June 2012 through March 2013 was about 16 percent. OTD staff acknowledged that the response rates to these surveys have been consistently low, but stated that the low response rates have not significantly affected the usefulness of the surveys. According to OTD staff, with regard to the FAMTP-II surveys, they received a sufficient number of responses to successfully evaluate the extent that FAMTP courses have met all air marshals' training needs. However, OMB guidance stipulates that agencies must design surveys to achieve the highest practical rates of response to ensure that the results are representative of the target population and that they can be used with confidence as input for informed decision-making. The guidance also states that response rates are an important indicator of the potential for a bias called nonresponse bias, which could affect the accuracy of a survey's results. In general, as a survey's response rate increases, the likelihood of a bias problem decreases, and, therefore, the views and characteristics of the target population are more accurately reflected in the survey's results.
OMB guidance also describes the methods agencies can use to improve the response rate of future surveys, including conducting outreach to groups of prospective respondents, ensuring that the survey is well-designed and brief, providing alternative modes to provide responses, conducting nonresponse follow-up efforts, and extending cut-off dates for survey completion. OTD officials reported that they have taken several of these actions to improve the response rates of the FAMTP-II surveys, but have had little success in improving their response rate. Specifically, officials stated that TSATC instructors and staff discussed the surveys and their importance to improving future course offerings in class. In addition, OTD officials reported designing the survey to be as brief as possible, making it accessible via the internet and air marshals' handheld devices, sending out follow-up reminders to survey respondents via e-mail and telephone, and contacting non-respondents' field office supervisors. OTD officials told us that the low response rates may be attributable to "survey fatigue" given the high number of surveys that TSA employees are asked to complete and stated that there was little more that they could do to persuade air marshals to respond. Although OTD officials reported taking several of the actions that OMB recommends for agencies to improve survey response rates, additional actions could improve the response rate of future OTD surveys, including those administered to the air marshals FAMS hired this year. For example, monitoring future survey response rates by field office could help OTD identify and then target extra follow-up efforts to air marshals and their supervisors in field locations that have comparatively low response levels. Further, extending the cut-off date for air marshals and their supervisors to respond to the survey, or requiring survey respondents to complete the surveys, could help improve response rates to future surveys.
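The per-field-office monitoring described above can be sketched as a simple response-rate check that flags offices below the 80 percent rate OMB encourages. The office names and counts here are hypothetical, chosen only to illustrate the calculation.

```python
# Sketch: flag field offices whose survey response rate falls below the
# 80 percent OMB-encouraged target, so follow-up efforts can be targeted.
# Office names and response counts are hypothetical.
OMB_TARGET = 0.80

def response_rate(responded: int, surveyed: int) -> float:
    """Fraction of surveyed personnel who responded."""
    return responded / surveyed if surveyed else 0.0

def offices_needing_followup(results: dict) -> list:
    """Return (office, rate) pairs below the OMB target, lowest first."""
    below = [(office, response_rate(r, s))
             for office, (r, s) in results.items()
             if response_rate(r, s) < OMB_TARGET]
    return sorted(below, key=lambda item: item[1])

survey_results = {"Office A": (19, 100), "Office B": (85, 100), "Office C": (38, 100)}
for office, rate in offices_needing_followup(survey_results):
    print(f"{office}: {rate:.0%} response rate, target for follow-up")
```

A report like this would surface the 19 to 38 percent response rates cited above as well below the target, directing extra reminders to those offices.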
Until OTD achieves sufficient response rates, OTD cannot be reasonably assured that the feedback it received represents the full spectrum of views held by air marshals or their supervisors. Achieving an adequate response rate is important, particularly as FAMS’s CRC and CDC processes rely, in part, on the survey results to identify training gaps and determine how to appropriately address them. FAMS relies on its recurrent training program to help ensure incumbent air marshals’ mission readiness, but additional actions could strengthen FAMS’s ability to do so. First, FAMS does not have complete and timely data on the extent to which air marshals have fulfilled their recurrent training requirements. Second, FAMS evaluates incumbent air marshals’ proficiency in some, but not all, key skills using tools such as examinations or checklists. In addition, FAMS has established a new health, fitness, and wellness program as part of its recurrent training program—in part to address recent concerns with air marshals’ fitness and injury rates—but it is too early to gauge the program’s effectiveness. As shown in figure 2, FAMS requires air marshals to complete certain recurrent training requirements on a regular basis to ensure that air marshals maintain their proficiency in the knowledge, skills, and abilities that are needed to successfully carry out FAMS’s mission. However, FAMS does not have complete and timely data to ensure air marshals’ compliance with these training requirements. Senior OTD and FAMS officials responsible for developing and overseeing the recurrent training program, as well as field office SFAMs, training instructors, and air marshals at the field offices we visited, identified the importance of the FAMS training program to ensuring air marshals’ mission readiness. 
These personnel stated that air marshals are unique among their fellow law enforcement officers because air marshals lack regular on-the-job opportunities to actively utilize the knowledge, skills, and abilities they develop in training courses to address a key aspect of FAMS’s mission—defeating terrorist or other criminal hostile acts. Therefore, according to OTD and FAMS officials, FAMS ensures air marshals’ mission readiness by monitoring the extent to which they have completed their recurrent training requirements. According to FAMS policy, field office SACs or their designees are responsible for ensuring that air marshals assigned to them have completed their recurrent training requirements and that the completion of these requirements is recorded in FAMS’s database—Federal Air Marshal Information System (FAMIS)—no later than 5 days after an air marshal has completed a training requirement. FAMS headquarters personnel within the Field Operations Division (Field Operations) generate reports in FAMIS detailing the extent to which air marshals have passed the practical pistol course, participated in physical fitness assessments, and completed their requisite number of recurrent training hours on a quarterly and annual basis. According to Field Operations staff, these personnel contact field office SACs or their designees when these reports identify air marshals that have not met their recurrent requirements. If field office staff report that the air marshal(s) have completed a requirement(s), but have not entered this information in FAMIS, Field Operations is to request appropriate documentation and update FAMIS. Field Operations officials stated they discuss with field offices why any air marshals have not completed their training requirements, such as illnesses, injuries, or scheduling issues, and, if necessary, the field office SAC is to take appropriate action. 
In addition, FAMS policy allows for air marshals to be exempted from training requirements when certain conditions, such as illness, injury, or military leave, are met and defines the process by which exemptions are to be requested and granted. Specifically, FAMS policy states that SACs must prepare a letter to the appropriate regional director to request approval of the exemption no later than 5 days after the end of a quarter. Field Operations officials reported that a FAMS headquarters staff person records the exemption into FAMIS once a regional director has approved the request. FAMS has processes for field office SACs to monitor which air marshals have completed their required recurrent training each year, as well as those who have received exemptions from such training. However, we found that the data used to track this information were not complete or readily available for purposes of tracking air marshals’ compliance with these requirements when we requested these data in March 2015. We reviewed training data from FAMIS’s training module for calendar year 2014 to determine the extent that air marshals have met their recurrent training requirements. Although we were ultimately able to determine that almost all of the air marshals met their training requirements or received an appropriate exemption in calendar year 2014, it was difficult to do so because data on both approved exemptions and training completions were missing or had not been entered in a timely manner. We found that nearly one-third of all training exemptions granted to air marshals in calendar year 2014 had not been entered into FAMIS. Specifically, at least 299 training exemptions granted to about 2 percent of air marshals had not been entered into FAMIS when we received the data in March 2015—nearly three months after the calendar year had ended. 
Additionally, we found that nearly one-quarter of all training records for calendar year 2014 had been entered into FAMIS more than 5 days after an air marshal had completed the training. FAMS headquarters officials responsible for reconciling recurrent training service-wide stated that these exemptions were not entered into FAMIS until July 2015—seven months after the calendar year ended. These officials told us that the delay was partly because FAMS took the database offline for three weeks in September 2014 to allow for an upgrade of the system. As a result, the staff person responsible for entering exemptions had become backlogged and later entered the backlogged exemptions into the database, in part, to reconcile the missing exemptions that were identified through our analysis of the 2014 training data. Additionally, FAMS officials responsible for reconciling completion of recurrent training service-wide reported that each quarter there are a significant number of air marshals for whom field office staff have not entered training records. According to these officials, at the end of every quarter, FAMS Field Operations staff must contact staff from several field offices to remind them to review and enter missing training records—a process that officials described as labor-intensive. In December 2015, FAMS officials provided us with the updated records for the air marshals whose exemptions had been entered into FAMIS as a result of our audit work to demonstrate that the air marshals’ 2014 recurrent training data had been corrected and were complete. TSA Office of Inspection (OOI) reports have found similar problems with monitoring, or timely and accurate recording, of air marshals’ training records. 
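The 5-day recording requirement described above lends itself to a straightforward timeliness check over the training records. The record structure and dates below are hypothetical, not FAMIS's actual schema.

```python
# Sketch: flag training records entered into the database more than
# 5 days after the training was completed, per the FAMS policy that
# completions be recorded within 5 days. The record structure is a
# hypothetical illustration, not FAMIS's actual schema.
from datetime import date

ENTRY_DEADLINE_DAYS = 5

def is_late(completed: date, entered: date) -> bool:
    """True if the record was entered more than 5 days after completion."""
    return (entered - completed).days > ENTRY_DEADLINE_DAYS

def late_records(records: list) -> list:
    """Return the records that missed the 5-day entry deadline."""
    return [r for r in records if is_late(r["completed"], r["entered"])]

records = [
    {"id": 1, "completed": date(2014, 3, 1), "entered": date(2014, 3, 4)},
    {"id": 2, "completed": date(2014, 3, 1), "entered": date(2014, 3, 20)},
]
print([r["id"] for r in late_records(records)])  # -> [2]
```

Running such a check each quarter, rather than reconciling records manually at quarter's end, is one way the labor-intensive follow-up process described above could be reduced.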
Specifically, OOI inspections of FAMS's field offices completed during 2010 through 2015 found that three field offices had not accurately recorded air marshals' training data or done so in a timely manner—issues FAMS had not identified through its training monitoring process. FAMS processes for monitoring the extent that air marshals service-wide have completed their recurrent training requirements have not ensured that air marshals' training data are entered in a timely manner. These processes, as defined in FAMS policy, lack effective controls to ensure accountability. Specifically, FAMS has not specified in policy who has oversight responsibility at the headquarters level for ensuring that each field office has entered recurrent training data in a timely manner. Additionally, FAMS has not specified in policy who has oversight responsibility at the headquarters level for ensuring that headquarters personnel have entered air marshals exemptions into FAMIS within a defined timeframe. Federal Standards for Internal Control states that agencies should ensure that transactions and events are completely and accurately recorded in a timely manner, and are readily available for examination. Federal regulations require that agencies establish policies governing employee training including the assignment of responsibility to ensure the training goals are achieved. In addition, internal control standards state that in a good control environment, areas of authority and responsibility are clearly defined and appropriately documented through its policies and procedures, and appropriate lines of reporting are established.
Given the number of training records that we found were incomplete or not entered into FAMIS in a timely manner, as well as the ongoing challenges that FAMS has faced in ensuring accurate and timely input of training and exemptions data as described in the OOI findings, policies that specify who is responsible at the headquarters level for overseeing these activities could help FAMS ensure its data on air marshals’ recurrent training are consistently accurate and up to date. Complete and readily available training and exemptions data would enable FAMS to more effectively determine the extent that air marshals service-wide have met their training requirements and are mission ready. Air marshals must demonstrate their proficiency in marksmanship by taking the practical pistol course on a quarterly basis and achieving a minimum score of 255 out of 300—the highest qualification standard for any federal law enforcement agency, according to FAMS officials. However, for the remainder of air marshals’ required recurrent training courses, FAMS does not assess air marshals against a similarly identified level of proficiency, such as by requiring examinations to evaluate air marshals’ knowledge in classroom-based courses or by using checklists or other objective tools to evaluate air marshals’ performance during simulation-based courses, such as mission tactics. For instance, FAMS’s recurrent training includes both mandatory refresher courses that all air marshals must complete annually as well as a broad set of courses within several disciplines that field offices must ensure are incorporated into their annual or quarterly training plans. However, FAMS does not require air marshals to take an examination for any course within these disciplines. 
Federal Standards for Internal Control states that agencies should establish expectations of competence for key roles, such as federal air marshals, to help the entity achieve its objectives, and that all personnel need to possess and maintain the relevant knowledge, skills, and abilities that allow them to accomplish their assigned duties. Additionally, GAO’s prior work on training and development states that in some cases, agencies may identify critical skills and competencies that are important to mission success, and require that employees meet requirements to ensure they possess needed knowledge and skills. Further, DHS’s Learning Evaluation Guide identifies testing or skill checklists as tools agencies can use to determine whether students have the knowledge and can perform the skills classes are designed to teach. The guide also states that learning activities that are skill-based, such as FAMS courses on tactical and defensive techniques, may require the development of skill checklists to determine the level of trainee proficiency. Field Operations officials said that it is not necessary to use examinations for recurrent training courses because air marshals are continuously evaluated by field office training instructors and SFAMs who participate in their training. In addition, officials stated that air marshals demonstrate their proficiency in the various cognitive or physical skills they must possess during simulations conducted as part of FAMS’s recurrent training program. As a result, according to officials, FAMS can be assured that any gaps in air marshals’ performance are identified and addressed in a timely manner. Field Operations officials further stated that checklists are unnecessary because training instructors do not evaluate air marshals’ performance solely on whether their actions were appropriate and they correctly applied the relevant principles or tactics taught by course simulations. 
Rather, air marshals must also articulate why their actions were appropriate and how they applied the relevant principles or tactics. For example, to evaluate air marshals’ performance in mission tactics simulations, Field Operations officials stated that training instructors observe air marshals’ actions in response to various simulated threats, ranging from verbal or physical assaults on the crew by passengers to suicide bombers. According to FAMS officials, training instructors evaluate the extent to which the actions taken by air marshals resulted in positive outcomes (i.e., protected the plane, passengers, and crew) and were carried out in accordance with applicable authorities, policies, procedures, and principles. Officials stated that part of this assessment is based on air marshals’ explanation for why their actions appropriately addressed the simulated threat and applied relevant FAMS principles and tactics. In addition, TSA has established a training instructor training program, which, according to FAMS Field Operations officials, ensures that training instructors are highly trained and certified and, therefore, can assess air marshals’ performance in a reasonably objective manner. As previously discussed, for training courses taught at TSATC, OTD requires air marshal candidates and incumbent air marshals to demonstrate that they possess the knowledge or cognitive skills that classroom courses are intended to impart by passing examinations. Additionally, when evaluating the performance of air marshal candidates and incumbent air marshals in courses taught at TSATC, OTD requires training instructors to use evaluation tools, including checklists. For example, TSATC training instructors must use these tools when evaluating air marshal candidates’ performance in defensive measures and mission tactics simulations as part of FAMTP-II. 
TSATC staff reported that they require TSATC training instructors to use such checklists because doing so better ensures air marshals are evaluated in an objective, fair, and consistent manner. Further, a field office SAC reported that given the absence of an objective tool for assessing air marshals’ performance in field-based training, such as defensive measures and tactics, there are air marshals who have not fully demonstrated the requisite level of proficiency, but still “passed” these courses and continued to fly missions. According to the SAC, air marshals are flying missions with colleagues they do not view as mission ready in part due to their performance in training courses—a concern raised by air marshals in 3 of the 7 field offices we visited. Finally, field office trainers in 3 field offices reported that a standardized tool for evaluating air marshals during training would help them identify and address trainee deficiencies. FAMS Field Operations officials also noted that standardized examinations or checklists during training are not necessary because SFAMs have opportunities to continually assess their air marshals’ mission readiness by flying with their squads or attending training. However, we found that SFAMs infrequently attend training with their squads or accompany them on flying missions, although they are not necessarily required to do so. The July 2014 FAMS Advisory Council minutes state that the council unanimously agreed that a large population of SFAMs do not fully participate in their air marshals’ required training. Air marshals from 6 of 22 field offices raised similar concerns based on our review of the minutes from field office focus groups conducted in fiscal year 2014. In addition, SFAMs in all 7 field offices we visited reported that they rarely fly with their squads (i.e., once per quarter or less). 
Further, SFAMs in 6 of the 7 field offices stated that they rely on air marshals’ self-assessments and factors unrelated to mission readiness, such as quality of administrative paperwork (i.e., travel vouchers and timecards), and completion of OLC training to assess air marshals’ performance. Standardized methods for determining whether incumbent air marshals are mission ready, such as required examinations or evaluation tools, in key training courses could help provide better assurance that air marshals service-wide are mission ready. Objective and standardized methods of evaluating incumbent air marshals’ performance would better enable FAMS to assess air marshals’ proficiency in key skills and also more effectively target areas for improvement. In 2015, FAMS developed a new physical fitness program—the Health, Fitness, and Wellness Program—in part to address recent concerns with air marshals’ fitness and injury rates, but it is too early to gauge the program’s impact. Over the period 2008 to 2015, FAMS commissioned two studies to evaluate air marshals’ health and fitness, as well as a third study to evaluate air marshal fatigue and sleeplessness. FitForce, a consulting group that conducted the first evaluation of air marshals’ fitness in 2009, found that nearly 32 percent of the air marshals who participated in the study exercised fewer than three times per week and almost 7 percent did not exercise at all. FitForce also concluded that physical fitness is a necessity for air marshals to be able to perform the essential functions of their job, and stated that FAMS should make a commitment to address the fitness needs of air marshals. Additionally, a 2012 sleep study conducted by Harvard University concluded that more than half of the air marshals who responded to the study’s survey were overweight and nearly one-third were obese, and, therefore, may suffer a variety of health issues that could directly impact mission readiness. 
Furthermore, FAMS conducted its own review of air marshals’ fitness from 2012 through 2013 and concluded that air marshals suffered from high injury rates and declining overall health and wellness, which FAMS officials attributed in part to the increasing age of air marshals. Specifically, the review found that the injuries that occurred while air marshals took their physical fitness assessment from 2010 through 2013 had resulted in approximately 8,060 lost or restricted work days, 12,896 lost mission opportunities, and Office of Workers’ Compensation Program claims totaling over $1 million. We analyzed the scores that air marshals achieved in calendar year 2014 when taking the quarterly Mission Readiness Assessment (MRA)—the health evaluation program that FAMS had in place at that time. We found that, with the exception of the 1.5-mile run, the majority of air marshals who took the MRA met or exceeded each of the MRA component test goals. In quarters 2 through 4 of calendar year 2014, 84 to almost 90 percent of air marshals who participated in the MRA failed to meet the 1.5-mile run goal, as shown in figure 3. Moreover, about 5 percent of the air marshals did not meet the performance goals for any of the component tests in quarters 2 through 4 of calendar year 2014. To address the impact that air marshals’ declining health and fitness may pose to FAMS’s ability to carry out its mission within TSA, as well as air marshals’ injury rates, FAMS has developed the Health, Fitness, and Wellness Program, which went into effect in April 2016. According to FAMS policy, this program will include a revised fitness assessment—the Health and Fitness Assessment (HFA)—and a general health and wellness program. FAMS officials reported that air marshals are to complete the HFA on a biannual basis, but will not be required to meet performance goals for any of the HFA’s four components: cardiorespiratory endurance, muscular strength, muscular endurance, and flexibility. 
Rather, FAMS will use the results of air marshals’ first HFA to establish a fitness baseline and to take appropriate action to improve the performance of those who do not maintain their fitness levels or show improvement. According to FAMS officials, the agency decided not to require air marshals to meet the performance goals for the HFA tests because the results of the HFA cannot reliably determine the extent that an air marshal is physically capable of carrying out FAMS’s mission. Officials explained that FAMS had originally intended to require incumbent air marshals to meet a physical fitness standard similar to the HFA, but did not do so because of concerns raised by TSA’s Office of Human Capital and Office of the Chief Counsel. Specifically, according to FAMS officials, these offices questioned whether the proposed physical fitness standard could reliably predict an air marshal’s physical ability to carry out FAMS’s mission, and whether FAMS could demonstrate the business necessity (i.e., the mission-related basis) of the standard. Because of these concerns, FAMS’s leadership decided instead to implement the Health, Fitness, and Wellness Program with a focus on reducing the incidence of air marshals’ injuries, reducing the number of exemptions air marshals needed to request from taking the HFA, increasing program participation, and improving air marshals’ overall health and wellness. FAMS officials stated that in addition to general improvement of air marshals’ health and fitness, a key benefit of the new program will be that air marshals will request and receive fewer exemptions because the HFA will allow air marshals to demonstrate their fitness through alternative means of testing. FAMS officials reported that, when taking the HFA, air marshals may choose one of three exercises to perform for five of the six subsets within the four components. 
For example, when taking the upper body subset of the muscular strength component, air marshals may choose to perform pull-ups, assisted pull-ups, or lateral pulldowns. According to FAMS officials, because multiple exercises will be available for each HFA component, FAMS will no longer grant air marshals exemptions from taking the HFA unless an injury prevents them from performing any of the HFA exercises. FAMS has established a goal for the Health, Fitness, and Wellness Program—to provide the opportunity, resources, and education necessary to enhance mission readiness and promote workplace wellness—but it is too early to know whether the program is achieving its intended goal. FAMS and OTD officials responsible for developing this program told us that FAMS plans to collect and analyze data on air marshals’ performance on the HFA over a period of about 12 to 18 months—two or three assessment periods. These officials stated that after FAMS had collected and analyzed sufficient data and established a baseline, the agency would be better positioned to collaborate with OTD to establish performance measures for the program. In the interim, FAMS plans to monitor data such as injury rates and the results of periodic physical exams. Given the unique operating environment of air marshals, it is vital that TSA ensure that air marshals’ training needs are identified and addressed, and that air marshals are mission ready. TSA does not systematically obtain feedback on the extent to which FAMTP courses meet incumbent air marshals’ training needs because officials state that they collect sufficient information from air marshal candidates on their training programs. However, by regularly collecting incumbent air marshals’ feedback on the recurrent training they receive in the field offices, OTD would better ensure it considers the input and experience of incumbent air marshals when assessing and refining their training programs. 
Also, by taking additional steps to improve the response rates for the training surveys it administers to air marshal candidates, incumbent air marshals, and their supervisors, OTD could be more reasonably assured that the feedback it receives represents the full spectrum of views held by its air marshal workforce. FAMS has established recurrent training requirements to ensure that air marshals maintain the knowledge, skills and abilities needed to carry out their mission. However, because FAMS processes have not ensured the timely and complete recording of training data—an ongoing challenge for FAMS—FAMS has been hindered in its ability to ensure air marshals’ compliance with training requirements. Specifying in policy who has oversight responsibility at the headquarters level for ensuring that each field office has entered air marshals’ training data in a timely manner and that headquarters personnel have entered air marshals’ exemptions into FAMIS could help FAMS better ensure its data on air marshals’ recurrent training are consistently complete and up to date. Such a policy could also enable FAMS to more effectively determine the extent that air marshals service-wide have met their training requirements and are mission ready. Additionally, by developing and implementing more objective and standardized methods of determining, in the course of their recurrent training, whether incumbent air marshals continue to be mission ready, FAMS could better assess their skills and also more effectively target areas for improvement. To ensure effective evaluation of air marshal training, we recommend that the TSA Administrator direct OTD to take the following two actions: implement a mechanism for regularly collecting and incorporating incumbent air marshals’ feedback on the training they receive from field office programs, and take additional steps to improve the response rates of the training surveys it conducts. 
To provide reasonable assurance that air marshals are complying with recurrent training requirements and have the capability to carry out FAMS’s mission, we recommend the TSA Administrator direct FAMS to take the following three actions: specify in policy who at the headquarters level has oversight responsibility for ensuring that field office SACs or their designees meet their responsibilities for ensuring that training completion records are entered in a timely manner; specify in policy who at the headquarters level is responsible for ensuring that headquarters personnel enter approved air marshals’ training exemptions into FAMIS, and define the timeframe for doing so; and develop and implement standardized methods, such as examinations and checklists, for determining whether incumbent air marshals continue to be mission ready in key skills. We provided a draft of this report to DHS for comment. In its written comments, reproduced in appendix II, DHS concurred with the five recommendations and described actions under way or planned to address them. DHS also provided technical comments that we incorporated, as appropriate. With regard to the first recommendation to implement a mechanism for regularly collecting and incorporating incumbent air marshals' feedback on the training they receive from field office programs, DHS concurred and stated that TSA has developed a survey to measure the effectiveness of air marshal training curriculum, field office training personnel, and training facilities. DHS also stated that this survey will be added to the TSA On-Line Learning Center where it can be distributed to air marshals and supervisors on a regular basis. According to DHS, TSA implemented the survey in the On-Line Learning Center in July 2016 and, beginning in October 2016, will send the survey to air marshals and supervisors after they complete a course at TSATC. 
TSA also plans for curriculum development and review committees to use the feedback from these surveys to improve courses offered at TSATC. These actions, if implemented effectively, should address the intent of our recommendation. With regard to the second recommendation to take additional steps to improve the response rates of the training surveys it conducts, DHS concurred and stated that future surveys of FAMTP graduates and their supervisors will be distributed to personnel through the On-Line Learning Center. DHS stated that the capabilities of the On-Line Learning Center will provide a tracking mechanism for program managers to ensure that personnel complete and submit the survey. According to DHS, survey reports will be compiled and sent to TSATC in a manner that maintains the anonymity of the respondent. TSA anticipates that this process will significantly improve response rates. These actions, if implemented effectively, should address the intent of our recommendation. DHS concurred with our third and fourth recommendations that FAMS specify in policy (1) who at the headquarters level has oversight responsibility for ensuring that field office SACs or their designees meet their responsibilities for ensuring that training completion records are entered in a timely manner, and (2) who at the headquarters level is responsible for ensuring that headquarters personnel enter approved air marshals' training exemptions into FAMIS and define the timeframe for doing so. In response to our recommendations, FAMS updated its policy on recurrent training requirements for air marshals to assign Regional Directors, who are based in headquarters, the responsibility for ensuring that field office SACs or their designees adhere to FAMS’s procedures for recording training completion. 
The updated policy also requires that FAMS’s Field Operations Division, Tactical Support Section, verify that FAMIS entries are made for all training exemptions within five business days of the approval of the exemptions. These actions, if implemented effectively, should address the intent of our recommendations. With regard to the fifth recommendation to develop and implement standardized methods, such as examinations and checklists, for determining whether air marshals continue to be mission ready in key skills, DHS concurred and stated that FAMS and OTD established a joint Integrated Project Team/Development Committee, which met in June 2016 to develop an assessment process that will be used to determine air marshals’ mission readiness. According to DHS, the joint Integrated Project Team/Development Committee consisted of representatives from seven FAMS field offices and FAMS headquarters as well as instructors and instructional design specialists from TSATC. DHS stated that the Integrated Project Team is drafting recommendations and that approved readiness measures will be implemented beginning in fiscal year 2018. This action, if implemented effectively, could address the intent of our recommendation. However, it is not clear to what extent this assessment process will include standardized methods for determining whether incumbent air marshals continue to be mission ready. We will continue to monitor TSA’s efforts. We are sending copies of this report to appropriate congressional committees, the Secretary of Homeland Security, the TSA Administrator, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix III. This report addresses the following questions: How does the Transportation Security Administration (TSA) assess the training needs of air marshal candidates and incumbent air marshals, and what opportunities exist, if any, to improve this assessment? To what extent does the Federal Air Marshal Service (FAMS) ensure that incumbent air marshals are mission ready? This report is a public version of the prior sensitive report that we provided to you. TSA deemed some of the information in the report to be sensitive security information, which must be protected from public disclosure. Therefore, this report omits this information, such as specific numbers of air marshals and specific types of training that air marshals said should be added to FAMS’s training curriculum to address changes in air marshals’ responsibilities. Although the information provided in this report is more limited in scope in that it excludes such information, it addresses the same questions as the sensitive security information report, and the methodology used for both reports is the same. To address the first objective, we reviewed TSA directives, guidance, and other relevant documentation describing TSA’s processes for developing and evaluating Federal Air Marshal Training Program (FAMTP) training curriculum to determine how TSA evaluates existing courses and develops new courses within FAMTP and other relevant training programs. We interviewed senior officials responsible for these efforts in TSA’s Office of Training and Development (OTD). We also analyzed documentation on the results of training curriculum assessments OTD conducted to identify recommendations made to improve training and the extent to which OTD implemented the recommendations. OTD conducted these assessments from May 2007 through April 2014. 
We compared OTD’s training development and evaluation processes to key principles identified in DHS guidance on training evaluation, and GAO’s prior work on training and development, specifically the Guide for Assessing Strategic Training and Development Efforts in the Federal Government. We also reviewed the minutes of the quarterly teleconferences held in fiscal years 2014 through 2015—the most recent time period for which the meeting minutes were available—between Transportation Security Administration Training Center (TSATC) staff, FAMS headquarters staff, and field office training staff to determine the types of issues discussed during these meetings. Additionally, we obtained the available response rates for surveys OTD conducted of FAMTP graduates and their supervisors on the effectiveness of FAMTP courses for calendar years 2009 through 2011—the last three full years that FAMS hired air marshals. In addition, we met with OTD officials to discuss the actions that had been taken to improve these response rates, and compared these actions to Office of Management and Budget standards and guidance for conducting surveys. Further, we visited the TSATC in Atlantic City, New Jersey, and 7 of FAMS’s 22 field offices, which we selected, in part, to reflect a range in size (as determined by the number of air marshals assigned to the office) and geographic dispersion. At TSATC, we interviewed TSATC management and training instructors and toured the facility. At the field offices, we interviewed field office management, Supervisory Federal Air Marshals (SFAM), air marshals, and training instructors to obtain their views on the current training curriculum. The results of these interviews cannot be generalized to all field offices, but provide insight into the extent to which TSA is addressing air marshals’ training needs and ensuring their mission readiness. 
To address the second objective, we assessed FAMS directives that set forth training requirements for incumbent air marshals, and analyzed air marshals’ training data for calendar year 2014, which is the most recent year for which training data were available, to determine the extent to which air marshals met these requirements. We interviewed senior FAMS officials to understand how FAMS uses this information to ensure that air marshals are mission ready. We compared the results of our analyses to Standards for Internal Control in the Federal Government, the DHS Learning Evaluation Guide, and GAO’s prior work on training and development. We assessed the reliability of the 2014 training data by (1) reviewing documentation on the processes for entering air marshals’ training records into the Federal Air Marshal Information System (FAMIS), (2) performing electronic testing for obvious anomalies and comparing FAMIS data to FAMIS-generated reports on training completion, and (3) interviewing knowledgeable officials about training records and exemptions entered into FAMIS. Although the data FAMS originally provided were not complete or entered in a timely manner, over the course of our audit we identified missing data that FAMS corrected in response to our inquiries. Therefore, we found the data were reliable for the purposes of our report. Additionally, as previously discussed, we interviewed TSATC training instructors and FAMS field office personnel to obtain their perspectives on FAMS’s methods for ensuring that air marshals are mission ready. We also reviewed the most recent Management Assessment Program inspection report completed by TSA’s Office of Inspections for all 22 field offices to identify the training-related findings. 
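The timeliness portion of the reliability testing described above can be illustrated with a short sketch: compare each record's completion date with its database entry date and flag late or missing entries. The field names and the 30-day entry window below are assumptions for illustration only; the report does not specify FAMS's required timeframe or FAMIS's actual schema.

```python
# Hypothetical sketch of the kind of timeliness check described above:
# flag training records entered into the database more than a required
# number of days after course completion, or never entered at all.
# Field names and the entry window are assumptions, not FAMS's rules.
from datetime import date

ENTRY_WINDOW_DAYS = 30  # assumed entry window; the actual FAMS
                        # timeframe is not stated in this report

def flag_late_records(records):
    """Return IDs of records entered late or missing an entry date.

    Each record is a dict with 'id', 'completed' (date), and
    'entered' (date, or None if the record was never entered).
    """
    flagged = []
    for rec in records:
        entered = rec['entered']
        if entered is None or (entered - rec['completed']).days > ENTRY_WINDOW_DAYS:
            flagged.append(rec['id'])
    return flagged

records = [
    {'id': 'A1', 'completed': date(2014, 3, 1), 'entered': date(2014, 3, 10)},
    {'id': 'A2', 'completed': date(2014, 3, 1), 'entered': date(2014, 5, 15)},
    {'id': 'A3', 'completed': date(2014, 6, 2), 'entered': None},
]
print(flag_late_records(records))  # ['A2', 'A3']
```

A check of this shape also yields the summary statistic the report cites elsewhere (the share of records not entered within the required period) by dividing the flagged count by the total.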
Additionally, we reviewed the FAMS Advisory Council minutes and the field offices’ focus group minutes for fiscal year 2014—the most recent full year of information available at the time of our request—to identify the training-related issues that FAMS personnel raised to their leadership. Finally, we reviewed the studies that FAMS conducted or commissioned to inform its development of its physical fitness program and assessment—a component of air marshals’ training. We interviewed FAMS and OTD officials responsible for developing and implementing FAMS’s Health, Fitness, and Wellness Program to determine how TSA plans to measure the effectiveness of the program. We conducted this performance audit from October 2014 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Maria Strudwick (Assistant Director) and Michael C. Lenington (Analyst-in-Charge) managed this assignment. Jonathan Bachman, Claudia Becker, Juli Digate, Michele Fejfar, Imoni Hampton, Eric Hauswirth, Susan Hsu, Thomas Lombardi, and Minette Richardson made key contributions to this report.
FAMS, within TSA, is the federal entity responsible for promoting confidence in the nation's aviation system through deploying air marshals to protect U.S. air carriers, airports, passengers, and crews. GAO was asked to assess FAMS's training program for federal air marshals. This report examines (1) how TSA assesses the training needs of air marshal candidates and incumbent air marshals, and any opportunities that exist to improve this assessment, and (2) the extent to which FAMS ensures that incumbent air marshals are mission ready. GAO analyzed FAMS training data for calendar year 2014, the last year of available data, reviewed TSA, OTD, and DHS guidance and policies on FAMS's air marshal training program, interviewed TSA and FAMS headquarters officials, and visited the TSA Training Center and 7 of FAMS's 22 field offices selected based on size and geographic dispersion. The Transportation Security Administration's (TSA) Office of Training and Development (OTD) assesses air marshals' training needs using several information sources, but opportunities exist to obtain more feedback from air marshals on whether the training courses they must take meet their needs. OTD primarily assesses air marshals' training needs by holding curriculum development and review conferences composed of OTD officials, training instructors, and other subject matter experts. In assessing courses, conference participants use, among other things, the results of surveys that some air marshals complete on the effectiveness of their training. However, while OTD administers these surveys for air marshal candidates and newly graduated air marshals, it does not use them to obtain feedback from incumbent air marshals on the effectiveness of their annual recurrent training courses. Systematically gathering feedback from incumbent air marshals would better position OTD to fully assess whether the training program is meeting air marshals' needs. 
Additionally, among the training surveys that OTD does currently administer to air marshals, the response rates have been low. For example, among newly hired air marshals and their supervisors from 2009 through 2011—the last three full years in which the Federal Air Marshal Service (FAMS) hired air marshals—the survey response rates ranged from 16 to 38 percent. Until OTD takes steps to achieve sufficient response rates, OTD cannot be reasonably assured that the feedback it receives represents the full spectrum of views held by air marshals. FAMS relies on its annual recurrent training program to ensure incumbent air marshals' mission readiness, but additional actions could strengthen FAMS's ability to do so. First, FAMS does not have complete and timely data on the extent to which air marshals have completed their recurrent training. For example, nearly one-quarter of all training records for calendar year 2014 had not been entered into FAMS's training database within the required time period. Policies that specify who is responsible at the headquarters level for overseeing these activities could help FAMS ensure its data on air marshals' recurrent training are accurate and up to date. Second, FAMS requires air marshals to demonstrate proficiency in marksmanship by achieving a minimum score of 255 out of 300 on the practical pistol course every quarter. However, for the remaining recurrent training courses FAMS does not assess air marshals' knowledge or performance in these courses against a similarly identified level of proficiency, such as by requiring examinations or by using checklists or other objective tools. More objective and standardized methods of determining incumbent air marshals' mission readiness, as called for by the Department of Homeland Security's (DHS) Learning Evaluation Guide, could help FAMS better and more consistently assess air marshals' skills and target areas for improvement. 
Additionally, in 2015 FAMS developed a health, fitness, and wellness program to improve air marshals' overall health and wellness, but it is too early to gauge the program's effectiveness. This is a public version of a sensitive report that GAO issued in June 2016. Information that TSA deems “Sensitive Security Information” has been removed. GAO recommends that OTD implement a mechanism for regularly collecting incumbent air marshals' feedback on their recurrent training, and take steps to improve the response rates of training surveys it conducts. GAO also recommends that FAMS specify in policy who at the headquarters level has oversight responsibility for ensuring that recurrent training records are entered in a timely manner, and develop and implement standardized methods to determine whether incumbent air marshals continue to be mission ready in key skills. DHS concurred with all of the recommendations.
In 1935, as part of the Social Security Act, Congress established two programs—Aid to Dependent Children and the federal-state system of unemployment insurance—to provide income support to two different groups of unemployed people. Aid to Dependent Children added federal support to state systems of pensions for widows with children. The UI program, on the other hand, aimed to provide workers with partial replacement of wages lost during temporary periods of unemployment due to economic causes. Historically, the majority of people who file for UI benefits have been men. As administered in subsequent years (when it became known as Aid to Families With Dependent Children or AFDC), the welfare program evolved into an open-ended entitlement program, providing cash assistance to people with children, usually single parents, who earned little income. Over time, as more women with children joined the labor force, AFDC recipients with older children were expected to look for work. More recently, several states experimented with stricter work requirements (the so-called “work first” philosophy) and time limits on the receipt of aid. In 1996, federal legislation known as the Personal Responsibility and Work Opportunity Reconciliation Act ended AFDC, alternatively providing block grants to the states as part of a new program called Temporary Assistance for Needy Families (TANF). The legislation put a maximum 5-year limit on the availability of federal cash assistance under TANF and required adults to work or participate in work-related activities after receiving assistance for 24 months as a condition for continuing to receive benefits. Between August 1996 and December 1999, the number of TANF families declined by approximately 2.1 million, and many new workers entered the labor force.
Unlike the Aid to Dependent Children program, the UI program has always operated as a social insurance program. It is administered as a federal-state partnership. To finance the program, the states levy and collect payroll taxes from employers. The funds collected are managed in a trust fund administered by the federal government. In almost all industries, federal standards require coverage on all work for employers who pay wages of $1,500 or more in any calendar quarter. Today UI coverage is nearly universal, extending to almost all wage and salaried workers. Employers pay the premiums for the UI program through federal and state payroll taxes that are assessed on employers but based on employees’ earnings. Employers pay taxes on wages earned by even the lowest-paid worker. Additionally, if a worker held jobs with two different employers during the year, the wages from each job are taxed separately. The federal payroll tax, established by the Federal Unemployment Tax Act (FUTA), is currently set at 6.2 percent of the first $7,000 of an employee’s salary. In states with UI programs that meet specified federal guidelines, employers receive a 5.4 percent credit toward their FUTA tax payment, resulting in a net federal tax of 0.8 percent. These federal taxes finance the state and federal administrative costs of the UI program, as well as the federal portion of the Extended Benefit program, advances to states with insolvent trust funds, and other related federal costs. The actual rate of the state tax paid by individual employers depends upon the employer’s “experience rating”—a measure related to the amount of UI benefits collected by a firm’s employees. Depending upon the employer and the state, the state payroll tax may range from 0 to 10 percent. By federal law, state taxes are assessed against at least the first $7,000 of an employee’s salary. 
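The federal tax computation described above reduces to a simple formula: the statutory 6.2 percent rate, minus the 5.4 percent credit for employers in conforming state programs, applied to no more than the first $7,000 of each employee's annual wages per employer. A minimal sketch in Python (the function name and per-employee framing are ours; the rates and wage base are the figures cited in this report):

```python
FUTA_RATE = 0.062        # statutory federal rate cited in the report
STATE_CREDIT = 0.054     # credit for employers in conforming state programs
FUTA_WAGE_BASE = 7000.0  # only the first $7,000 of annual wages is taxed

def net_futa_tax(annual_wages):
    """Net federal UI tax owed for one employee at one employer.

    Because the wage base applies per employer, a worker who held jobs
    with two employers in a year is taxed separately on each job's wages.
    """
    taxable = min(annual_wages, FUTA_WAGE_BASE)
    return taxable * (FUTA_RATE - STATE_CREDIT)
```

At the net 0.8 percent rate, the federal tax tops out at $56 per employee per year ($7,000 x 0.008), whether the employee earns $7,000 or $70,000 — which is why the tax falls proportionally hardest on wages of the lowest-paid workers.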
However, among the states, the wage base against which state taxes are assessed varies widely, from $7,000 (in 9 states) to $27,500 in Hawaii. The wage base is less than $11,000 in 32 states, thereby requiring the same tax whether, for example, employees earn $11,000 per year or $110,000 per year. Revenues from state UI taxes finance the payment of regular UI benefits and the state portion of the Extended Benefit program. Benefit coverage under the UI program is related to an individual’s work history. Generally, state law provides that unemployed workers must fulfill three general conditions: (1) they must have been “substantially attached” to the labor market; (2) they must have left their prior job involuntarily (such as by employer layoff) or have quit their job for “good cause” only; and (3) they must be currently “able and available” for work, and, in most states, actively seeking work. State law provides specific requirements for claimants to meet these general conditions. Overall, the percentage of the total unemployed population applying for UI benefits has gradually declined in the past 50 years. Several factors generally are cited as contributing to the decline in UI participation, although the significance of each is disputed. Three major factors have persisted over most of this period—reduction in manufacturing jobs, decline in union membership, and increasingly strict state UI eligibility requirements. Over the past 50 years, the percentage of unemployed filing for UI benefits has generally, but gradually, declined. The measure most commonly used by the Department of Labor to assess the effect of the UI program—the standard recipiency rate—shows that while about 50 percent of the unemployed filed for UI in the 1950s, only about 35 percent of the unemployed filed for UI in the 1990s. 
Although this rate has fluctuated considerably—for example, in 1980 the rate was 44 percent, then dipped to 29 percent in 1984, but by 1991 had increased to 39 percent—it indicates a general decline over the past 5 decades. In 1999, the recipiency rate was 37 percent. Figure 1 presents the average recipiency rate, by decade, since 1950. Of the past 5 decades, the last decade—1990 through 1999—had the most stable rate of UI claims, showing the least annual variation. Over this decade, an average of 35 percent of the unemployed filed for UI benefits—varying from a high of 39 percent in 1991 to a low of 31 percent in 1993, then increasing to 37 percent in 1999. Overall, the average recipiency rate in the 1990s was 1 percentage point higher than that of the 1980s. Although there is no agreement about the causes of the general decline in the rate of UI filing, certain factors are commonly considered significant, including (1) the decrease in the number of workers employed in manufacturing jobs; (2) the decline of union membership in the workforce; (3) increasingly tighter state requirements for UI eligibility; (4) federal taxation of UI benefits beginning in 1979; (5) population shifts, starting in the 1970s, of workers from northeastern states to southern states, where unemployed workers are less likely to apply for UI benefits; and (6) changes in the survey methodology of the CPS during the 1980s that increased the number of unemployed who were counted (changing the denominator used in calculating the recipiency rate). Of these factors, the first three affect the entire period of decline. Over the past 50 years, the number of workers in manufacturing jobs has declined in the United States, as has the number of workers who are union members.
Studies suggest that the steady decline in workers in manufacturing jobs and in union membership has adversely affected the overall participation in the UI program. According to these studies, both the manufacturing industry and unions traditionally have encouraged labor practices that are treated favorably in UI programs. For example, union members are more likely to be laid off than fired—a practice that makes workers eligible for UI benefits. Manufacturing firms tend to have layoffs of large numbers of employees who are handled as a group by UI program officials. Further, both manufacturing workers and union members are more apt to be better informed about UI benefits. In the past 5 decades, many states have tightened their UI regulations, increasing limitations on eligibility for UI benefits and thereby decreasing the participation in the UI program. In general, in order to demonstrate that a person is an active member of the labor force, states have a series of tests dealing with a claimant’s recent work history, his or her reasons for termination from the last job, and evidence that the claimant is still available for work. For example, most states require that in order to establish that a person worked a sufficient amount of time to qualify for UI benefits, he or she must have earned a minimum amount of wages over a year’s time (a so-called “base period”). Over the years, many states have increased these earnings amounts, thereby limiting who can be eligible for UI benefits. Other limitations affect program participation as well. For instance, when the UI program was first established, people who quit their jobs for compelling personal reasons, such as pressing family obligations like lack of child care, were not disqualified from receiving UI benefits.
Increasingly, however, states have enacted laws that specifically limit the generally acceptable reasons (“good cause”) for quitting a job to those related to work or to the employer. The number of states with such statutory restrictions grew from 16 in 1948 to 28 in 1979, and by 1995, 38 states restricted “good cause” for quitting to work-related circumstances. Under these restrictions, states generally allow a worker to collect UI benefits if a worker quit because of actions taken by an employer—if, for example, an employer requires the employee to work a night shift even though the employee had been hired specifically to work only during daytime hours. On the other hand, most states disqualify a claimant for UI if he or she quit a job because of a temporary lack of child care. Unemployed low-wage workers were less likely to collect UI benefits than other unemployed workers in the early 1990s, and the most recent evidence suggests that this trend continued throughout the decade. Unemployed workers were more apt to receive UI benefits if they worked longer than 35 weeks, worked full-time rather than part-time, or lived in a state that tended to have less strict eligibility criteria. However, even when low-wage workers and other workers shared characteristics that favored UI receipt—for example, when they worked more than 35 weeks—low-wage workers were less likely to collect UI. In March 1995, almost two-thirds of unemployed low-wage workers worked immediately before becoming unemployed in jobs in the retail trade or services industries, industries whose workers were the least likely to participate in the UI program. In contrast, only one-third of unemployed higher-wage workers held their last job in these industries. Although SIPP data were limited to the 4-year period between 1992 and 1995, other evidence suggests that these patterns remained throughout the entire decade.
From 1992 to 1995, low-wage workers were twice as likely to be out of work as higher-wage workers but only half as likely to receive UI benefits. Table 1 compares the unemployment rates of low-wage workers with those of other workers in the early 1990s. During this period, low-wage workers made up about 50 percent of the unemployed former workers, even though they were only about 30 percent of the total labor force. Table 2 shows the rates at which low-wage workers received UI benefits while unemployed, as compared with the rates of higher-wage workers. Among unemployed workers who had worked for similar periods of time, low-wage workers were still less likely to receive UI benefits than higher-wage workers. As shown in table 3, nearly 35 percent of unemployed low-wage workers who had worked at least 35 weeks during the year collected UI. In contrast, about 62 percent of unemployed higher-wage workers who had worked at least the same number of weeks collected UI. Even when comparing full-time workers with substantial work histories, differences remained. Table 4 looks at unemployed people who had worked at least 35 weeks, grouped into those who had worked full-time and those who had worked part-time. As can be seen, among the people who had worked full-time for at least 35 weeks, a considerable difference continues between the percentages of low-wage and higher-wage unemployed workers who collected UI benefits. Although some states had greater participation among the unemployed in their UI programs—most of these tending to use less strict eligibility criteria that allow a greater percentage of unemployed to collect benefits—low-wage unemployed workers continued to be less likely to collect UI benefits than other unemployed workers, regardless of the states in which they lived. To group states, we used the Department of Labor standard recipiency rate as a rough gauge of the relative rates at which the unemployed used the state UI programs.
As can be seen in table 5, even though states with high recipiency rates were more likely to pay UI benefits, low-wage workers in those states were still only about half as likely as higher-wage unemployed workers to collect UI benefits. Overall, low-wage unemployed workers were far more apt to have worked in retail trade and services and less apt to have worked in manufacturing, mining, or construction than higher-wage unemployed workers. Figure 2 shows the industry sector (based on the worker’s last job) for workers who were unemployed in March 1995. As shown, 64 percent of the low-wage unemployed workers had been previously engaged in jobs from retail trade and services, as opposed to 32 percent of higher-wage workers (primarily in the services industry). On the other hand, while 49 percent of the higher-wage unemployed workers had been employed in manufacturing, construction, or mining, only 23 percent of the low-wage workers had been employed in these industries. Wide variation exists among industry sectors in the rates at which unemployed workers collected UI benefits. In general, workers formerly associated with the retail trade or services industries were far less likely to receive UI benefits than were workers most recently employed in manufacturing, construction, or mining. Table 6 compares the rates among industries for workers unemployed in March 1995. As shown, 16 percent of former retail employees and 13 percent of former services employees collected UI benefits, while 39 percent of unemployed manufacturing workers and 58 percent of unemployed construction and mining workers collected benefits. Even with these variations among sectors, differences remained in the rates of UI receipt for unemployed low-wage workers and other workers in individual industry sectors. Among former services workers, though, both low-wage and higher-wage workers were far less likely to collect UI than were higher-wage workers in the other industry sectors.
Although the available SIPP data for our purposes extended only to 1995, we concluded on the basis of our analysis of other data that the rate of UI receipt for low-wage unemployed workers most likely remained lower than that for other unemployed workers through the last half of the decade. This analysis combined two sets of data that were available for the entire decade: (1) CPS data showing the percentage of low-wage workers in the employed labor force and (2) Department of Labor data showing the percentage of all those collecting UI benefits who were low-wage workers. These two percentages were stable over the entire time period. From these factors, together with the likelihood of a higher rate of unemployment for low-wage workers, we inferred that the UI rate of receipt of low-wage workers remained lower than that of other workers. Between 1992 and 1995, SIPP data showed that the unemployment rate of low-wage workers was twice that of higher-wage workers. Our analysis of these other data showed that as long as the unemployment rate for low-wage workers continued to be substantially higher than that for other workers, the rate of UI receipt for low-wage unemployed workers would still have been lower than that for other unemployed workers in the last half of the 1990s. (See app. I for our analysis.) From other economic factors, it appears likely that the unemployment rate of low-wage workers remained higher than the unemployment rate (calculated for all workers) throughout the decade (even though the unemployment rate declined from 5.6 percent in 1995 to 4.2 percent in 1999) and that, therefore, the rate of UI receipt for low-wage workers remained lower than that for other workers. For example, low-wage workers were clustered in the same industries in the later 1990s that they were in during the early 1990s—about the same percentage (nearly 70 percent) of low-wage workers were employed in services and retail industries in 1997 as in 1992.
In addition, while many welfare recipients joined the labor force and became employed during the latter half of the 1990s, many in low-wage jobs, it appears that they experienced higher than average unemployment rates. According to Department of Health and Human Services data, about 30 percent of those with jobs during the late summer 1998 were no longer employed by January 1999. Unemployment rates for former welfare recipients entering the labor force in 1996 and 1997 have been estimated as 35 percent and 33 percent, respectively. Given these data, we believe that low-wage workers continued to experience higher than average unemployment rates in the last 5 years of the decade. Many factors may explain the relatively lower rate of UI receipt among low-wage workers. These factors could include the possibility that low-wage workers are more likely to quit work to look for another (perhaps better-paying) job or to be fired for cause than other workers. Both of these circumstances would generally make claimants ineligible for UI benefits. However, certain major factors commonly cited by experts as contributing to the general decline in use of the UI program—fewer workers in manufacturing jobs or with union membership, and tighter state eligibility requirements—have particular significance for low-wage workers. As a group, low-wage workers are much less likely than other workers either to be employed in manufacturing or to be union members. They are also less likely to be employed in other industries such as construction and mining that, like the manufacturing industry, tend to use layoffs to terminate employees. Rather, they are likely to work in retail trade or services, industries that historically have handled job separations differently (generally, there are fewer employee layoffs) and had less union membership than industries such as manufacturing.
In 1997, about 70 percent of low-wage workers were employed in retail trade and services, while 18 percent worked in manufacturing, mining, or construction. Certain state eligibility criteria are particularly challenging to low-wage workers, especially to those who have not held jobs for steady periods of time, such as many former welfare recipients. Unemployed people with economic and financial characteristics commonly associated with former welfare recipients—single parents with dependent children who most often have an intermittent work history of low-wage (and frequently part-time) work—can be particularly vulnerable to these state requirements. These state criteria include requirements for minimum amounts of earnings as well as disqualification for benefits if workers leave jobs because of personal financial circumstances. In addition, the time allotted in many states for processing wage records may require that a claimant wait between 3 and 6 months before receiving benefits to which he or she is entitled. Initially, to apply for UI benefits an unemployed person must have had “substantial attachment to the labor force” in prior work. Most states use previous earnings—recorded on a quarterly basis in state wage records—to measure whether a claimant has had sufficient employment history. For the most part, states require that a claimant have earned a certain minimum amount over a specified four calendar quarters (the “base period”). The minimum amount for the base period ranges from $130 in Hawaii to $3,400 in Florida. As a practice, the use of earnings to measure employment history treats low-wage workers differently from higher-paid workers, even if their participation in the workforce is similar. For example, a worker in Florida earning the minimum wage of $5.15 per hour must work 660 hours to qualify for UI, while a worker earning $10.00 per hour would need to work a little over one-half as long to qualify for benefits.
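Because eligibility is keyed to dollars earned rather than hours worked, the hours needed to qualify fall as the hourly wage rises. A short sketch of the Florida arithmetic above (the helper names are ours; the $3,400 figure is the Florida base-period minimum cited in the text):

```python
def hours_to_qualify(min_base_period_earnings, hourly_wage):
    """Hours of work needed to reach a state's minimum base-period earnings."""
    return min_base_period_earnings / hourly_wage

def weeks_to_qualify(min_base_period_earnings, hourly_wage, hours_per_week=40.0):
    """Weeks of work needed at a given weekly schedule (default: full time)."""
    return hours_to_qualify(min_base_period_earnings, hourly_wage) / hours_per_week
```

For Florida's $3,400 minimum, a $5.15-per-hour worker needs about 660 hours (roughly 16.5 weeks full time), while a $10.00-per-hour worker needs only 340 hours, a little over half as long.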
Although the current state earnings requirements appear fairly minimal (a full-time worker earning minimum wage for 40 hours per week would need to work 16.5 weeks to qualify for UI in Florida), they can have a negative impact on workers with a less stable job history. In table 7, we compare the effect of state earnings and employment requirements on two unemployed part-time workers who both lost their jobs in 2000—one earned minimum wage and the other earned $10.00 per hour. The comparison demonstrates that a part-time, low-wage worker is less likely to qualify for UI benefits. In fact, in eight states, working 20 hours a week for 6 months at the minimum wage would be insufficient to qualify an unemployed worker for benefits. Next, to be eligible for UI benefits in most states, a person must have become unemployed involuntarily—that is, the person was either laid off or quit a job for “good cause.” Generally, if a person leaves a job for reasons other than good cause, he or she is disqualified from UI benefits. However, much variation exists among the states about the factual circumstances that may constitute “good cause.” Even though many states have laws that restrict good cause to work-related circumstances, administrative decisions and specific statutory exceptions lead to different interpretations of “work-related circumstances.” Certain temporary family crises—such as the sudden loss of child care or the serious illness of a dependent child—may cause workers in marginal financial circumstances to quit their jobs. We surveyed the UI directors of the 50 states about three hypothetical situations involving retail workers who quit their jobs for compelling personal reasons. In all cases, it was assumed that the workers were otherwise eligible for UI and that they were able to work when they applied for benefits.
Table 8 shows that most states would deny benefits to those currently available for work who had to quit their jobs because child care was temporarily unavailable. However, if a worker originally hired to work a day shift was suddenly required to work a night shift and had to quit the job because child care was not available, only eight states would deny UI benefits. If an employee had quit to take care of a seriously ill child, about half of the states would deny benefits. In general, under state laws the unemployed person must also be available and able to work and, in most states, actively seeking work. Again, states have different definitions as to who is currently available and seeking work, often requiring that a claimant search for full-time work. In addition, some states require that the claimant be available to take a job for any shift that might be offered. Because many former welfare recipients work part-time and may be limited in the hours they work because of lack of child care and limited access to transportation, we surveyed the states on their requirements related to these issues. Table 9 shows that three-fifths of the states would not allow benefits to be paid to an unemployed part-time worker continuing to look only for part-time work, even though the worker is otherwise eligible for UI. However, if a person looking for work in retail trade could not work during a night shift because child care or transportation was not available, most states would continue to pay UI benefits. Finally, even if the unemployed worker is eligible to receive benefits, the time it takes to process wage records may cause serious delays before the worker can collect UI benefits. In most states, a claimant for UI must have worked in two calendar quarters and have state wage records that show earnings in each of the quarters.
However, the time it takes to add quarterly employee wage information to the state wage records generally means that the complete wage records will not be available until the next quarter after the information is received. Two factors cause delays in processing state wage records, which are compiled from quarterly employee wage reports. First, the wage report is not due to most states until a month after the end of the quarter in which the wages are earned. For example, the wage report for the last calendar quarter of the year (ending on December 31) is due to the state January 31. Second, after the state receives the wage report, it needs time to process it. While many states require that employers with more than 250 employees file wage reports on magnetic media, smaller companies often file on paper documents, which may take 3 to 6 weeks longer to process. Therefore, although some wage data may be available after the first month of the next quarter (February 1 in the example), all wage data may not be available until the beginning of the next quarter (April 1). To allow for these processing delays, most states specify that wages that count for UI must have been earned within the first four quarters of the last five completed quarters. These four quarters are called the “standard base period.” In many states unemployed workers whose only work was in the most recent 6 months may have to wait between 3 and 6 more months to have their earnings counted toward UI eligibility. For example, if a worker starts a job in a retail store in October but gets laid off February 1, 39 states would not apply the worker’s total earnings toward UI eligibility until after July 1. Currently, only 11 states will count the worker’s earnings immediately toward UI eligibility. 
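The "first four of the last five completed quarters" rule can be made concrete with a short sketch (the function is our illustration, not state statute; quarters are represented as (year, quarter) pairs, and the years below are chosen only for concreteness):

```python
def standard_base_period(claim_year, claim_quarter):
    """Return the standard base period for a claim filed in the given quarter:
    the first four of the last five completed calendar quarters."""
    completed = []  # last five completed quarters, most recent first
    year, quarter = claim_year, claim_quarter
    for _ in range(5):
        quarter -= 1
        if quarter == 0:
            year, quarter = year - 1, 4
        completed.append((year, quarter))
    completed.reverse()   # oldest first
    return completed[:4]  # drop the most recently completed quarter
```

Taking the example worker as hired in October 2000 and laid off February 1, 2001, a first-quarter 2001 claim yields a base period of 1999 Q4 through 2000 Q3: neither the fourth-quarter 2000 wages nor the January 2001 wages count. Only a claim filed in the third quarter (after July 1) produces a base period, 2000 Q2 through 2001 Q1, that picks up all of those earnings.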
If a worker does not have sufficient earnings in the standard base period, most of these states will allow what is known as an “alternative base period” and count the earnings in the last four completed quarters (so that the worker’s January earnings would be counted in the second calendar quarter starting in April). In these states, if the wage records have not yet been processed, state officials most commonly make a “wage request” of an employer to verify a claimant’s most recent earnings. Since welfare reform in 1996, the welfare rolls have dropped and large numbers of people have joined the labor force, many in low-wage jobs. Yet, most states have made little change to their UI benefit coverage provisions that would assist low-wage workers. Specifically, states have made few alterations to eligibility criteria, such as minimum earnings requirements, and other practices that in their current form may make it more difficult for low-wage workers to qualify for UI. Recently, however, a group representing the Department of Labor, state UI directors, and others has offered proposals to expand benefit coverage for UI claimants that address some of the issues related to low-wage workers. For the low-wage worker with an unstable job history, little has changed in state laws in recent years to increase the likelihood of UI coverage. In fact, in some states UI benefits for such workers became less accessible. For example, a former welfare recipient started her first job October 1 as a retail clerk paid at $5.15 per hour. After working 26 weeks for 20 hours each week, she was laid off because of slow sales. During that period, she earned $2,678 and worked 520 hours. In 1996, she would have been ineligible for benefits in five states—Indiana, Maine, New Hampshire, North Dakota, and Virginia—because these states require a claimant to have earned more than this worker’s total wages, and also in Washington because she had not worked a sufficient number of hours.
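The arithmetic behind this example, and the kind of screening that a state's earnings and hours thresholds perform, can be sketched as follows (the functions and the $2,500 threshold are illustrative; the $3,400 figure is the Florida base-period minimum cited earlier):

```python
def base_period_totals(weeks_worked, hours_per_week, hourly_wage):
    """Total hours and earnings over a stretch of steady part-time work."""
    hours = weeks_worked * hours_per_week
    return hours, hours * hourly_wage

def meets_minimums(earnings, hours, min_earnings=0.0, min_hours=0):
    """Crude screen against a state's earnings and hours thresholds."""
    return earnings >= min_earnings and hours >= min_hours

# The example worker: 26 weeks at 20 hours per week, $5.15 per hour.
hours, earnings = base_period_totals(26, 20, 5.15)  # 520 hours, $2,678
```

At $2,678 in total earnings, she fails any state minimum above that figure (such as Florida's $3,400) while clearing a hypothetical $2,500 threshold, and a separate hours floor can disqualify her even where her earnings suffice.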
In 2000, she would be ineligible in eight states—those listed above plus Florida and North Carolina—because these states raised their minimum earnings requirements. During the period 1996 through 2000, 19 states increased the total earnings required for UI eligibility, 1 state lowered its requirement, and the remaining 29 kept the same minimum earnings level. If the worker in the previous example resided in a state where she was eligible for UI, the benefits available to her would most likely be about the same in 2000 as in 1996. In 12 states, she would receive additional benefits if she had dependents. The states vary as to both weekly benefit amounts and how long a claimant may receive the weekly amount. Table 10 illustrates the benefit coverage of UI if this worker filed in 2000 in the four most populous states. Since 1996, there also has been very little movement among the states to adjust for the time lag in reporting wages if it affects when an unemployed low-wage worker can be eligible for benefits. Thus, even if the unemployed worker in our example was eligible to receive benefits, the time it would take to have her wages count would most likely cause delay before she could apply for, and collect, UI benefits. In 1996, nine states had provisions to allow recent wages to count, even if the wages were earned outside the normal base period, if a claimant needed the earnings to qualify for UI. In 2000, two more states (North Carolina and Wisconsin) had similar provisions. Among the remaining states, however, our survey of UI state directors indicated that it is unlikely much change will occur in the near future. Of the 39 states without provisions to count recent earnings, only one state director said that his state (Alaska) was likely to adopt such a provision, and state directors from 29 states said that their states were either very unlikely or unlikely to adopt this change.
In the past 5 years, the Advisory Council on Unemployment Compensation and a stakeholder workgroup that includes state UI directors, union representatives, business representatives, and Department of Labor officials have made proposals that would expand the availability of the UI program for low-wage workers, among other reforms. According to Labor, the changing U.S. economy and its labor force have led to the current movement for reform. The UI program was designed over 60 years ago and worked well for a certain type of worker within the U.S. economy at that time. Since then, the U.S. economy and the composition of its labor force have changed, while the UI program has been slow to adapt to these changes. Labor noted that this has resulted in a larger portion of the labor force more closely resembling a category of worker that UI was not designed to assist. More recently, the reform of the welfare system has further increased the number of workers in this category. In 1995, the advisory council made a series of proposals regarding low-wage workers as part of a larger set of recommendations about the needs of today’s labor market. Subsequently, the Department of Labor organized a dialogue with state, employer, and union representatives to continue the debate on possible UI reform. As a result of this dialogue, a stakeholder workgroup of federal, state, and private sector officials recently proposed reforms for the UI program. Reform proposals applicable to low-wage workers from these two groups include the following: Shorten the lag time in qualifying earnings for UI eligibility. The advisory council recommended that all states use a “moveable” base period to consider earnings necessary to qualify a claimant for benefits. Under this proposal, the minimum earnings requirement could be met by earnings from the last four completed quarters, rather than from the first four of the last five completed quarters.
Although initially the stakeholder workgroup considered a proposal to provide incentive funding to the states for “alternative” base periods similar to the advisory council’s moveable base period, ultimately it suggested that states try to use technology advances to process the UI reports faster and, where at all possible, to use the latest wage earnings available for all claimants. Set minimum standards for UI earnings requirements. The advisory council recommended that all states set their laws so that required base period earnings do not exceed 800 times the state’s minimum wage. In its dialogue, the Department of Labor asked for comments on a proposal that would set the minimum earnings requirements to 400 times the minimum wage (this figure was selected so that someone who had worked for 20 weeks for 20 hours at minimum wage would be eligible for benefits in every state). However, the final proposals from the stakeholder workgroup did not include any recommendation on this issue. Do not disqualify claimants seeking part-time work. Both the advisory council and the stakeholder workgroup proposed that states should not reject claimants simply because they are looking for part-time, rather than full-time, work. Do not disqualify claimants who quit a job to care for a dependent. Although neither group ultimately recommended this proposal, the Department of Labor originally offered it for comment. The proposal would have provided financial incentives to states to pay UI benefits to claimants who had to quit their jobs to care temporarily for a child or other family member. State objections to these proposals focus on the expansion of benefits, and the states argue that (1) the costs of the proposals are burdensome and (2) the proposals violate the traditional roles of the federal and state governments in the operation of the UI program. Regarding the first issue, the states point to, for example, the extra costs of obtaining the most recent earnings records for UI claimants.
In response to proposed state legislation, Texas estimated the extra administrative costs at $153,000 annually for making special requests to employers for recent wage information. The proposal from the stakeholder workgroup would eliminate the requirement that states make these special requests, instead calling for federal funding of improved technology to accelerate state processing of UI wage records. However, the largest cost cited by states relates to the increased number of claimants receiving UI benefits. If alternative base provisions were implemented, Texas estimated that the annual costs to the unemployment insurance trust fund would be $24 million per year in benefits paid to potential claimants; California officials estimated their costs at $33 million per year. From the standpoint of the states, the second objection—changes to the traditional roles of the state and federal governments—raises more difficult problems. While the federal government has imposed some specific requirements, these requirements are viewed as minor conditions only; for example, UI claimants cannot be denied benefits if they refuse work as a union strikebreaker. In contrast, the proposals discussed here—for example, the earnings requirements or allowable reasons for quitting a job—pertain to issues that state officials consider integral to the operation of the state’s UI program that, until now, have been generally under the control of the state. Despite interest in ensuring that the UI program is meeting the needs of low-wage workers, little action has been taken at the state or federal levels to expand UI availability to this group. In part, this reflects the difficulty of addressing the cost implications of expanded eligibility and balancing states’ autonomy in operating their UI programs. Yet, as a safety net, the UI program continues to offer only minimal protection for low-wage workers. 
Even though employers in many states pay the same UI payroll taxes for employees earning minimum wage as they pay for employees earning far more than that amount, low-wage workers are much less likely than higher-wage workers to be included in the UI safety net. In the event of an economic downturn, many low-wage workers may find that, unlike higher-wage workers, they will be unable to qualify for UI benefits. While the situation deserves attention on its own merits, the sweeping changes in national welfare policy heighten its importance. A UI program that supports all workers who lose their jobs through no fault of their own during times of economic hardship can play an important role in helping many former welfare recipients maintain their places in the labor force and out of the welfare system. In its review of a draft of this report, the Department of Labor generally agreed with our findings and conclusion. It made three major comments: (1) that the changing U.S. economy and its labor force have led to the current movement for UI reform; (2) that nonmonetary eligibility criteria such as voluntarily quitting a job may explain some of the differences between the UI rate of receipt for low-wage and other workers; and (3) that the increases in the national minimum wage between 1996 and 2000 may have made some unemployed low-wage workers eligible for UI. We concur with these comments and have modified our report as appropriate. Labor also made technical comments, which we have included in our report where appropriate. (Labor’s comments appear in app. III.) As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Honorable Alexis M. Herman, Secretary of Labor; the Honorable Donna E. Shalala, Secretary of Health and Human Services; appropriate congressional committees; and other interested parties.
We will also make copies available upon request. Please call me at (202) 512-7215 or Gale Harris at (202) 512-7235 if you or your staffs have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix IV. We used a variety of data sources to examine the role of unemployment insurance (UI) as part of the safety net for low-wage workers. To show the general trends of UI participation among all unemployed, we summarized data compiled by the Department of Labor. To measure the use of UI by low-wage workers as opposed to other workers, we used data from the Survey of Income and Program Participation (SIPP), a survey conducted by the Bureau of the Census. To determine the specific eligibility criteria used currently in state UI programs, we surveyed the directors of these programs. Finally, to assess whether states have changed their policies and practices to better ensure that low-wage workers are included in the UI safety net, we reviewed data from the Department of Labor as well as data from a national survey of UI directors, and we visited the four most populous states to talk with state officials about their UI system. We performed our work between January 2000 and September 2000 in accordance with generally accepted government auditing standards. To compare low-wage workers’ experience with UI with that of other workers, we estimated the unemployment rates and the UI rates of receipt for the two groups of workers. To do this, we needed information on (1) the employment status of individuals; (2) specific characteristics of the employed population; (3) specific characteristics of the unemployed population; and (4) detailed information on unemployed people who collected UI. We talked with experts at the Department of Labor and the Bureau of the Census and reviewed academic research and other related literature to determine what data sources could be used for our study. 
We considered four data sources with information on the use of the UI program nationwide—SIPP, the Current Population Survey (CPS), the Benefit Accuracy Measurement program (BAM), and general Department of Labor UI administrative data. SIPP is a longitudinal survey that collects information on labor force participation and income sources over a 3-year period. CPS, a national survey conducted by the Bureau of the Census for the Bureau of Labor Statistics, is a longitudinal survey that collects data on employment status and other demographic characteristics over a 1-year period. BAM is a Department of Labor program that collects information in order to evaluate the accuracy of state UI payments, and it includes specific data on demographic characteristics of people who collect UI benefits. Labor also maintains other administrative databases that collect information related to unemployment and the UI program. Figure 6 compares various data elements that are available among these four sources. From our review, we determined that SIPP was the only data source that would allow us to estimate what portions of the unemployed population were low-wage and higher-wage and the extent to which each group received UI. SIPP is a survey administered in person to participants every 4 months over a 3-year period. During the 3-year period, the same set of questions is asked of the same individuals, allowing for analysis of an individual’s labor force experience over the entire time. The respondents who are surveyed for this period are referred to in total as a “panel.” For example, participants in the panel included in the 1993 SIPP first reported data beginning in October 1992, and they continued to report data at 4-month intervals through December 1995. Our data analysis required that we use SIPP data that covered an entire 3-year period.
At the time we conducted our research, the only completed SIPP panels with data from the 1990s were those started in fiscal years 1990, 1991, 1992, and 1993. The latest data available from these panels were for December 1995 from the 1993 SIPP panel. As a result, our research using SIPP data was limited to the period January 1990 through March 1995. To estimate the unemployment rates and the UI rates of receipt for low-wage and higher-wage workers using the 1990 through 1993 SIPP panels, we took the following steps: Step 1: We created a sample from each SIPP panel of 18- to 64-year-olds who were not self-employed. We limited our sample in each SIPP panel to those between the ages of 18 and 64. We also excluded those who were self-employed and for whom there were incomplete data during the 3-year period. Our sample included data on the wages and salaries of respondents as well as the number of hours and weeks worked. Step 2: We used March of the last year of each SIPP panel to determine employment status. Because of the design of our analysis, we chose to focus on the employment status in one month, March, from the last year of each panel. Because seasonal employment can greatly affect employment status at certain times of the year (such as summer and winter), we examined data from March, a month less likely to be affected by seasonal employment. SIPP records data on a monthly basis.
Since it is possible for an individual to be both employed and unemployed in the same month, to address this issue, we consulted with officials at the Bureau of the Census. We considered a person as employed if he or she had a job for the entire month or, if the person missed work during this period, it was not because he or she was laid off and he or she spent no time looking for a job. We considered a person as unemployed if he or she was out of work for the entire month or, if the person did work for part of the month, he or she spent the rest of the time laid off or looking for another job. We did not include those who were out of the labor force. Step 3: We “looked back” 27 months for the most recent job. To determine the wage level of an unemployed person, we identified the most recent job held by that person. To do this, we reviewed the work history to determine whether the person had had a job during the period covered by the SIPP panel. Starting in March of the last panel year (the month we used to determine unemployment), we looked back on a month-by-month basis to determine the most recent month in which that person was identified as employed. By using March of the last panel year we could, where necessary, look back for a period of 27 months to identify prior employment. In some cases, a respondent did not have a job during the entire 27-month period. If a respondent had not held a job at all during this time, we excluded the person from our sample. Step 4: We divided our sample into low-wage and higher-wage workers. We divided both the employed and unemployed populations into either low-wage or higher-wage workers on the basis of data from the current job if the person was employed or from the person’s most recent job if unemployed. We defined low-wage as earning $8 per hour or less, based on 1999 dollars. For our analysis, we adjusted the rate for inflation.
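Steps 3 and 4 can be sketched together in Python. The record layout and field names below are hypothetical stand-ins for the actual SIPP variables, and the hourly-wage construction follows the monthly salary / weeks / hours formula described in this appendix:

```python
LOW_WAGE_CUTOFF = 8.00  # dollars per hour, in 1999 dollars

def most_recent_job_month(months, march_index, lookback=27):
    """Step 3: walk back up to 27 months from March of the last panel
    year; return the index of the most recent month with a job, or
    None if the respondent never worked in that window."""
    for offset in range(lookback + 1):
        i = march_index - offset
        if i < 0:
            break
        if months[i].get("employed"):
            return i
    return None

def hourly_wage(month):
    """Step 4: use the reported hourly wage if present; otherwise
    construct one as monthly salary / weeks worked / usual weekly hours."""
    if month.get("hourly_wage") is not None:
        return month["hourly_wage"]
    salary = month.get("monthly_salary")
    weeks = month.get("weeks_worked")
    hours = month.get("usual_hours")
    if None in (salary, weeks, hours) or not weeks or not hours:
        return None  # missing wage or salary data
    return salary / weeks / hours

def wage_group(months, march_index):
    """Classify a respondent; those with no job in the window are excluded."""
    i = most_recent_job_month(months, march_index)
    if i is None:
        return "excluded"
    wage = hourly_wage(months[i])
    if wage is None:
        return "missing"
    return "low-wage" if wage <= LOW_WAGE_CUTOFF else "higher-wage"
```

A respondent last employed at $6.50 per hour would be classified low-wage; one with a $2,600 monthly salary over 4 weeks at 40 usual hours works out to $16.25 per hour and would be higher-wage.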
We determined the $8 level by dividing the annual income for a family of four at the federal poverty level by 2,080 hours (full-year, full-time employment). We determined each person’s hourly earnings in one of two ways: (1) if the data included an hourly wage, we used the reported hourly wage in the most recent month that the person was employed or (2) if there was no reported hourly wage, we constructed an hourly wage using data from the most recent month that the person was employed (reported monthly salary divided by the number of weeks worked in the month multiplied by the number of hours usually worked during the week). In some cases, a respondent who had had a job could not be classified as either low-wage or higher-wage because of missing wage or salary data. Table 11 shows what percentage of our SIPP sample had missing wage or salary data. Step 5: We determined whether the unemployed worker reported UI as a source of income. We then identified whether each respondent had received UI benefits while out of work. If an unemployed former worker reported UI as a source of income in March of the last panel year, we classified the person as receiving UI. Otherwise, we classified the person as not receiving UI. Step 6: We identified additional characteristics of the unemployed population. We further analyzed the unemployed population by identifying the industries in which they had worked, whether they had worked full-time, and how long they had worked before becoming unemployed. To identify the kinds of industries low-wage and higher-wage unemployed workers had worked in, we used the industry code of the most recent job. Our work presents data for nine industry groups: (1) retail trade; (2) manufacturing; (3) finance; (4) mining and construction; (5) agriculture, fishing, and forestry; (6) wholesale trade; (7) transportation and utilities; (8) public administration; and (9) services. We developed these groups by combining detailed CPS industry codes.
For example, the services sector combines five types of service industries: business services, personal and entertainment services, medical services, education and social services, and professional services. Next, we identified whether the unemployed former worker had been employed full-time or part-time. We defined full-time employment as working 35 or more hours each week and part-time employment as working fewer than 35 hours per week. To determine the number of weeks worked by the person in the year immediately before he or she became unemployed, we created a subsample from our original SIPP sample. The subsample included only those who were unemployed in March of the last panel year and who had had a job during the 15 months prior to unemployment (as compared with the 27-month look-back period for our original sample). After identifying the most recent job in the 15-month period, we counted how many weeks each person had worked during the 12 months before becoming unemployed. We classified the unemployed low-wage and higher-wage workers into three categories: (1) those who had worked more than 35 weeks during the year, (2) those who had worked between 20 and 35 weeks during the year, and (3) those who had worked fewer than 20 weeks during the year. To increase the size of this subsample, we combined all four SIPP panels. We calculated the overall unemployment rate by dividing the number of unemployed by the total number of people in the labor force. The unemployment rates for low-wage and higher-wage workers were calculated on the basis of their representation in the labor force. For example, the unemployment rate for low-wage workers was calculated by dividing the number of unemployed low-wage workers by the number of low-wage workers in the labor force.
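The group unemployment-rate calculation might look like the following sketch, which also folds unemployed respondents with missing wage data into the higher-wage group, the report's conservative treatment; the count names are illustrative, not SIPP variable names:

```python
def group_unemployment_rates(counts):
    """Unemployment rate = unemployed / labor force, by wage group.
    Unemployed respondents with missing wage data are counted as
    higher-wage unemployed, which minimizes the gap between the two
    rates (the conservative assumption described in this appendix)."""
    low_labor_force = counts["employed_low"] + counts["unemployed_low"]
    high_unemployed = counts["unemployed_high"] + counts["unemployed_missing"]
    high_labor_force = counts["employed_high"] + high_unemployed
    return {
        "low-wage": counts["unemployed_low"] / low_labor_force,
        "higher-wage": high_unemployed / high_labor_force,
    }
```

With made-up counts of 280 employed and 40 unemployed low-wage workers, 650 employed and 25 unemployed higher-wage workers, and 5 unemployed with missing wage data, the low-wage rate is 12.5 percent against roughly 4.4 percent for the higher-wage group even after the conservative allocation.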
In calculating an unemployment rate for higher-wage workers, we included the cases with missing wage or salary data in such a way as to calculate the most conservative estimates, that is, estimates that would minimize any differences between the low-wage and higher-wage unemployment rates. To calculate this rate, we assumed that all those with missing wage data were actually higher-wage unemployed workers. With that assumption, the group with missing wage data was added to the higher-wage unemployed group when we calculated the higher-wage unemployment rate. The unemployment rate we calculated using the SIPP data differs for a variety of reasons from the standard unemployment rate published by the Bureau of Labor Statistics from CPS data. These two rates differ in part because each rate measures different populations. Specifically, our rate includes only those in the labor force who are between the ages of 18 and 64, whereas the standard rate includes all those who are 16 years old and older. Also, our analysis excludes those who are self-employed, while the standard rate includes self-employed workers. Another key contrast results from technical differences between the two databases used to calculate these rates. SIPP, for example, records data for each month, but the CPS records data for a 1-week period. Therefore, while employment status in SIPP measures whether or not a person was employed during the entire month, the CPS measures whether the person was employed during the week that included the 12th of the month. We calculated the overall UI rate of receipt by dividing the number of unemployed workers who had collected UI benefits during March by the total number of unemployed workers in that month.
In calculating the UI rates of receipt, we included the respondents with missing wage data in such a way as to present the most conservative estimates; that is, we allocated the missing wage data so that our estimates would minimize the difference between the receipt rates for the two groups. Therefore, to calculate the UI rate of receipt for low-wage workers, we assumed that all workers with missing wage data who were paid UI benefits were low-wage unemployed workers. Conversely, to calculate the higher-wage UI rate of receipt, we assumed that all those with missing wage data who were not paid UI benefits were higher-wage unemployed workers. The UI rate of receipt we constructed using SIPP data is not comparable to the Department of Labor’s standard UI recipiency rate. Our UI rate of receipt measures the number of people who have actually received a UI check as a percentage of the unemployed in the labor force. Labor’s standard recipiency rate, on the other hand, measures the number of people who file a claim for UI as a percentage of the unemployed in the labor force. By measuring the number of people who file a claim for benefits, Labor’s rate includes those who eventually receive UI as well as those who do not. Because the UI program differs from state to state, we analyzed whether there were differences in the UI rates of receipt for the low-wage and higher-wage unemployed that lived in different states. For purposes of comparison, we created two groups of states—those with high standard recipiency rates and those with low standard recipiency rates based on Labor’s standard recipiency rates from 1992 through 1995. States that consistently were in the 15 states with the highest standard UI recipiency rates were in one group and states that were consistently the lowest 15 were in the other. To increase the size of the sample for each of the two groups of states, we combined all four SIPP panels.
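The asymmetric allocation of missing-wage cases described above can be sketched as follows; the count names are illustrative, and the asymmetry (missing recipients treated as low-wage, missing non-recipients as higher-wage) is what pulls the two receipt rates toward each other:

```python
def ui_receipt_rates(c):
    """UI rate of receipt = recipients / unemployed, by wage group,
    with missing-wage cases allocated to minimize the low/high gap:
    missing recipients count as low-wage unemployed, and missing
    non-recipients count as higher-wage unemployed."""
    low_paid = c["low_paid"] + c["missing_paid"]
    low_unemployed = c["low_paid"] + c["low_unpaid"] + c["missing_paid"]
    high_unemployed = c["high_paid"] + c["high_unpaid"] + c["missing_unpaid"]
    return {
        "low-wage": low_paid / low_unemployed,
        "higher-wage": c["high_paid"] / high_unemployed,
    }
```

With made-up counts, 10 low-wage recipients and 30 low-wage non-recipients plus 2 missing-wage recipients yield a low-wage receipt rate of 12/42, while 15 higher-wage recipients out of 25 higher-wage unemployed plus 3 missing-wage non-recipients yield 15/28 for the higher-wage group.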
For each unemployment rate and UI rate of receipt presented in the analysis, we tested whether the differences between low-wage and higher-wage workers were statistically significant. To do this, we compared the sampling errors for the two estimates and, if the sampling errors for the low-wage and higher-wage workers did not overlap, we concluded that each difference is statistically significant. Unless otherwise noted, statistical significance was tested at the .01 level, thereby allowing us to conclude that there is only a 1 percent chance that the difference between the rates is due to sampling error. Although SIPP data are unavailable for the latter half of the 1990s, other relevant data are available from the CPS and from the Department of Labor’s BAM database. From the CPS data, it is possible to calculate the proportion of low-wage (or higher-wage) workers in the employed labor force. From the BAM data we can determine the proportion of low-wage (or higher-wage) workers who received UI, compared with the total number of unemployed workers receiving UI. Table 12 compares these data with our calculations, using the SIPP data for the years 1992 through 1995. As shown in column A of table 12, the CPS percentage of low-wage workers in the employed workforce was roughly 30 percent for the entire period. In column C, the BAM percentage of those paid UI who were low-wage workers was also roughly 30 percent during the same time. Although the table refers only to low-wage workers, the percentages for higher-wage workers can be computed as the inverse of the low-wage percentage (that is, 100 minus the low-wage percentage). Thus, the CPS percentage for higher-wage workers is roughly 70 percent, and this percentage is about the same as the BAM percentage for higher-wage workers—also about 70 percent.
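The overlap test for statistical significance described at the start of this appendix section might be sketched as follows. The report does not give its exact formula, so the interval construction and the 2.576 multiplier (a normal-approximation critical value for the .01 level) are assumptions:

```python
def significantly_different(est1, se1, est2, se2, z=2.576):
    """Treat two estimates as significantly different when their
    sampling-error intervals (estimate +/- z * standard error) do not
    overlap. z = 2.576 corresponds to the .01 level under a normal
    approximation; this multiplier is an assumption, not from the report."""
    low1, high1 = est1 - z * se1, est1 + z * se1
    low2, high2 = est2 - z * se2, est2 + z * se2
    return high1 < low2 or high2 < low1
```

Two rates of 10 and 5 percent with small standard errors of half a percentage point would be judged significantly different, while the same rates with 2-point standard errors would not, since their intervals overlap.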
Figure 7 demonstrates that the data from the CPS and BAM can be used to infer how the rate of UI receipt and the unemployment rate relate across wage groups between 1996 and 1999. It can be shown that, given the observed pattern of data from CPS and BAM, it is unlikely that the rate of UI receipt for low-wage workers would exceed that of higher-wage workers during this period. Essentially, for this to have occurred, the unemployment rates across the wage groups would have to have converged dramatically during this period—in sharp contrast to the experience of earlier years. Whereas between 1992 and 1995 the unemployment rate for low-wage workers exceeded that of higher-wage workers by well over 100 percent, as long as the unemployment rate for low-wage workers exceeded that of higher-wage workers by as little as 18 percent throughout the rest of the decade, the rate of receipt for low-wage workers remained less than that of higher-wage workers for each year from 1996 through 1999. In figure 7, RR(lw) equals the UI rate of receipt for low-wage workers and UR(lw) equals the unemployment rate of low-wage workers. For higher-wage workers, the UI rate of receipt is RR(hw) and the unemployment rate is UR(hw). Considering the highest value W had taken from 1996 through 1999 was 1.18 in 1998, RR(lw) is less than RR(hw) throughout the decade as long as UR(lw) is greater than UR(hw) by at least 18 percent. To ascertain how individual states would treat specific circumstances that might be particularly applicable to low-wage workers, and to former welfare recipients, we sent a questionnaire to state UI directors. For the most part, the questionnaire presented hypothetical situations related to unemployed workers, and asked the UI directors to determine whether the unemployed worker could qualify for UI benefits in their state program. We received responses from all 50 states.
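The inequality behind figure 7 can be checked numerically. Since RR = recipients / unemployed and unemployed = unemployment rate times labor force, RR(lw)/RR(hw) = W / [UR(lw)/UR(hw)], where W compares the low-wage share of UI recipients (from BAM) with the low-wage share of the labor force (from CPS). The sketch below is an illustrative reconstruction of that algebra, assuming the shares are expressed as fractions:

```python
def receipt_rate_ratio(paid_low_share, labor_force_low_share, unemployment_ratio):
    """Implied RR(lw) / RR(hw).
    paid_low_share:        fraction of UI recipients who are low-wage (BAM)
    labor_force_low_share: fraction of the labor force that is low-wage (CPS)
    unemployment_ratio:    UR(lw) / UR(hw)
    Since RR = paid / (UR * labor force),
    RR(lw)/RR(hw) = W / unemployment_ratio, where
    W = (paid_low / paid_high) / (lf_low / lf_high)."""
    w = (paid_low_share / (1 - paid_low_share)) / (
        labor_force_low_share / (1 - labor_force_low_share))
    return w / unemployment_ratio
```

With both shares at roughly 30 percent, W is 1 and any excess low-wage unemployment keeps the ratio below 1; even at the decade's peak W of 1.18, the low-wage rate of receipt stays lower whenever low-wage unemployment exceeds higher-wage unemployment by more than 18 percent.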
To assess whether state UI programs had changed in the 4-year period since welfare reforms, we reviewed the Department of Labor’s compilation of legislative changes in state UI laws from all 50 states. To examine the current status of the state UI programs, we included questions about the operation of UI programs in our survey of the state UI directors. We also visited four states—California, Texas, New York, and Florida—to talk with officials about the state UI program. We chose these states not only because they were the most populous states but also because they presented contrasts in how they manage their UI programs. In June 2000, we sent a questionnaire to the UI directors for the 50 states. In this appendix, we present the questions that we discuss in our report and data on how each state responded. Question 2: Does your state have an alternative base period? Question 4: (If the state had no alternative base period) In your opinion, how likely or unlikely is it that your state will adopt an alternative base period in the near future? Questions 5, 6, and 7 dealt with personal circumstances for quitting a job. For each question, the respondent was asked to assume that (a) the worker has quit a job as a stockroom clerk with a retail chain store; (b) the worker’s reasons for leaving the job are compelling and not the fault of the worker; (c) the worker is otherwise eligible to receive UI benefits. Question 5: The worker quits because child care suddenly becomes unavailable and the employer cannot reschedule the worker’s hours. Subsequently, child care becomes available. The worker files for UI. Question 6: The worker was originally hired to work during the day shift. However, the employer requires that the worker change work hours to a night shift, and the worker quits because child care is unavailable during the night shift. Question 7: The worker quits to care for a sick child (physician certifies need). Subsequently, the worker is available to resume work. 
The worker files for UI. Questions 8, 9, and 10 dealt with the requirement that a worker be “able and available” for work. For each question, it is assumed that the worker’s previous job was a stockroom clerk in a retail chain store, that the worker was laid off this job, and that the worker is otherwise eligible to receive UI. Question 8: In her prior job, the worker was employed part-time, for 30 hours a week. When she applies for UI and is asked whether she is available for work, she indicates that she is looking for work with the same hours as those of her previous job and is not able to work more hours than previously. Question 9: The worker is available to work full-time during the weekdays. However, she cannot work evenings or weekends because of the lack of affordable child care at that time. As a result, the worker is unable to take jobs that require that she work evenings or weekends. Question 10: The worker is available to work full-time but always relies on public transportation to get to work. The worker is therefore unable to take jobs that require work during a night shift because public transportation is generally not available during night hours. In addition, Michelle C. Verbrugge, Richard Kelley, Grant M. Mallie, Joan K. Vogel, and Andrew M. Davenport made key contributions to this report. Welfare Reform: States’ Implementation and Effects on the Workforce Development System (GAO/T-HEHS-99-190, Sept. 9, 1999). Welfare Reform: States’ Implementation Progress and Information on Former Recipients (GAO/T-HEHS-99-116, May 27, 1999). Welfare Reform: Information on Former Recipients’ Status (GAO/HEHS-99-48, Apr. 28, 1999). Welfare Reform: States’ Experiences in Providing Employment Assistance to TANF Clients (GAO/HEHS-99-22, Feb. 26, 1999). Welfare Reform: States Are Restructuring Programs to Reduce Welfare Dependence (GAO/HEHS-98-109, June 17, 1998). Unemployment Insurance: Program’s Ability to Meet Objectives Jeopardized (GAO/HRD-93-107, Sept. 28, 1993). The first copy of each GAO report is free.
The welfare and unemployment insurance (UI) programs have been part of the nation's social safety net since 1935. The welfare program provides cash assistance to needy families without means of support, while UI provides cash assistance to people temporarily unemployed. In 1996, welfare reform put time limits on how long most people can receive cash assistance and generally required recipients to engage in work activities to qualify for income support. Since then, the welfare rolls have dropped dramatically as large numbers of welfare recipients have started working, many in low-income jobs. With this shift, the UI program has become a more significant part of the social safety net. GAO examined the use of the UI program by low-wage and unemployed workers. GAO found that low-wage workers are less likely to receive UI benefits than are other unemployed workers even though they are twice as likely to be unemployed. Low-wage workers are less likely to receive UI benefits because of (1) their tendency to quit work voluntarily, (2) restrictive state eligibility requirements, and (3) their lack of union membership. Several UI reform proposals to expand the availability of UI benefits to these workers are being discussed by the Advisory Council on Unemployment Compensation.
You are an expert at summarizing long articles. Proceed to summarize the following text: VA comprises three major components: the Veterans Benefits Administration (VBA), the Veterans Health Administration (VHA), and the National Cemetery System (NCS). VA’s mission is “to administer the laws providing benefits and other services to veterans and dependents. . . .” The department’s vision is to be a more customer-focused organization, functioning as “One VA.” This vision stemmed from the recognition that veterans think of VA as a single entity, but often encounter a confusing, bureaucratic maze of uncoordinated programs that put them through repetitive and frustrating administrative procedures and delays. The “One VA” vision is to create versatile new ways for veterans to obtain services and information by streamlining interactions with its customers and integrating information technology resources to enable VA employees to help customers more quickly and effectively. This will require modifying or replacing separate information systems with integrated systems using common standards to share information across VA programs and with external partner organizations, such as the Department of Defense. Information technology accounted for approximately $1 billion of VA’s fiscal year 1999 budget request of $43 billion. Of the $1 billion, about $847 million, $146 million, and $5 million were for VHA, VBA, and NCS, respectively. Over the past several years, we have identified weaknesses in VA’s efforts to modernize its operations and manage its information technology resources. As we reported in 1992, VBA’s procurement of hardware was not supported by a defined information architecture, thereby increasing the risk of developing systems that would not work as intended. In June 1996, we testified that VBA needed to develop a much improved investment strategy for selecting and managing information technology projects in a more disciplined, businesslike manner. 
In January 1998, we reported that while VA made significant progress in preparing a strategic plan, dated September 30, 1997, the plan needed improvement in four major areas: (1) development of results-oriented goals, (2) descriptions of how the goals are to be achieved, (3) discussion of external factors, and (4) discussion of coordination efforts with other agencies. Finally, in October 1997, we testified on the importance of having strong CIOs at major federal agencies, such as VA, to bring about much-needed reforms in the government’s management of information technology. In addition, a panel of the National Academy of Public Administration reported in August 1997 that VBA lacks strategic planning and management capabilities that are necessary for leadership to define where the organization wants to be, enable development of specific operational plans for getting there, and provide a set of coordinating and integrating capacities for implementing planned initiatives. Recognizing the need to better manage information technology, recent legislative reforms—the Clinger-Cohen Act of 1996, the Paperwork Reduction Act of 1995, and the Federal Acquisition Streamlining Act of 1994—provide guidance to federal agencies on how to plan, manage, and acquire information technology as part of their overall information resources management responsibilities. These legislative reforms highlight the need for business process reengineering, integrated architectures, investment processes, and CIOs to help with major information resource management responsibilities. In assessing VA’s implementation of the Clinger-Cohen Act and other legislative reforms, we reviewed and analyzed numerous documents pertaining to VA’s business process reengineering, integrated information technology architecture, information technology investment decision-making process, and appointment of an agency CIO.
These documents include VA’s draft entitled One VA: Vision of Information Technology Enhanced Customer Service, dated January 22, 1998; OMB’s Memorandum on Information Technology Architectures, dated June 18, 1997; VA’s draft FY 1999 Department Capital Plan, dated October 1997; and VA’s April 1997 Progress Report on the Department of Veterans Affairs CIO Program. We discussed VA’s implementation of the Clinger-Cohen Act and other legislative reforms with Office of Management and Budget (OMB) officials and with various VA headquarters and component officials, including the Offices of the CIO, Information Resources Management, and Policy and Planning. We also interviewed a VA representative of the contractor responsible for developing VA’s information technology vision document. We used our investment guide to evaluate and assess VA’s information technology investment process. We performed our work from October 1997 through April 1998, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Veterans Affairs and they are reprinted in appendix I. More details of our objectives, scope, and methodology are included as appendix II. The Clinger-Cohen Act requires agency heads to analyze the missions of the agency and, on the basis of this analysis, revise and improve the agency’s mission-related and administrative processes before making significant investments in supporting information technology. Specifically, agencies should maximize the potential of technology to improve performance, rather than simply automating inefficient processes. According to our business process reengineering guide, an agency should have an overall business process improvement strategy that provides a means to coordinate and integrate the various reengineering and improvement projects, set priorities, and make appropriate budget decisions. 
VA has not analyzed its business processes in terms of implementing its “One VA” vision. In addition, it does not have a departmentwide business process improvement strategy specifying what reengineering and improvement projects are needed, how they are related, and how they are prioritized. VA’s Directive 6000 instructs administration heads, assistant secretaries, and other key officials to apply sound business process improvement or business reengineering methods to enhance the benefits of information technology, but the directive does not provide the guidance needed to accomplish this departmentwide effort. Specifically, VA’s strategy does not identify needed reengineering and improvement projects, describe how they are interrelated, determine the order in which they will be pursued, and define specific goals, time frames, resource requirements, and key participants for each. In the absence of a departmentwide strategy, VA components are proceeding with separate, uncoordinated efforts, which undermines the department’s “One VA” vision. For example, both VBA and VHA are building information centers which enable callers, through the use of an 800 number, to obtain information on general benefits and basic services. However, these efforts are not currently coordinated. As a result, both projects are unnecessarily providing the same functionality. A senior VA official acknowledged that VA should not be building two separate information center systems, and that doing so is not consistent with the “One VA” vision. Similarly, VA and its components have not adequately coordinated and integrated their business process improvement efforts for the development of a master veteran record (MVR). This departmentwide project is intended to electronically link VHA, VBA, NCS, and Board of Veterans’ Appeals information systems and databases to share vital information, such as death notification, change of address, representation and family status about veterans. 
However, according to the project manager, this project is experiencing difficulties because VBA will not fund a segment of the project necessary to establish a link between VBA’s compensation and pension program—VBA’s largest program—and other VA components. As a result, VBA is not in a position to obtain timely information, such as death notifications, which can result in overpayments to veterans. VA has acknowledged the need to develop a strategy to achieve the “One VA” vision and has hired a contractor to analyze the department’s business plans and information technology projects to determine how well they fit within the vision. However, to date, VA has not committed to when it will have an overall business process improvement strategy to accomplish reengineering. The Clinger-Cohen Act and recent OMB guidelines require agency CIOs to implement an architecture to provide a framework for evolving or maintaining existing information technology, and for acquiring new information technology to achieve the agency’s strategic and information technology goals. Leading organizations both in the private sector and in government use systems architectures to guide mission-critical systems development and to ensure the appropriate integration of information systems through common standards. Despite the importance of doing so, VA and its components have yet to define a departmentwide integrated architecture. For example, as we reported in May 1997, VBA did not have a complete, integrated systems architecture to help guide its new systems development activities. We therefore recommended that VBA develop such an architecture, including a security architecture and performance characteristics and standards. VA concurred with our recommendation. To formulate an approach for developing an integrated architecture, VA in March 1997 established an architecture team consisting of representatives from VA’s Office of Information Resources Management, VBA and VHA. 
This team issued a report to the VA CIO Council in May 1997 adopting the National Institute of Standards and Technology (NIST) five-layer model for its departmentwide information technology architecture. The five layers—business processes, information flows and relationships, applications processing, data descriptions, and technology—provide a framework for defining an information technology architecture. VA can use this model to help it document the baseline architecture, identify a target architecture, and develop a migration plan showing how the department will make the necessary transition from its existing architecture to the target architecture. Despite the VA architecture team’s efforts, VA does not yet have a departmentwide target architecture and migration plan. While a baseline architecture has been established, VA has not addressed key aspects of the target architecture, such as information flows, data descriptions, and common technical standards that would apply to VBA, VHA, and NCS. According to VA’s CIO, the department has not addressed these aspects because it is waiting for the Strategic Management Steering Committee to take a position on the proposal to develop a departmentwide target architecture and establish a program office to implement the architecture and related efforts, including business process reengineering and customer service improvements. While information technology represents $625 million or 80 percent of VA’s proposed $786 million capital investment budget for fiscal year 1999, the department lacks an effective process for selecting, controlling, and evaluating its information technology projects as investments. VA’s newly developed selection process, used for the first time in the fiscal year 1999 budget cycle, is incomplete, undisciplined, and does not satisfy the selection process requirements specified in the Clinger-Cohen Act.
For example, decisionmakers did not have adequate information pertaining to project cost, benefits, risk, and performance measures to make well-informed decisions. Also, VA’s process for monitoring and controlling its investment portfolio was incomplete and provided little information to VA decisionmakers reviewing ongoing projects. Finally, VA’s process for evaluating completed projects did not include reviews to (1) determine the causes of major differences between actual and expected results in terms of cost, schedule, and performance, and (2) revise investment processes on the basis of lessons learned. As a result of these weaknesses, the department does not know whether it is making the right investments, how to control these investments effectively, or whether these investments have provided mission-related benefits in excess of their costs. The Clinger-Cohen Act requires agency heads to implement an approach for maximizing the value and assessing and managing the risks of information technology investments. It stipulates that this approach should be integrated with the agency’s budget, financial, and program management processes. According to our investment guide, an information technology investment process is an integrated approach that provides for disciplined, data-driven identification, selection, control, life-cycle management, and evaluation of information technology investments. Information from one phase is used to support activities in the other phases. When identifying information technology investments to be managed at the department level, leading organizations use criteria that include (1) high-dollar, high-risk projects, (2) cross-functional projects (two or more organizational units benefitting from the projects), and (3) common infrastructure support, such as hardware and telecommunications. 
Once selected, information technology projects in the investment portfolio are consistently controlled and managed through progress reviews at key milestones in a project’s life cycle. Progress reviews should include assessing deliverables, technical issues, schedule, costs, and risks. Finally, once a project has been fully implemented, a post-implementation review or evaluation should be conducted, comparing actuals against estimates in order to assess performance and identify areas where future decision-making can be improved. VA defined a new decision-making process for information technology investments and conducted a “dry run” in formulating the fiscal year 1999 budget. As depicted in figure 1, this process began with VA’s staff offices and components submitting information packages about their capital investment projects to an Information Technology Strategic Planning Working Group. This group—composed of project decisionmakers representing the VA Central Office’s Office of Information Resources Management, Office of Financial Management, VBA, VHA, Board of Veterans’ Appeals, Office of Planning and Policy, and NCS—was created to assist VA’s CIO Council in its review of prospective projects for funding. The working group used risk and return criteria, which included factors such as investment size, project longevity, technical risk, business impact on mission, and customer needs. The working group then forwarded the scored and ranked projects to VA’s CIO Council, which is responsible for ensuring that the information technology projects are well planned, completely documented, support the strategic plan and corporate goals, and are mission critical. After its review, the council forwarded the scored and ranked projects to VA’s Capital Investment Panel. This panel, which is composed of project decisionmakers representing the VA CFO, VA CIO, VHA, VBA, VA Office of Planning and Policy, and NCS, was created to assist VA’s Capital Investment Board. 
The panel reviewed the projects from the CIO Council and recommended to the VA Capital Investment Board that these projects be included in the department’s capital investment plan for the upcoming year. The board is composed of the Deputy Secretary, Assistant Secretary for Management, Assistant Secretary for Policy and Planning, Under Secretaries for VHA and VBA, and the Director of NCS and is responsible for making decisions on capital investment projects and ensuring that the projects conform with VA mission, goals, priorities, and strategies. Under the Clinger-Cohen Act, agencies need to compare and prioritize projects using explicit quantitative and qualitative decision criteria, such as data on hardware and software life-cycle costs, technical risks, and mission-related benefits. In conducting their selection processes, leading organizations assess and manage all information technology projects, including mission-critical or infrastructure projects, at all phases of their life cycles, in order to create a complete strategic investment portfolio and help ensure that the benefits of their investments will be realized. By continually scrutinizing and analyzing their entire information technology investment portfolio, managers can examine the costs of maintaining existing systems versus investing in new ones and, on the basis of mission priorities, reach decisions on systems’ overall contributions to organizational goals. As stated in our investment guide, good decisions require good data. To help make decisions on information technology investments, leading organizations require all projects to have complete and up-to-date project information. This information includes cost and benefit data, risk assessments, implementation plans, and initial performance measures. 
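A weighted risk-and-return scoring of the kind the working group applied can be sketched as follows. The criterion names are taken from the factors described above; the weights, rating scale, and sample projects are hypothetical illustrations, not VA's actual scoring model:

```python
# Hypothetical sketch of a weighted risk-and-return scoring model for
# ranking information technology projects. Criterion names follow the
# working group's factors; weights and ratings are illustrative only.
WEIGHTS = {
    "investment_size": 0.15,
    "project_longevity": 0.15,
    "technical_risk": 0.25,   # higher rating = lower perceived risk
    "mission_impact": 0.30,
    "customer_needs": 0.15,
}

def score_project(ratings):
    """Weighted sum of 1-5 ratings against each criterion."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

def rank_projects(projects):
    """Return (name, score) pairs, highest score first."""
    scored = [(name, score_project(r)) for name, r in projects.items()]
    return sorted(scored, key=lambda p: p[1], reverse=True)

# Hypothetical project ratings, for illustration only.
projects = {
    "Replace C&P Payment System": {"investment_size": 4, "project_longevity": 5,
                                   "technical_risk": 3, "mission_impact": 5,
                                   "customer_needs": 4},
    "800-Number Info Center":     {"investment_size": 2, "project_longevity": 3,
                                   "technical_risk": 4, "mission_impact": 3,
                                   "customer_needs": 5},
}
for name, s in rank_projects(projects):
    print(f"{name}: {s:.2f}")
```

In practice, a scored and ranked list of this kind would then move to the CIO Council and the Capital Investment Panel for review.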
Further, this information allows senior executives to rigorously evaluate each project, make project comparisons across the organization, and establish project review schedules for projects selected for funding in order to monitor and track project cost, benefits, and risks. VA’s selection criteria require that relevant reports (e.g., congressionally requested audits and studies, VA in-process and post-implementation review findings), cost-benefit analyses, risk analyses, and risk management plans be provided to decisionmakers reviewing projects for funding. VA did not follow a disciplined process for selecting its information technology projects. Specifically, the VA Capital Investment Board was not provided sufficient data with which to make good funding decisions. In our analysis of VA’s selection process, we examined 7 of 16 projects approved by the board. These seven projects represent about $223 million or 36 percent of VA’s fiscal year 1999 information technology capital investment budget of $625 million. As shown in table 1, none of the seven projects we examined contained all the required information. Further, despite the importance of VA’s Veteran-Focused Information Technology Architecture (ITA) program to defining and achieving the “One VA” vision, none of the required information for this project was provided to the board. Nonetheless, the board decided to fund all seven projects. Further, the board did not establish a schedule for conducting project reviews, at key milestones, for each approved project. Recognizing the weaknesses in the investment selection process, the Deputy CIO, in an August 4, 1997, memorandum to VA’s CIO, recommended that this process be improved in the next budget cycle. For example, she recommended that (1) adequate documentation be provided for all information technology projects and (2) adequate time be provided for thorough reviews of the documentation prior to scoring and ranking.
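The documentation gaps summarized in table 1 amount to a failed completeness check against the required items. A minimal sketch of such a check, assuming the four required categories named in VA's selection criteria (the sample project package is hypothetical):

```python
# Sketch of a documentation completeness check against VA's selection
# criteria. The required categories come from the criteria described
# above; the sample project package is hypothetical.
REQUIRED = {
    "relevant reports",        # e.g., audit and in-process/PIR findings
    "cost-benefit analysis",
    "risk analysis",
    "risk management plan",
}

def missing_documents(submitted):
    """Return the required items absent from a project's package."""
    return sorted(REQUIRED - set(submitted))

package = ["cost-benefit analysis", "risk analysis"]
print(missing_documents(package))
# A package with missing items would be returned to the originating
# office before scoring and ranking.
```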
Seven months later, the CIO in a memorandum to VA’s administration heads, assistant secretaries, and other key officials, specified that changes would be made to the department’s capital investment process for the fiscal year 2000 budget cycle to ensure the provision of adequate documentation and adequate documentary review. In addition, the memorandum stated that projects with incomplete documentation will be returned to the originating office for the missing information. The memorandum did not address what action will be taken if the missing documentation is not provided. Leading organizations continue to manage their investments once selection has occurred, maintaining a cycle of continual control and monitoring. Senior managers review the project at specific milestones as the project moves through its life cycle and as the dollar amounts spent on the project increase. At these milestones, the executives compare the expected costs, risks, and benefits of earlier phases with the actual costs incurred, risks encountered, and performance benefits realized to date. This enables senior executives to (1) identify and focus on managing high-potential or high-risk projects, (2) reevaluate investment decisions early in a project’s life cycle if problems arise, (3) respond to changing external and internal conditions in mission priorities and budgets, and (4) learn from past successes and mistakes in order to make better decisions in the future. During the control phase, senior executives determine if projects should be functionally modified, continued, accelerated, delayed, or terminated. As executives responsible for implementing legislative reform, it is critical that senior managers stay actively involved in the process for controlling information technology projects and receive complete and up-to-date information related to the projects under review. 
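The milestone comparison described above, in which actual cost and schedule are checked against the estimates approved at selection, can be sketched as a simple variance check; the 10 percent threshold and the sample figures are illustrative assumptions, not VA or GAO policy:

```python
# Sketch of an in-process milestone review: compare actuals to estimates
# and flag variances that warrant management attention. The 10% threshold
# and the sample project figures are illustrative assumptions.
def variance(actual, estimate):
    """Fractional deviation of actual from estimate."""
    return (actual - estimate) / estimate

def milestone_review(project, threshold=0.10):
    """Flag any cost or schedule variance beyond the threshold."""
    flags = []
    for metric in ("cost", "schedule_months"):
        v = variance(project[f"actual_{metric}"], project[f"estimated_{metric}"])
        if abs(v) > threshold:
            flags.append(f"{metric} variance {v:+.0%}")
    return flags

project = {
    "estimated_cost": 40_000_000, "actual_cost": 52_000_000,
    "estimated_schedule_months": 24, "actual_schedule_months": 25,
}
print(milestone_review(project))  # cost overrun flagged; schedule within tolerance
```

In a proactive, risk-based review regime, a flag like the cost overrun above would trigger a formal in-process review before further funds are expended.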
To control and monitor its information technology projects, VA relies on periodic project status reviews and formal in-process reviews. Periodic project status reviews are conducted at the VA component level. Formal in-process reviews are conducted at the department level. According to VA’s policy, formal in-process reviews are only conducted ad hoc, such as when it becomes apparent that a project is behind schedule, over-budget, not performing as planned, or when oversight agencies raise issues. VA’s process for monitoring and managing its investment portfolio is not timely and provides little information to VA decisionmakers. First, VA does not conduct formal in-process reviews before significant dollars are expended or substantial risks are encountered. For example, VA had initially scheduled an in-process review of a VBA project to replatform and redesign a system that provides educational benefits to reservists. However, the in-process review was canceled when the project ran into problems and the project is now being reassessed. The problems associated with this project might have been avoided had VA conducted proactive, risk-based in-process reviews at all critical project milestones. Second, to the extent that periodic project status reviews and formal in-process reviews are conducted, the results of the reviews were not provided to decisionmakers reviewing projects for funding. For example, of the 15 major ongoing or maintenance projects that the VA investment board approved for funding, only one, VBA’s Replacement of the Compensation and Pension Payment System project, received a formal in-process review during fiscal year 1997. However, as shown in table 1, decisionmakers were not provided with the results of this review. Therefore, they were not in a position to effectively monitor and manage this project. 
Once projects have been implemented and become operational, leading organizations conduct post-implementation reviews (PIRs) to determine whether they have achieved expected benefits, such as lowered cost, reduced cycle time, increased quality, or increased speed of service delivery. Our information technology investment guide points out that each PIR should have a dual focus. First, it should provide an assessment of the implemented project, including an evaluation of customer/user satisfaction and mission/program impact in terms of achieving the estimated cost, schedule, and mission-related benefits. Second, it should provide lessons learned so that the investment decision-making processes can be improved. VA has developed a standard methodology for conducting PIRs. This methodology focuses on elements, such as: (1) customer/user satisfaction, (2) strategic impact and effectiveness, and (3) impact on organization’s internal operations including security, internal controls, standards and compliance, and maintenance. Our review identified deficiencies with VA’s process for evaluating completed projects. First, while the three PIRs VA performed during fiscal years 1996 and 1997 gathered information on customer/user satisfaction and discussed development and implementation challenges, none of them compared actuals to estimates in terms of cost, schedule, and mission-related benefits. For example, while the PIRs discuss cost savings, they do not provide information on whether the projects met, exceeded, or fell short of expectations. Second, VA did not identify lessons learned that can be used to improve VA’s investment process for selecting, controlling, and evaluating information technology initiatives. Our review of the three PIRs VA performed disclosed that the PIRs did identify some project specific improvements. 
For example, based on a PIR of VHA’s Integrated Funds Distribution, Control Point Activity, Accounting and Procurement system, VHA subsequently modified this system to ensure appropriate security access. However, none of the PIRs assessed the completed projects to identify improvements that could be made to VA’s information technology investment process. The Paperwork Reduction Act and the Clinger-Cohen Act direct federal agency heads to appoint CIOs to (1) promote improvements to the work processes used by the agency to carry out its programs, (2) implement an integrated agencywide systems or technology architecture, and (3) help to establish a sound investment review process to select, control, and evaluate spending for information technology. To help ensure that these responsibilities are effectively executed, the Clinger-Cohen Act also requires that the CIO’s primary responsibility be related to information management. VA’s CIO responsibilities are not limited primarily to information management. The CIO also serves the department in a variety of top management positions, including Assistant Secretary for Management, CFO, and Deputy Assistant Secretary for Budget. In an agency as decentralized as VA, its CIO is faced with many significant information management responsibilities, such as ensuring (1) that the department’s operations will not be disrupted by the Year 2000 problem, (2) that its systems developments are not handicapped by incomplete architectures, and (3) that a sound information management investment review process that provides a systematic, data-driven means of selecting, controlling, and evaluating information technology projects will be institutionalized. As we testified in October 1997, each of these responsibilities is formidable. Taken together, they certainly constitute a full-time job for any CIO. We have raised concerns in the past about agencies that have vested CIO and CFO responsibilities in one person. 
Agencies face challenges in improving both financial and information management. In our opinion, each management area requires full-time leadership by separate individuals with appropriate talent, skills, and experience in these two areas. The Clinger-Cohen Act calls for CIOs to have information resources management as their primary duty. We have stressed the importance of this principle in testimony and in our February 1997 high-risk report, in which we emphasized that the CIO’s duties should focus sharply on strategic information management issues and not include other major responsibilities. In a May 1997 report to OMB, VA’s Assistant Secretary for Management acknowledged that he was the department’s CIO as well as its CFO. He indicated that the VA Secretary felt that assigning multiple responsibilities to the department’s CIO would establish clear accountability for information resources management activities at VA, where financial systems represent a substantial part of the agency’s information systems portfolio. However, officials familiar with the current information management environment at VA and its components told us that VA’s CIO is unable to get involved in the normal, day-to-day business of a CIO unless a problem arises that absolutely demands his attention. Moreover, VA’s CIO told us that because he does not have a technical background in information resources management, he relies on his deputy. VA’s Deputy CIO, however, told us that since she has not been officially delegated the decision-making authority that the CIO has, she cannot make important information technology decisions promptly. For example, the Deputy CIO recognized problems VBA was having with the Veterans Service Network (VETSNET). Consequently, she wrote a plan to correct the problems and briefed the CIO. Despite the problems that VETSNET has encountered and the significance of this project to VA, the CIO has not yet acted on this plan beyond presenting the plan to the VBA CIO.
The Deputy CIO stated that she does not have the authority to ensure that this corrective action plan is enacted. As a result, she added, such issues, when left unaddressed, tend to evolve into different issues or problems later. The CIO recently told us that he would soon step down from his Assistant Secretary for Management/CFO/CIO positions and assume the position of VA’s Deputy Assistant Secretary for Budget. It is not known at this time whether the new Assistant Secretary will also hold the CIO and CFO positions. According to VA’s Director of Information Resource Management Policy and Standards Service, VA recently formed a working group to determine whether to separate the department’s CIO and CFO positions. The working group has submitted several options on this matter to VA’s Secretary for consideration. VA has not fully implemented critical provisions of the Clinger-Cohen Act and other information technology legislative reforms to achieve its “One VA” vision of becoming more customer-focused and delivering seamless service to veterans. It lacks a departmentwide strategy for reengineering and improving business processes. As a result, business process reengineering efforts at the component levels are uncoordinated, duplicative, and do not provide VA with opportunities to share information. Further, while VA recognizes the importance of defining a departmentwide integrated information technology architecture, it has not yet done so. Without an integrated architecture, VA will continue to develop duplicative and redundant information systems and will not accomplish its vision of “One VA.” In addition, VA has not institutionalized a disciplined investment management process. Decisionmakers continue to make investment decisions involving millions of dollars without reliable data on expected and actual costs, benefits, and risks. 
Moreover, VA’s process for controlling information technology projects through periodic status and in-process reviews does not adequately monitor and manage its investments so as to detect or avoid problems early. Further, VA’s process for evaluating completed projects does not modify and improve the investment process based on lessons learned. Finally, given the size of VA’s information technology budget and the many serious information management issues its CIO must face, such as ensuring that the department’s operations will not be disrupted by the Year 2000 problem, it is important that information resources management be the CIO’s primary duty. A full-time CIO would help ensure adequate coverage of information management issues. Information resources management is not the primary duty of VA’s CIO. He also serves as Assistant Secretary for Management, CFO, and Deputy Assistant Secretary for Budget. We recommend that the Secretary of Veterans Affairs direct the Assistant Secretary for Policy and Planning to develop a departmentwide strategy that details how VA will reengineer its business processes, including identifying and prioritizing process improvement projects, and delineating their interrelationships. 
To fulfill the requirements of the Clinger-Cohen Act and other information technology legislative reforms, we also recommend that the Secretary direct VA’s CIO to:

- develop a detailed implementation plan with milestones for completing an integrated, departmentwide information technology architecture;
- fully implement a disciplined process for selecting information technology investments in which all decisions are based upon complete and current project information, including estimated project costs, expected mission-related benefits, projected schedule, and risks;
- conduct formal in-process reviews at key milestones in a project’s life cycle, including comparing actual and estimated project costs, benefits, schedule, and risks, and provide these results, as well as the results of periodic project status reviews performed by VA components, to decisionmakers who will determine whether to continue, accelerate, or terminate information technology projects; and
- initiate post-implementation reviews for information technology projects within 12 months of implementation, to compare completed project cost, schedule, performance, and mission improvement outcomes with original estimates, and provide the results of these reviews to decisionmakers so that improvements can be made to VA’s information technology investment process.

In addition, we recommend that the Secretary appoint a CIO with full-time responsibilities for information resources management. In commenting on a draft of this report, the Department of Veterans Affairs concurred with all six of our recommendations. The department also stated that it recognizes that its information resources management challenges are broad and critical to the success of the department’s mission, and, therefore, established the position of Assistant Secretary to serve as CIO reporting directly to the Secretary on all information resources issues.
This new Assistant Secretary will be responsible for ensuring that all of the department’s information technology initiatives support the overall “One VA” vision. Finally, in concurring with our recommendation to complete an integrated, departmentwide information technology architecture, the department did not specify how and when it plans to do so. Until it completes and implements an integrated architecture, VA will continue to develop duplicative and redundant information systems and will not accomplish its vision of “One VA.” As agreed with your offices, we will not distribute this report until 5 days after its date. At that time, we will send copies to the Chairman and Ranking Minority Member of the Subcommittee on Oversight and Investigations, House Committee on Veterans’ Affairs; and the Chairman and Ranking Minority Member of the Subcommittee on Benefits, House Committee on Veterans’ Affairs. We will also provide copies to the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations; the Secretary of Veterans Affairs; and the Director of the Office of Management and Budget. Copies will also be made available to others upon request. Please contact me at (202) 512-6253 or by e-mail at [email protected] if you have any questions concerning this report. Major contributors to this report are listed in appendix III. Our objectives were to examine how VA has implemented the following specific provisions of the Clinger-Cohen Act and other legislative reforms: reengineering business processes before acquiring information technology, completing an integrated information technology architecture, institutionalizing a disciplined information technology investment decision-making process; and appointing an agency CIO. In examining VA’s reengineering of its business processes, we applied GAO’s guide for business process reengineering. 
We also reviewed VA’s draft One VA: Vision of Information Technology Enhanced Customer Service, dated January 22, 1998, and its Strategic Plan—Fiscal Years 1998-2003, dated September 30, 1997. In addition, we discussed VA business process revision activities with VA, VBA, VHA, NCS, and OMB officials. Regarding VA’s information technology architecture, we applied OMB’s Memorandum on Information Technology Architecture, dated June 18, 1997, and the National Institute of Standards and Technology’s Special Publication 500-167, “Information Management Directions: The Information Challenge.” We also reviewed agency documents and interviewed VA officials on the department’s efforts to develop an integrated information technology architecture. To assess VA’s information technology investment process, we applied applicable requirements from the Clinger-Cohen Act of 1996, the Paperwork Reduction Act of 1995, the Government Performance and Results Act of 1993, the Federal Acquisition Streamlining Act of 1994, the Chief Financial Officers Act of 1990, OMB Circular A-130, GAO’s best practices report on strategic information management, and OMB’s guide Evaluating Information Technology Investments: A Practical Guide. We reviewed and analyzed numerous documents provided by VA, including its (1) Strategic Plan—Fiscal Years 1998-2003, dated September 30, 1997, (2) Information Technology Strategic Plan—FY 1999-FY 2003, dated July 1997, (3) Office of Information Resources Management—IRM Policy and Standards Service—Information Technology Evaluation Process, dated November 4, 1997, (4) Directive 6000—VA Information Resources Management (IRM) Framework, dated September 17, 1997, (5) draft Department of Veterans Affairs FY 1999 Department Capital Plan, dated October 1997, and (6) VA’s Information Technology Strategic Planning, dated March 1997. 
In addition, we compared VA's information technology investment plans and process documents with selected criteria following GAO's guide for evaluating and assessing federal agencies' selection and management of information technology resources as well as OMB's Capital Programming Guide (Version 1.0) Supplement to OMB Circular A-11, Part 3: Planning, Budgeting, and Acquisition of Capital Assets, dated July 1997. We also applied criteria from our investment guide to our review of seven VA information technology projects approved for funding in VA's fiscal year 1999 budget cycle. These seven projects were selected based on a variety of factors, including some of interest to congressional oversight committees, some that exhibited potential duplication of project functionality, and the single highest cost departmentwide project. We interviewed key VA, VBA, VHA, NCS, and OMB officials regarding the department's information technology investment process. Finally, to assess VA's implementation of the CIO provision of the Clinger-Cohen Act, we analyzed (1) VA's April 1997 Progress Report on the Department of Veterans Affairs CIO Program, (2) VA's strategic, IRM, and information technology plans mentioned above, and (3) OMB documentation regarding CIOs. We also interviewed key VA, VBA, VHA, NCS, and OMB officials regarding the duties and responsibilities of CIOs.
Helen Lew, Assistant Director
Nabajyoti Barkakati, Technical Assistant Director
John T. Christian, Senior Business Process Analyst
Mary J. Dorsey, Information Systems Analyst-in-Charge
Michael P. Fruitman, Communications Analyst
Thomas M. McDonald, Senior Business Process Analyst
John P. Rehberger, Senior Information Systems Analyst
John A. Riley, Senior Business Process Analyst
Pursuant to a congressional request, GAO reviewed how the Department of Veterans Affairs (VA) has implemented specific provisions of the Clinger-Cohen Act and other legislative reforms, including: (1) reengineering business processes before acquiring information technology; (2) completing an integrated information technology architecture; (3) institutionalizing a disciplined information technology investment decisionmaking process; and (4) appointing an agency Chief Information Officer (CIO). GAO noted that: (1) VA has not fully implemented critical provisions of the Clinger-Cohen Act and other legislative reforms; (2) although VA has taken some initial steps, it has not adequately implemented these legislative reforms; (3) specifically, the Clinger-Cohen Act requires agencies to analyze their mission-related and administrative processes, and on the basis of this analysis, revise and improve these processes before making significant investments in supporting information technology; (4) although GAO's business process reengineering guide states that agencies should have an overall business process improvement strategy to accomplish reengineering, VA has not developed such a strategy; (5) VA also has not yet defined the departmentwide integrated information technology architecture needed to efficiently utilize information systems across the department; (6) in addition, VA has not institutionalized a disciplined process for selecting, controlling, and evaluating information technology as investments as required by the Clinger-Cohen Act; (7) specifically, VA decisionmakers did not have current and complete information such as cost, benefit, schedule, risk, and performance data at the project level, which is essential to making sound investment decisions; (8) in addition, VA's process for controlling and evaluating its investment portfolio is incomplete and, as a result, decisionmakers do not have the information needed to: (a) detect or avoid problems early; and (b) 
improve VA's investment process; (9) as a consequence, the department does not know whether it is making the right investments, how to control these investments effectively, or whether these investments have provided mission-related benefits in excess of their costs; (10) although the Clinger-Cohen Act requires agencies' CIOs to have information management as their primary duty, the responsibilities of VA's CIO are not limited primarily to information management; (11) instead, the CIO also functions as the department's Assistant Secretary for Management and Chief Financial Officer; and (12) as a result, information technology issues are not addressed promptly.
The BRAC 2005 process consisted of a series of legislatively prescribed steps as follows: DOD proposed the selection criteria. DOD was required to propose the selection criteria to be used to develop and evaluate the candidate recommendations, consistent with considerations specified in the statute authorizing BRAC 2005. The criteria were to be made available for public comment in the Federal Register. Congress subsequently codified the eight final BRAC selection criteria used in BRAC 2005. The BRAC statute directed GAO to evaluate the selection criteria. Figure 1 displays the eight criteria. Importantly, Congress specified that the first four criteria relating to enhancing military value were to be the priority criteria. DOD developed a force structure plan and infrastructure inventory. Congress required the Secretary of Defense to develop and submit to Congress a force structure plan laying out the numbers, size, and composition of the units that constitute U.S. defense forces—for example, divisions, ships, and air wings—based on the Secretary's assessment of the probable national security threats over the ensuing 20-year period, and an inventory of global military installations. The BRAC statute directed GAO to evaluate the force structure plan and infrastructure inventory. Secretary of Defense was required to provide certain certifications. On the basis of the force structure plan, infrastructure inventory, and accompanying analyses, the Secretary of Defense was required to certify whether the need existed for the closure or realignment of military installations. If the Secretary certified that the need existed, he was also required to certify that the round of closures and realignments would result in annual net savings for each of the military departments beginning not later than fiscal year 2011. The BRAC statute directed GAO to evaluate the need for the 2005 BRAC round.
DOD began to develop options for closure or realignment recommendations. The military departments developed service-specific installation closure and realignment options. In addition, OSD established seven joint cross-service teams, called joint cross-service groups, to develop options across common business-oriented functions, such as medical services, supply and storage, and administrative activities. These closure and realignment options were reviewed by DOD’s Infrastructure Executive Council—a senior-level policy-making and oversight body for the entire process. Options approved by this council were submitted to the Secretary of Defense for his review and approval. DOD developed hundreds of closure or realignment options for further analysis, which eventually led to DOD’s submitting over 200 recommendations to the BRAC Commission for analysis and review. The BRAC statute directed GAO to analyze the recommendations of the Secretary and the selection process, and we issued our report to the congressional defense committees on July 1, 2005. BRAC Commission performed an independent review of DOD’s recommendations. After DOD selected its base closure and realignment recommendations, it submitted them to the BRAC Commission, which performed an independent review and analysis of DOD’s recommendations. The Commission could approve, modify, reject, or add closure and realignment recommendations. Also, the BRAC Commission provided opportunities to interested parties, as well as community and congressional leaders, to provide testimony and express viewpoints. The Commission then voted on each individual closure or realignment recommendation, and those that were approved were included in the Commission’s report to the President. In 2005, the BRAC Commission reported that it had rejected or modified about 14 percent of DOD’s closure and realignment recommendations. President approved BRAC recommendations. 
After receiving the recommendations, the President was to review the recommendations of the Secretary of Defense and the Commission and prepare a report by September 23, 2005, containing his approval or disapproval of the Commission’s recommendations as a whole. Had the President disapproved of the Commission’s recommendations, the Commission would have had until October 20, 2005, to submit a revised list of recommendations to the President for further consideration. If the President had not submitted a report to Congress of his approval of the Commission’s recommendations by November 7, 2005, the BRAC process would have been terminated. The President submitted his report and approval of the 2005 Commission’s recommendations on September 15, 2005. Congress allowed the recommendations to become binding. After the President transmitted his approval of the Commission’s recommendations to Congress, the Secretary of Defense would have been prohibited from implementing the recommendations if Congress had passed a joint resolution of disapproval within 45 days of the date of the President’s submission or the adjournment of Congress for the session, whichever was sooner. Since Congress did not pass such a resolution, the recommendations became binding in November 2005. Congress established clear time frames for implementation. The BRAC legislation required DOD to complete recommendations for closing or realigning bases made in BRAC 2005 by September 15, 2011—6 years from the date the President submitted his approval of the recommendations to Congress. Figure 2 displays the timeline of the BRAC 2005 round. GAO identified several factors and challenges that contributed to DOD’s implementation of BRAC 2005 and the results achieved. In contrast to other BRAC rounds that were primarily focused on achieving savings by reducing excess infrastructure, the Secretary of Defense identified three goals for BRAC 2005. 
Specifically, BRAC 2005 was intended to transform the military, foster jointness, and reduce excess infrastructure to produce savings. These goals and the primary selection criteria’s focus on enhancing military value led DOD to identify numerous recommendations that were designed to be transformational and enhance jointness, thereby adding to the complexity the Commission and DOD faced in finalizing and implementing the BRAC recommendations. Some key challenges that have confronted or continue to confront DOD or the Commission in regard to BRAC 2005 are as follows. Some transformational-type BRAC recommendations required sustained senior leadership attention and a high level of coordination among many stakeholders to complete by the required date. The consolidation of supply, storage, and distribution functions within the Defense Logistics Agency is an example of an atypical use of the BRAC process. The supply, storage, and distribution BRAC recommendation is transformational because it focuses on complex business process reengineering efforts involving the transfer of personnel and management functions. As we previously reported, the Defense Logistics Agency was faced with the potential for disruptions to depot operations during implementation of the BRAC consolidation recommendation and took certain steps we have identified as best practices to minimize the potential for disruption. These included committing sustained high-level leadership and including relevant stakeholders in an organizational structure to address implementation challenges as they arose. To implement the BRAC recommendations, the agency had to develop strategic agreements with the services that ensured that all stakeholders agreed on its plans for implementation, and had to address certain human capital and information technology challenges. 
Similarly, another type of transformational BRAC recommendation that required sustained senior leadership attention was the establishment of the Navy’s Fleet Readiness Centers. DOD expects this BRAC recommendation to produce significant savings; however, as we reported, this BRAC recommendation required sustained senior leadership attention to ensure effective completion. Our prior work states that sustained leadership is necessary to achieve workforce reorganizations and agency goals. Implementation of some transformational BRAC recommendations— especially those where a multitude of organizations and units all had roles to play to ensure the achievement of the goals of the recommendation— illustrated the need to involve key stakeholders and effective planning. For example, to transform the reserve forces in many states, the Army had planned to implement 44 BRAC recommendations to construct 125 new Armed Forces Reserve Centers by September 15, 2011. As we previously reported, the Army identified several potential challenges, including completing all of the construction within the statutory implementation period, changing force structure and mission requirements that could affect the capacity of the new centers, and realizing efficiencies based on limited testing of new construction processes. Conversely, as we also previously reported, the Air Force used a consultative process that involved stakeholders to assign new missions to units that would lose flying missions as a result of 37 BRAC recommendations affecting 56 Air National Guard installations. As a result of this consultative process, Air National Guard units affected by BRAC 2005 were assigned replacement missions, of which 83 percent were highest priority, mission-critical missions, or a new flying mission. However, implementation of these BRAC recommendations led to other challenges that required significant stakeholder coordination. 
These challenges included the limited capacity of Air National Guard headquarters to develop new unit staffing documents, the need to retrain personnel for an intelligence mission at a rate that exceeded the capacity of the relevant school, and the fact that Air National Guard headquarters had not identified bridge missions for all units facing a delay between the loss of their old flying mission and the startup of their replacement mission. DOD established a specific organizational structure to overcome likely obstacles and help achieve desired goals. OSD emphasized the need for joint cross-service groups to analyze common business-oriented functions for BRAC 2005, an approach made more important by the desire to develop transformational BRAC recommendations. As with the 1993 and 1995 BRAC rounds, these joint cross-service groups performed analyses and developed closure and realignment options in addition to those developed by the military services. However, our evaluation of DOD's 1995 round indicated that the joint cross-service groups submitted options through the military services for approval, resulting in few being approved. Conversely, the number of BRAC recommendations developed by the joint cross-service groups increased significantly in the BRAC 2005 round. This was due, in part, to high-level leadership ensuring that the options were reviewed by a DOD senior-level group, known as the Infrastructure Steering Group, rather than by the military services. As shown in figure 3, the Infrastructure Steering Group was placed organizationally on par with the military departments. DOD had to develop BRAC oversight mechanisms to improve accountability for implementation of the BRAC recommendations. For the first time, OSD required the military departments to develop business plans to better inform OSD of the financial details and implementation status of each of the BRAC 2005 recommendations and to facilitate OSD oversight.
These business plans included information such as a listing of all actions needed to implement each recommendation; schedules for personnel relocations between installations; and updated cost and savings estimates by DOD based on more accurate and current information. This approach permitted senior-level intervention if warranted to ensure completion of the BRAC recommendations by the statutory completion date. Additionally, OSD recognized that the business plans would serve as the foundation for the complex program management necessary to implement the particularly complex transformational BRAC 2005 recommendations, and to delineate resource requirements and generate military construction requirements. Interdependent recommendations affected DOD’s ability to meet the statutory deadline. Many of the BRAC 2005 recommendations were interdependent and had to be completed in a sequential fashion within the statutory implementation period. In cases where interdependent recommendations required multiple relocations of large numbers of personnel, delays in completing one BRAC recommendation had a cascading effect on the implementation of other recommendations. Specifically, DOD had to synchronize the relocations of over 123,000 people with about $24.7 billion in new construction or renovation. Commission officials told us that unlike prior BRAC rounds where each base was handled by a single integrated recommendation, in BRAC 2005, many installations were simultaneously affected by multiple interconnected BRAC recommendations. For example, as we have previously reported, as part of the BRAC recommendation to close Fort Monmouth, New Jersey, personnel from the Army’s Communications- Electronics Life Cycle Management Command located at Fort Monmouth were to relocate to Aberdeen Proving Ground, Maryland. 
To accommodate the incoming personnel from Fort Monmouth, Army officials planned to renovate facilities that were occupied at the time by a training activity that was to relocate to Fort Lee, Virginia, as part of another BRAC recommendation. However, delays in completing new facilities at Fort Lee delayed the relocation of the training activity from Aberdeen, which in turn delayed the renovation of the Aberdeen facilities to support the Fort Monmouth closure. Similarly, two buildings at Fort Belvoir, Virginia, were to house certain Army organizations moving from leased space as part of a BRAC recommendation. However, the buildings at Fort Belvoir were occupied at the time by the Army Materiel Command, which was to relocate to Huntsville, Alabama, as part of another BRAC recommendation. Construction delays at the Huntsville location delayed the command’s ability to move, which in turn delayed renovation of the space they were to vacate, consequently holding up the ability of the new occupants to relocate from the leased space. Given the complexity of these interdependent recommendations, OSD required the military services and defense agencies to periodically brief it on implementation challenges and progress. Some complex sets of individual actions were combined within individual BRAC recommendations, complicating the Commission’s review process. The scale of BRAC 2005 posed a number of challenges to the Commission as it did its independent review. First, the Commission reported that it assessed closure and realignment recommendations of unprecedented scope and complexity. Further, the executive staff of the BRAC Commission told us that their task was made more difficult and complex because many of the proposed recommendations put forward for BRAC 2005 represented the DOD goals of furthering transformation and fostering jointness, in addition to the more traditional base closures and realignments. 
Moreover, many of the proposed BRAC recommendations that DOD presented to the Commission for review were made up of multiple individual actions, unlike prior rounds in which each base was handled by a single integrated recommendation, according to the BRAC Commission. The executive staff of the Commission also told us that it was more difficult to assess the costs and the amount of time for the savings to offset implementation costs since many of the recommendations contained multiple interdependent actions, all of which needed to be reviewed. Table 1 compares the number of individual actions embedded within the BRAC 2005 recommendations with the number of similar actions needed to implement the recommendations in the prior rounds. The table shows that the number of individual BRAC actions was larger in BRAC 2005 (813) than that from the four prior BRAC rounds combined (387). Large size of BRAC 2005 may have contributed to the challenges confronting the Commission. The Commission executive staff that we interviewed said that they would have benefited from expertise built up during the multiple successive smaller BRAC rounds that occurred in 1991, 1993, and 1995, since the Commission staff stayed in place from one round to the next. However, because 10 years had elapsed since the last BRAC round, many Commission staff were new to BRAC in 2005 and had steep learning curves. This may have been compounded by the large number and variety of BRAC actions DOD presented to them for review. For example, the Commission reported that it struggled to fully understand the net impact on bases that were both gaining and losing missions at the same time, as in the interdependent BRAC recommendations discussed above. 
While the Commission had the authority to modify a BRAC recommendation, the Commission staff expressed concern that rejecting one action of a recommendation could potentially set off a cascade of effects rippling across several other proposed recommendations because of the interdependency of the individual actions. Installation growth has also created challenges for surrounding communities, which must ensure the provision of adequate services to the growing installations. DOD and its Office of Economic Adjustment have devoted more resources to communities experiencing significant growth as a result of the consolidation that occurred under BRAC 2005. This is a change from prior BRAC rounds, when Office of Economic Adjustment assistance was focused more on helping communities cope with the closure of an installation than with its growth. While some of the growth is attributable to initiatives other than BRAC, including increases in Army and Marine Corps force structure after 2007 and plans to rebase some overseas forces to the United States, BRAC has contributed through the transfer of about 123,000 positions from one installation to another within the 6-year BRAC implementation period. As we have previously reported, communities experiencing growth were hindered in their ability to effectively plan for off-base support such as adequate roads and schools due to inconsistent information from DOD around the 2007 time frame. Further, DOD has missed opportunities to offer high-level leadership to communities affected by the growth, suggesting the need for more attention to this issue if a future set of BRAC recommendations leads to installation growth rather than closure.
Our analysis of DOD's fiscal year 2011 BRAC 2005 budget submission to Congress and each annual submission throughout the BRAC 2005 implementation period shows that one-time implementation costs grew from the $21 billion originally estimated by the BRAC Commission in 2005 to about $35.1 billion, an increase of about $14.1 billion, or 67 percent. In constant 2005 dollars, costs increased to about $32.2 billion, an increase of 53 percent. According to an OSD analysis of the increase in costs, about $10 billion of the increase was attributable to construction for additional facilities, increasing total military construction costs to about $24.7 billion. In contrast, military construction costs for the four prior BRAC rounds combined amounted to less than $7 billion. In a March 2010 testimony, the Deputy Under Secretary of Defense (Installations and Environment) characterized the military construction for BRAC 2005 as a major engine of recapitalization. Other reasons for the cost increases include inflation and increased operations and maintenance, environmental restoration, and other costs. Some cost increases have been attributed to unexpected expenses. For example, DOD's cost to implement the recommendation to close the Walter Reed Medical Center in Washington, D.C., and relocate medical care functions to the National Naval Medical Center, Bethesda, Maryland, and Fort Belvoir, Virginia, increased from about $989 million to about $2.7 billion due to higher military construction costs and other higher than anticipated costs for moving and purchasing equipment, as we previously reported. Moreover, military construction costs to close Fort Monmouth, New Jersey, increased by $613.2 million from the BRAC Commission estimate. One part of this recommendation included relocating the U.S.
Army Military Academy Preparatory School from Fort Monmouth to West Point, New York, and part of the reason for the cost growth was that the scope of the facility construction increased from approximately 80,000 square feet to more than 250,000 square feet, and planning officials identified the need to spend additional money for rock removal needed for site preparation. (See GAO, Military Base Realignments and Closures: Estimated Costs Have Increased While Savings Estimates Have Decreased Since Fiscal Year 2009, GAO-10-98R (Washington, D.C.: Nov. 13, 2009).) Savings estimates have also declined: DOD's estimate of net annual recurring savings reflects a 9.5 percent decrease from the Commission's estimate. The 20-year net present value savings estimated by the Commission in 2005 for this BRAC round have decreased by 73 percent to about $9.9 billion. Some recommendations were acknowledged to be unlikely to produce savings in the 20-year net present value window. For example, the Commission approved 30 recommendations that were based on perceived high military value and were not expected to result in a 20-year payback. However, our analysis of DOD's 2011 BRAC budget data shows that currently 77 out of 182 Commission-approved BRAC 2005 recommendations, or about 42 percent, are now not expected to pay back in the same 20-year period. In contrast, only four recommendations DOD developed in all four prior BRAC rounds combined were not expected to result in a 20-year payback. Finally, our analysis of the fiscal year 2011 BRAC budget shows that DOD will not recoup its up-front costs to implement BRAC recommendations until 2018, 5 years later than the BRAC Commission estimates show it would take to pay back. OSD officials told us that despite producing lower savings than anticipated, the department expects that the implementation of BRAC 2005 recommendations will produce capabilities that will enhance military value, defense operations, and defense management.
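The cost and savings percentages cited above follow directly from the dollar amounts in the report; a minimal arithmetic sketch (amounts in billions of dollars, taken from the figures stated in the text) confirms them:

```python
# Sanity check of the BRAC 2005 cost and savings figures cited above
# (all amounts in billions of dollars, as reported).

def pct_change(old, new):
    """Whole-percent change from old to new."""
    return round((new - old) / old * 100)

commission_estimate = 21.0   # one-time cost, BRAC Commission estimate (2005)
current_estimate = 35.1      # one-time cost, FY 2011 budget submission
constant_2005 = 32.2         # same current cost in constant 2005 dollars

print(pct_change(commission_estimate, current_estimate))  # 67 percent increase
print(pct_change(commission_estimate, constant_2005))     # 53 percent increase

# A 73 percent drop to $9.9 billion in 20-year net present value savings
# implies a Commission-era estimate of roughly $36.7 billion.
implied_commission_npv = round(9.9 / (1 - 0.73), 1)
print(implied_commission_npv)  # 36.7

# DOD's 2018 payback date is 5 years later than the Commission projected.
commission_payback_year = 2018 - 5
print(commission_payback_year)  # 2013
```

The implied Commission-era net present value figure is a back-calculation from the two numbers the report provides, not a figure stated in the report itself.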
As directed by the House Armed Services Committee's report accompanying the National Defense Authorization Act for 2008, we are continuing to analyze the results from BRAC 2005 to identify lessons learned. These lessons may be useful as Congress considers whether to authorize additional BRAC rounds and would similarly be useful to DOD in implementing recommendations from any future rounds. We will be reporting these lessons learned later this year. Chairman Forbes, Ranking Member Bordallo, and Members of the Subcommittee, I thank you for inviting me to testify today. This concludes my prepared statement. I will be pleased to answer any questions that you may have at this time. For future questions about this statement, please contact me at (202) 512-4523 or [email protected]. Individuals making key contributions to this statement include Laura Talbott, Assistant Director; Vijay Barnabas; John Beauchamp; John Clary; Brandon Jones; Greg Marchand; Charles Perdue; Robert Poetta; Paulina Reaves; John Trubey; and Erik Wilkins-McKee. Excess Facilities: DOD Needs More Complete Information and a Strategy to Guide Its Future Disposal Efforts. GAO-11-814. Washington, D.C.: September 19, 2011. Military Base Realignments and Closures: Review of the Iowa and Milan Army Ammunition Plants. GAO-11-488R. Washington, D.C.: April 1, 2011. GAO's 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011. Defense Infrastructure: High-Level Federal Interagency Coordination Is Warranted to Address Transportation Needs beyond the Scope of the Defense Access Roads Program. GAO-11-165. Washington, D.C.: January 26, 2011. Military Base Realignments and Closures: DOD Is Taking Steps to Mitigate Challenges but Is Not Fully Reporting Some Additional Costs. GAO-10-725R. Washington, D.C.: July 21, 2010. Defense Infrastructure: Army Needs to Improve Its Facility Planning Systems to Better Support Installations Experiencing Significant Growth. GAO-10-602.
Washington, D.C.: June 24, 2010. Military Base Realignments and Closures: Estimated Costs Have Increased while Savings Estimates Have Decreased Since Fiscal Year 2009. GAO-10-98R. Washington, D.C.: November 13, 2009. Military Base Realignments and Closures: Transportation Impact of Personnel Increases Will Be Significant, but Long-Term Costs Are Uncertain and Direct Federal Support Is Limited. GAO-09-750. Washington, D.C.: September 9, 2009. Military Base Realignments and Closures: DOD Needs to Update Savings Estimates and Continue to Address Challenges in Consolidating Supply- Related Functions at Depot Maintenance Locations. GAO-09-703. Washington, D.C.: July 9, 2009. Defense Infrastructure: DOD Needs to Periodically Review Support Standards and Costs at Joint Bases and Better Inform Congress of Facility Sustainment Funding Uses. GAO-09-336. Washington, D.C.: March 30, 2009. Military Base Realignments and Closures: DOD Faces Challenges in Implementing Recommendations on Time and Is Not Consistently Updating Savings Estimates. GAO-09-217. Washington, D.C.: January 30, 2009. Military Base Realignments and Closures: Army Is Developing Plans to Transfer Functions from Fort Monmouth, New Jersey, to Aberdeen Proving Ground, Maryland, but Challenges Remain. GAO-08-1010R. Washington, D.C.: August 13, 2008. Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008. Defense Infrastructure: DOD Funding for Infrastructure and Road Improvements Surrounding Growth Installations. GAO-08-602R. Washington, D.C.: April 1, 2008. Military Base Realignments and Closures: Higher Costs and Lower Savings Projected for Implementing Two Key Supply-Related BRAC Recommendations. GAO-08-315. Washington, D.C.: March 5, 2008. Defense Infrastructure: Realignment of Air Force Special Operations Command Units to Cannon Air Force Base, New Mexico. GAO-08-244R. 
Washington, D.C.: January 18, 2008. Military Base Realignments and Closures: Estimated Costs Have Increased and Estimated Savings Have Decreased. GAO-08-341T. Washington, D.C.: December 12, 2007. Military Base Realignments and Closures: Cost Estimates Have Increased and Are Likely to Continue to Evolve. GAO-08-159. Washington, D.C.: December 11, 2007. Military Base Realignments and Closures: Impact of Terminating, Relocating, or Outsourcing the Services of the Armed Forces Institute of Pathology. GAO-08-20. Washington, D.C.: November 9, 2007. Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007. Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007. Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007. Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007. Military Base Closures: Projected Savings from Fleet Readiness Centers Are Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007. Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007. Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007. 
Military Bases: Observations on DOD’s 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005.

Military Bases: Analysis of DOD’s 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005.

Military Base Closures: Observations on Prior and Current BRAC Rounds. GAO-05-614. Washington, D.C.: May 3, 2005.

Military Base Closures: Updated Status of Prior Base Realignments and Closures. GAO-05-138. Washington, D.C.: January 13, 2005.

Military Base Closures: Assessment of DOD’s 2004 Report on the Need for a Base Realignment and Closure Round. GAO-04-760. Washington, D.C.: May 17, 2004.

Military Base Closures: Observations on Preparations for the Upcoming Base Realignment and Closure Round. GAO-04-558T. Washington, D.C.: March 25, 2004.

Defense Infrastructure: Long-term Challenges in Managing the Military Construction Program. GAO-04-288. Washington, D.C.: February 24, 2004.

Military Base Closures: Better Planning Needed for Future Reserve Enclaves. GAO-03-723. Washington, D.C.: June 27, 2003.

Defense Infrastructure: Changes in Funding Priorities and Management Processes Needed to Improve Condition and Reduce Costs of Guard and Reserve Facilities. GAO-03-516. Washington, D.C.: May 15, 2003.

Defense Infrastructure: Changes in Funding Priorities and Strategic Planning Needed to Improve the Condition of Military Facilities. GAO-03-274. Washington, D.C.: February 19, 2003.

Defense Infrastructure: Greater Management Emphasis Needed to Increase the Services’ Use of Expanded Leasing Authority. GAO-02-475. Washington, D.C.: June 6, 2002.

Military Base Closures: Progress in Completing Actions from Prior Realignments and Closures. GAO-02-433. Washington, D.C.: April 5, 2002.

Military Base Closures: Overview of Economic Recovery, Property Transfer, and Environmental Cleanup. GAO-01-1054T. Washington, D.C.: August 28, 2001.

Military Base Closures: DOD’s Updated Net Savings Estimate Remains Substantial. GAO-01-971. Washington, D.C.: July 31, 2001.

Military Base Closures: Lack of Data Inhibits Cost-Effectiveness of Analyses of Privatization-in-Place Initiatives. GAO/NSIAD-00-23. Washington, D.C.: December 20, 1999.

Military Bases: Status of Prior Base Realignment and Closure Rounds. GAO/NSIAD-99-36. Washington, D.C.: December 11, 1998.

Military Bases: Review of DOD’s 1998 Report on Base Realignment and Closure. GAO/NSIAD-99-17. Washington, D.C.: November 13, 1998.

Navy Depot Maintenance: Privatizing Louisville Operations in Place Is Not Cost-Effective. GAO/NSIAD-97-52. Washington, D.C.: July 31, 1997.

Military Bases: Lessons Learned From Prior Base Closure Rounds. GAO/NSIAD-97-151. Washington, D.C.: July 25, 1997.

Military Base Closures: Reducing High Costs of Environmental Cleanup Requires Difficult Choices. GAO/NSIAD-96-172. Washington, D.C.: September 16, 1996.

Military Bases: Closure and Realignment Savings Are Significant, but Not Easily Quantified. GAO/NSIAD-96-67. Washington, D.C.: April 8, 1996.

Military Bases: Analysis of DOD’s 1995 Process and Recommendations for Closure and Realignment. GAO/NSIAD-95-133S. Washington, D.C.: April 14, 1995.

Military Bases: Analysis of DOD’s Recommendations and Selection Process for Closures and Realignments. GAO/NSIAD-93-173. Washington, D.C.: April 15, 1993.

Military Bases: Observations on the Analyses Supporting Proposed Closures and Realignments. GAO/NSIAD-91-224. Washington, D.C.: May 15, 1991.

Military Bases: An Analysis of the Commission’s Realignment and Closure Recommendations. GAO/NSIAD-90-42. Washington, D.C.: November 29, 1989.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense (DOD) has faced long-term challenges in managing and halting degradation of its portfolio of facilities and in reducing unneeded infrastructure to free up funds to better maintain the facilities it still uses and to meet other needs. Costs to build and maintain the defense infrastructure represent a significant financial commitment. DOD’s management of its support infrastructure is on GAO’s high-risk list, in part because of the challenges DOD faces in reducing its excess and obsolete infrastructure. DOD plans to reduce its force structure, and the President will request that Congress authorize base realignment and closure (BRAC) rounds for 2013 and 2015. The Secretary of Defense stated that the BRAC process is the only effective way to achieve needed infrastructure savings.

This testimony discusses (1) key factors and challenges that contributed to BRAC 2005 implementation and results and (2) the most recent estimated costs and savings attributable to BRAC 2005. To do this work, GAO reviewed its previous work and selected documents related to BRAC 2005, such as BRAC business plans that laid out the requisite actions, the timing of those actions, and DOD’s estimated costs and savings associated with implementing each recommendation; briefings on BRAC implementation status prepared by the military services; and budget justification materials submitted to Congress. GAO also interviewed current and former officials from DOD and the BRAC Commission involved in the development, review, and implementation of BRAC recommendations.

GAO identified several factors and challenges that contributed to the Department of Defense’s (DOD) implementation of Base Realignment and Closure (BRAC) 2005 and the results achieved. In contrast to other BRAC rounds, which were primarily focused on achieving savings by reducing excess infrastructure, the Secretary of Defense identified three goals for BRAC 2005.
Specifically, BRAC 2005 was intended to (1) transform the military, (2) foster jointness, and (3) reduce excess infrastructure to produce savings. These goals and the primary selection criteria’s focus on enhancing military value led DOD to identify numerous recommendations that were designed to be transformational and to enhance jointness, thereby adding to the complexity the BRAC Commission and DOD faced in finalizing and implementing the recommendations. Some transformational-type recommendations needed sustained attention by DOD and significant coordination and planning among multiple stakeholders. To improve oversight, the Office of the Secretary of Defense (OSD) required a business plan for each BRAC 2005 recommendation to better manage implementation.

In addition, DOD developed recommendations that were interdependent, which led to challenges when delays in completing one recommendation led to delays in completing others. Specifically, DOD had to synchronize the relocations of over 123,000 people with about $24.7 billion in new construction or renovation at installations. Given the complexity of some BRAC recommendations, OSD directed the services to periodically brief it on implementation challenges.

Furthermore, the scale of BRAC 2005 posed a number of challenges to the Commission as it conducted its independent review. For example, it reported that DOD’s recommendations were of unprecedented scope and complexity, compounding the difficulty of its review. Moreover, the interdependent nature of some recommendations made it difficult for the Commission to evaluate the effect on installations that were both gaining and losing units simultaneously. Finally, the effect on communities from installation growth has led to challenges.
For example, communities experiencing growth were hindered in their ability to effectively plan for off-base support, such as adequate roads and schools, due to inconsistent information from DOD around the 2007 time frame.

DOD’s fiscal year 2011 BRAC 2005 budget submission to Congress shows that costs to implement the BRAC recommendations grew from the $21 billion originally estimated by the BRAC Commission in 2005 dollars to about $35.1 billion in current dollars, an increase of about $14.1 billion, or 67 percent. In constant 2005 dollars, costs increased to $32.2 billion, an increase of 53 percent. Costs increased mostly due to military construction as DOD identified the need for new and renovated facilities to enhance capabilities.

In 2005, the Commission estimated net annual recurring savings of $4.2 billion and a 20-year net present value savings by 2025 of $36 billion. GAO’s analysis shows annual recurring savings are now about $3.8 billion, a decrease of 9.5 percent, while the 20-year net present value savings are now about $9.9 billion, a decrease of 73 percent. As such, DOD will not recoup its up-front costs until 2018.
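The percentage changes cited above follow from simple arithmetic on the report’s dollar figures. The sketch below (our own illustration; the helper function is not from the report) reproduces each cited percentage from the figures in billions.

```python
# Checks the percentage changes GAO cites for BRAC 2005 cost and savings
# estimates. Dollar figures (in billions) come from the text; the helper
# function is our own.

def pct_change(old: float, new: float) -> float:
    """Percentage change from an original estimate to a revised estimate."""
    return (new - old) / old * 100

# Implementation costs: $21B (2005 Commission estimate) -> $35.1B in current dollars
print(round(pct_change(21.0, 35.1)))      # 67 (percent increase)

# In constant 2005 dollars: $21B -> $32.2B
print(round(pct_change(21.0, 32.2)))      # 53

# Net annual recurring savings: $4.2B -> $3.8B
print(round(pct_change(4.2, 3.8), 1))     # -9.5

# 20-year net present value savings: $36B -> $9.9B
# (the report rounds the 72.5 percent decrease to 73 percent)
print(round(pct_change(36.0, 9.9), 1))    # -72.5
```

The only figure that does not fall out directly is the 73 percent decrease, which is the 72.5 percent result rounded up.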
Traditionally, DOD’s strategy for acquiring major weapon systems has been to plan programs that would achieve a big leap forward in capability. However, because the needed technologies often are not yet mature, programs stay in development for years until the technologies are demonstrated. As a result, weapon systems have frequently been characterized by poor cost, schedule, and performance outcomes. This has slowed modernization efforts, reduced the buying power of the defense dollar, delayed capabilities for the warfighter, and forced unplanned—and possibly unnecessary—trade-offs among programs.

Our extensive body of work shows that leading companies use a product development model that helps reduce risks and increase knowledge when developing new products. This best practices model enables decision makers to be reasonably certain about their products at critical junctures during development and helps them make informed investment decisions. This knowledge-based process can be broken down into three cumulative knowledge points:

Knowledge point 1: A match must be made between the customer’s needs and the developer’s available resources—technology, engineering knowledge, time, and funding—before a program starts.

Knowledge point 2: The product’s design must be stable and must meet performance requirements before initial manufacturing begins.

Knowledge point 3: The product must be producible within cost, schedule, and quality targets and demonstrated to be reliable before production begins.

To bolster the knowledge-based process, leading companies use evolutionary product development, an incremental approach that enables developers to rely more on available resources rather than making promises about unproven technologies.
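The knowledge points above are cumulative gates: a program should not pass a later gate unless every earlier knowledge point has been demonstrated. A minimal sketch of that gating logic (the names and structure are our own illustration, not an actual DOD or commercial checklist):

```python
# Illustrative sketch of the three-knowledge-point gate model described in
# the text. The criteria are paraphrased; nothing here is an actual DOD
# or commercial-company checklist.

KNOWLEDGE_POINTS = {
    1: "Customer needs matched to available technology, engineering knowledge, time, and funding",
    2: "Design stable and meets performance requirements before initial manufacturing",
    3: "Product producible within cost, schedule, and quality targets and shown reliable",
}

def may_advance(gate: int, demonstrated: set) -> bool:
    """A program may pass gate N only if knowledge points 1..N have all been
    demonstrated -- the model is cumulative, so later knowledge builds on
    earlier knowledge."""
    return all(kp in demonstrated for kp in range(1, gate + 1))

# A program that has demonstrated knowledge points 1 and 2 may begin initial
# manufacturing (gate 2) but not production (gate 3):
print(may_advance(2, {1, 2}))  # True
print(may_advance(3, {1, 2}))  # False
```

The point of the sketch is simply that each decision review checks all accumulated knowledge, not just the newest increment.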
While the user may not initially receive the ultimate capability under this approach, the initial product is available sooner and at a lower, more predictable cost. Also, leading companies know that invention cannot be scheduled and that its cost is difficult to estimate. They do not bring technology into new product development unless that technology has been demonstrated to meet the user’s requirements. Allowing technology development to spill over into product development puts an extra burden on decision makers and provides a weak foundation for making product development estimates.

DOD understands that it must improve acquisition process outcomes if it is to modernize its forces within currently projected resources. To help achieve this goal, DOD has revised its acquisition policy, called the 5000 series, to reflect best practices from successful commercial and DOD programs. The policy covers most—but not all—major acquisitions. The Secretary of Defense has delegated authority to the Missile Defense Agency and to the National Security Space Team to develop separate guidance for missile defense and space systems, respectively. Approximately 35 percent of DOD’s development funds in 2003 went to these systems. (Figure 1 shows how $43.1 billion in development funds were distributed across space, missile defense, and systems covered by the 5000 series.) This report addresses policy for the defense programs covered exclusively under the 5000 series.

DOD’s leaders have made significant improvements to DOD’s acquisition policy by adopting the knowledge-based, evolutionary approach used by leading commercial companies. The revised policy has the potential to transform DOD’s acquisition process by reducing risks and increasing the chances for successful outcomes. The policy provides a framework for developers to ask themselves at key decision points whether they have the knowledge they need to move to the next phase of acquisition.
If rigorously applied, this knowledge-based framework can help managers gain the confidence they need to make significant and sound investment decisions for major weapon systems. In placing greater emphasis on evolutionary product development, the policy sets up a more manageable environment for achieving knowledge. Another best practice reflected in the policy’s framework is separating technology development from product development, which reduces technological risk at the start of a program. As shown in table 1, DOD’s policy emphasizes best practices used by leading companies.

Similar to the best practices model, DOD’s policy divides its acquisition process into phases, as shown in figure 2. Key decisions are aligned with the three critical junctures of a product’s development, or knowledge points. In another similarity, DOD’s framework pinpoints program start at milestone B, about the same point as program start in the best practices model. At the midway point in both approaches, a stable product design should be demonstrated. With DOD’s framework, managers are required to know—by the time the full-rate production decision review occurs—whether the product can be produced within cost, schedule, and quality targets. This requirement occurs earlier in the best practices model, before production begins, at knowledge point 3. Leading companies have used this approach to reduce risks and to make costs and delivery dates more predictable.

While DOD has strengthened its acquisition policy with a knowledge-based, evolutionary framework, the policy does not include many of the same controls that leading companies rely on to attain a high level of knowledge before making additional significant investments. Controls are considered effective if they are backed by measurable criteria and if decision makers are required to consider them before deciding to advance a program to the next level.
Controls used by leading companies help decision makers gauge progress in meeting cost, schedule, and performance goals and ensure that managers will (1) conduct activities to capture relevant product development knowledge, (2) provide evidence that knowledge was captured, and (3) hold decision reviews to determine that appropriate knowledge was captured to move to the next phase. To determine if DOD has the necessary controls, we compared the controls in DOD’s policy with those used in the best practices model at three critical junctures. Table 2 shows the presence or absence of controls for various versions of DOD policy since 1996, including the May 2003 revision.

At all three knowledge points, DOD’s policy does not provide all the necessary controls used by commercial companies. For example, at program launch (milestone B), when knowledge point 1 should be reached, the policy requires decision makers to identify and validate a weapon system’s key performance requirements and to have a technical solution for the system before program start. This information is then used to form cost and schedule estimates for the product’s development. However, the policy does not emphasize the use of a disciplined systems engineering process for balancing a customer’s needs with resources to deliver a preliminary design. The lack of effective controls at knowledge point 1 could result in gaps between requirements and resources being discovered later in development.

At the design readiness review, when knowledge point 2 should be reached, DOD’s policy does not require specific controls to document that a product is ready for initial manufacturing and demonstration.
DOD’s policy suggests appropriate criteria, such as the number of subsystem and system design reviews completed, the percentage of drawings completed, planned corrective actions to hardware and software deficiencies, adequate development testing, completed failure modes and effects analysis, identification of key system characteristics and critical manufacturing processes, and availability of reliability targets and growth plans. However, these criteria are not required. For example, we found that a key indicator of a product’s design stability is the completion of 90 percent of the engineering drawings, supported by design reviews. DOD’s policy does not require that a certain percentage of drawings or design reviews be completed to ensure the design is mature enough to enter the system demonstration phase. As a result, a decision maker has no benchmark to consider when deciding to advance a program to the next level of development.

Finally, at production commitment, when knowledge point 3 should be reached, DOD’s policy does not require specific controls to document that a product can be manufactured to meet cost, schedule, and quality targets before moving into production. For example, the policy states there should be “no significant manufacturing risks” at the start of low-rate production but does not define what this means or how it is to be measured. DOD’s policy does not require the demonstrated control of manufacturing processes and the collection of statistical process control data until full-rate production begins, but even then it fails to specify a measurable control. Given that low-rate production can last several years, a significant number of products can be manufactured before processes are brought under control, creating a higher probability of poor cost and schedule outcomes.
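The statistical process control data discussed above are typically turned into measurable criteria through capability indices. As a generic illustration (our own, not anything specified in DOD policy), a process capability index such as Cpk gives a single measurable number for whether a manufacturing process is capable and in control; a common rule of thumb treats Cpk of at least 1.33 as capable.

```python
# Generic statistical-process-control sketch: the process capability index
# Cpk, one common measurable criterion for "manufacturing processes are
# capable and in statistical control." Not a DOD-specified formula.
import statistics

def cpk(samples, lsl, usl):
    """Cpk = min(USL - mean, mean - LSL) / (3 * sigma), using the sample
    standard deviation. LSL/USL are the lower/upper specification limits."""
    mean = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Hypothetical measurements of a machined dimension with spec limits 9.0-11.0:
measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
print(round(cpk(measurements, lsl=9.0, usl=11.0), 2))  # 2.55, well above 1.33
```

The point is that such an index makes "no significant manufacturing risks" auditable: a decision maker can require a demonstrated threshold rather than an undefined assertion.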
While supporting efforts to build more flexibility into the DOD acquisition process and to develop weapon systems using an evolutionary approach, Congress asked DOD to be more disciplined in its approach. The Defense Authorization Act for Fiscal Year 2003 required DOD to address (1) the way it plans to meet certain statutory and regulatory requirements for managing its major acquisition programs, (2) needed guidance for implementing spiral developments, and (3) technology readiness at acquisition program initiation. DOD was responsive to all three requirements. With regard to the second requirement, however, DOD’s description of the process that would be used to independently validate that measurable exit criteria for applying a spiral development process have been met was unclear. DOD stated that the milestone decision authority provides that independent validation as part of DOD’s milestone approval process. DOD’s responses to the relevant sections of the act are summarized below. More detailed comparisons are provided in appendixes I, II, and III.

Requirements: This section directed DOD to report on its plan to meet certain statutory and regulatory requirements for managing its major acquisition programs applying an evolutionary acquisition process. These include establishing and approving operational requirements and cost and schedule goals for each increment, meeting requirements for operational and live fire testing for each increment, and optimizing total system performance and minimizing total ownership costs.

DOD response: In April 2003, DOD submitted its report reflecting how these requirements are addressed in its acquisition policy. According to the report, the policy addresses the statutory and other requirements applicable to all major defense acquisition programs, including each increment of evolutionary acquisition programs.
For example, the policy requires that each program or increment of an evolutionary acquisition have a milestone B decision to approve program initiation and to permit entry into system development and demonstration. The policy specifies the statutory and regulatory information necessary to support the decision.

Requirements: This section authorizes DOD to conduct a research and development program for a major defense acquisition program using spiral development only if approved by the Secretary of Defense or an authorized high-level designee. A program cannot be conducted as a spiral development unless the Secretary of Defense or designee approves a plan that describes such things as the program strategy, test plans, performance parameters, and measurable exit criteria. The section also requires the Secretary of Defense to issue guidance addressing the appropriate processes for an independent validation that exit criteria have been met, the operational assessment of fieldable prototypes, and the management of these types of programs. It further requires the Secretary to report to Congress on the status of each program applying spiral development by September 30 of each year from 2003 to 2008.

DOD response: DOD established a technology development strategy in the new policy to address this requirement. The strategy must be completed before a program can enter the technology development phase. The strategy also documents the cost and schedule goals, the test plans, the number of prototypes, and a program strategy for the total research and development program. The strategy requires a test plan to ensure the goals and exit criteria for the first technology spiral demonstration are met, and the policy requires an independent operational assessment for the release of each product increment to the user.
What is unclear in DOD’s guidance is the process that will be used for independently validating whether measurable cost, schedule, and performance exit criteria have been met. However, DOD stated that the milestone decision authority provides independent validation that exit criteria have been met as part of DOD’s milestone approval process. As of October 23, 2003, DOD’s report on the status of each program applying spiral development was still in draft and not yet submitted. DOD’s current draft report states that there are no research and development programs that have been approved as spiral development programs as of September 30, 2003. Section 803 requirements were implemented in DOD Instruction 5000.2, which was effective in May 2003. DOD anticipates that there will be approved spiral development programs to report in 2004.

Requirements: This section added a requirement to section 804 of the National Defense Authorization Act for Fiscal Year 2002 (Public Law 107-107) that directed DOD to report by March of each year between 2003 and 2006 on the maturity of technology at the initiation of major defense acquisition programs. Each report is required to (1) identify any major acquisition program that entered system development and demonstration during the preceding calendar year with immature key technology that was not demonstrated in, at minimum, a relevant environment, as required by the new policy; (2) justify the incorporation of any key technology on an acquisition program that does not meet that requirement; (3) identify any instances in which the Deputy Under Secretary of Defense for Science and Technology did not concur with the technology assessment and explain how the issue has been or will be resolved; (4) identify each case in which a decision was made not to conduct an independent technology readiness assessment for a critical technology on a major defense acquisition; and (5) explain the reasons for the decision each year through 2006.
DOD response: In March 2003, DOD reported that two programs entered system development and demonstration in 2002 with critical technologies that did not meet demonstration requirements and provided justification for them. DOD did not identify or report any cases where an independent technology readiness assessment was not conducted or where the Under Secretary disagreed with assessment findings.

DOD can maximize its $1 trillion investment in new weapons over the next 6 years by ensuring effective implementation of the new acquisition policy. DOD’s leaders have taken noteworthy steps by incorporating into the policy a framework that supports a knowledge-based, evolutionary acquisition process, similar to one used by leading commercial companies to get successful outcomes. A framework is an important and significant step. DOD must now turn its attention to establishing controls. As leading companies have found, having clearly established controls to capture and use appropriate knowledge to make decisions at critical junctures is crucial for delivering affordable products as planned. DOD’s policy addresses specific congressional requirements and includes some controls that leading companies use to capture knowledge at the start of a program. However, additional controls are needed to ensure that decisions made throughout product development are informed by demonstrated knowledge. DOD must design and implement the necessary controls to ensure that appropriate knowledge is captured and used at critical junctures to make decisions about moving a program forward and investing more money.

We recommend that the Secretary of Defense require additional controls for capturing knowledge at three key points—program launch, design readiness review for transitioning from system integration to system demonstration, and production commitment.
The additional controls for program launch (milestone B) should ensure the capture of knowledge about the following:

Cost and schedule estimates based on knowledge from a preliminary design using systems engineering tools.

The additional controls for transitioning from system integration to system demonstration (design readiness review) should ensure the capture of knowledge about the following:

Completion of 90 percent of engineering drawings.
Completion of subsystem and system design reviews.
Agreement from all stakeholders that drawings are complete and the design is producible.
Completion of failure modes and effects analysis.
Reliability targets and a reliability growth plan based on demonstrated reliability rates of components and subsystems.
Identification of key system characteristics.
Identification of critical manufacturing processes.

The additional controls for the production commitment (milestone C) should ensure the capture of knowledge about the following:

Completion of production representative prototypes.
Availability of production representative prototypes to achieve the reliability goal and demonstrate the product in an operational environment.
Collection of statistical process control data.
Demonstration that critical manufacturing processes are capable and in statistical control.

Because knowledge about technology, design, and manufacturing at critical junctures can lower DOD’s investment risk, decisions that do not satisfy knowledge-based criteria should be visible and justified. Therefore, we also recommend that the Secretary of Defense document the rationale for any decision to move a program to the next stage of development without meeting the knowledge-based criteria, including those listed in the first recommendation. The responsible milestone decision authority should justify the decision in the program’s acquisition decision memorandum and in a report to Congress.

DOD provided us with written comments on a draft of this report.
The comments appear in appendix IV.

DOD partially concurred with our recommendation that the Secretary require additional controls for capturing knowledge at three key points: program launch, design readiness review for transitioning from system integration to system demonstration, and production. DOD stated that it agrees in principle with the advantages of using knowledge-based controls at key points in the acquisition process to assess risk and ensure readiness to proceed into the next phase of the acquisition process. DOD believes the current acquisition framework includes the controls necessary to achieve effective results, but it will continue to monitor the process to determine whether others are necessary to achieve the best possible outcomes.

While we believe DOD’s effort to establish a solid framework for evolutionary acquisitions is a giant step forward, our work has shown that a disciplined application of controls in the process is needed to implement the framework if better acquisition outcomes are to be achieved. DOD’s policy does not include all the necessary controls to ensure a high level of product knowledge is attained and used for making decisions to move a program forward in the product development process. Leading product developers use additional controls, as listed in our first recommendation, to achieve the knowledge necessary to reduce risk to reasonable levels at critical junctures before making additional significant investments in product development. Simply monitoring the process may not be enough for DOD to achieve the best outcomes. Therefore, we are retaining our recommendation that the Secretary require additional controls at three critical points in the acquisition process.
DOD also partially concurred with our recommendation that the Secretary document in each program’s acquisition decision memorandum and in a report to Congress the rationale for any decision to move a program to the next stage of development without meeting the knowledge-based criteria, including those described in the first recommendation. DOD agreed that it should record and be accountable for program decisions. Decision makers will continue to use the acquisition decision memorandum to document program decisions and the rationale for them. DOD did not concur with the need for a report outside the department. Because we believe strongly that knowledge-based criteria used to gauge a product’s development progress at critical junctures can lower DOD’s investment risks, we think it is important that decisions made without satisfying knowledge-based criteria be justified in a visible and transparent way to hold managers accountable for moving a program forward absent this knowledge. Therefore, we are retaining our recommendation for reporting the basis for decisions to move forward in a report to Congress.

We reviewed DOD’s revised and past acquisition policies, DOD Directive 5000.1, DOD Instruction 5000.2, and DOD 5000.2-R, which provide management principles and mandatory policies and procedures for managing acquisition programs. We contacted an official in the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics who is responsible for the development of the policy to better understand its content. We also reviewed information from the Defense Acquisition University that provided educational material on the policies. We reviewed the relevant sections of the Bob Stump National Defense Authorization Act for Fiscal Year 2003 and the accompanying Senate Armed Services Committee report to identify the requirements applicable to DOD’s acquisition policy.
We compared these requirements with DOD’s responses to determine whether they have been addressed. Finally, we used information from more than 10 GAO products that examine how commercial best practices can improve outcomes for various DOD programs. During the past 6 years, we have gathered information based on discussions and visits with the following companies: Chrysler; Ford Motor; Motorola; Hewlett-Packard; Cummins; Toyota; Honda; Boeing Commercial Airplane Group; Bombardier Aerospace; Hughes Space and Communication; Xerox; Caterpillar; General Electric Aircraft Engines; Harris Semiconductor; Texas Instruments; Varian Oncology Systems; and Ethicon-Endo Surgery (a division of Johnson & Johnson). Although the approaches varied, these companies consistently applied the basic processes and standards in use. We compared this information with the acquisition framework and controls established by DOD’s policy. We concentrated on whether the policy provides a framework for a knowledge-based, evolutionary process and the controls necessary to carry out this intent. We conducted our review from April 2003 to September 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Director of the Office of Management and Budget. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 if you have any questions concerning this report. Other key contributors to this report were Lily Chin, Chris DePerro, Matt Lea, Mike Sullivan, and Adam Vodraska. Section 802 of the Defense Authorization Act for Fiscal Year 2003 required the Secretary of Defense to submit a report to Congress explaining how the Department of Defense (DOD) plans to meet certain statutory and regulatory requirements for acquisition programs following an evolutionary approach.
In April 2003, the Secretary reported how these requirements were addressed in DOD’s policy (such as in tables of statutory and regulatory information requirements contained in enclosure 3 of Instruction 5000.2). According to the report, DOD’s policy requires that each program—including an increment of an evolutionary acquisition—have a milestone B decision to approve program initiation and to permit entry into systems development and demonstration. DOD’s policy specifies the statutory and regulatory information necessary to support the decision. We examined the policy to ensure the statutes and regulations identified in section 802 were addressed. Table 3 provides a list of the statutory and regulatory requirements identified in section 802, a corresponding document and page number where the requirement appears in DOD’s policy, and a description of the requirement from the policy. Section 802 also required DOD to report on its plans for addressing certain acquisition process issues regarding each increment of an evolutionary process. DOD reported on how it plans to establish and approve operational requirements and cost and schedule goals; meet requirements for operational and live fire testing; monitor cost and schedule performance; achieve interoperability; and consider total system performance and total ownership costs. We compared DOD’s response with section 802’s reporting requirements. As shown in table 4, DOD was responsive to the 802 requirements. Section 803 of the Defense Authorization Act for Fiscal Year 2003 authorized the Secretary of Defense to conduct major defense acquisition programs as spiral development programs. However, the section placed a limitation on these programs. It stated that a research and development program for a major acquisition may not be conducted as a spiral development program unless the Secretary of Defense or authorized high-level designee gives approval. 
The section requires the Secretary of Defense to issue guidance for the implementation of such programs to address appropriate processes for ensuring the independent validation of exit criteria being met, the operational assessment of fieldable prototypes, and the management of these types of programs. DOD responded to these requirements principally by incorporating into the acquisition policy the requirement for a technology development strategy. This strategy is a prerequisite for a project to enter the technology development phase of the acquisition process, or milestone A. Table 5 compares the spiral development plan requirements in the act with the technology development strategy requirements in DOD’s May 2003 acquisition policy. As shown in the table, DOD’s policy generally responded to the requirements in the act concerning guidance for implementation of spiral development programs. While the policy includes a technology development strategy that requires a test plan to ensure the goals and exit criteria for the first technology spiral demonstration are met and an independent operational assessment for the release of each product increment to the user, it is unclear what the process is for independently validating that cost, schedule, and performance exit criteria have been met. However, DOD stated that the milestone decision authority provides independent validation that exit criteria have been met as part of DOD’s milestone approval process. Section 803 also requires that a spiral development plan include “[s]pecific cost, schedule, and performance parameters, including measurable exit criteria, for the first spiral to be conducted.” DOD’s policy substituted “parameters” for “goals” and did not use the term “measurable” in describing the required exit criteria. Finally, section 803 requires the Secretary of Defense to submit to Congress by September 30 yearly from 2003 through 2008 a status report on each spiral development program.
The report is to include information on unit costs for the projected prototypes. As of October 23, 2003, DOD’s report on the status of each program applying spiral development was still in draft and not yet submitted. DOD’s current draft report states that there are no research and development programs that have been approved as spiral development programs as of September 30, 2003. Section 803 requirements were implemented in DOD Instruction 5000.2, which was effective in May 2003. DOD anticipates that there will be approved spiral development programs to report in 2004. Section 804 of the Defense Authorization Act for Fiscal Year 2002 required DOD to report on the maturity of technology at the initiation of major defense acquisition programs. The act directed DOD to report by March 1 of each year between 2003 and 2006 on a requirement in DOD’s policy that technology must have been demonstrated in a relevant environment (or, preferably, in an operational environment) to be considered mature enough to use for product development in systems integration. Each report is required to (1) identify any major acquisition program that entered system development and demonstration during the preceding calendar year with immature key technology that was not demonstrated in, at minimum, a relevant environment, as required by the new policy; (2) justify the incorporation of any key technology on an acquisition program that does not meet that requirement; and (3) identify any instances in which the Deputy Under Secretary of Defense for Science and Technology did not concur and explain how the issue has been or will be resolved, including information on the use of independent readiness assessments.
Section 822 of the Defense Authorization Act for Fiscal Year 2003 amended section 804 by adding a requirement that the Secretary of Defense identify each case in which an authoritative decision has been made within DOD not to conduct an independent technology readiness assessment for a critical technology on a major defense acquisition program and explain the reasons for the decision. On March 18, 2003, DOD submitted its first report. Table 6 shows the specific requirements for the report and DOD’s response. Best Practices: Setting Requirements Differently Could Reduce Weapon Systems’ Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003. Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002. Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001. Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000. Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000. Best Practices: DOD Training Can Do More to Help Weapon System Program Implement Best Practices. GAO/NSIAD-99-206. Washington, D.C.: August 16, 1999. Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999. Defense Acquisitions: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999. Defense Acquisition: Improved Program Outcomes Are Possible. GAO/T-NSIAD-98-123. Washington, D.C.: March 17, 1998. Best Practices: DOD Can Help Suppliers Contribute More to Weapon System Programs. 
GAO/NSIAD-98-87. Washington, D.C.: March 17, 1998. Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD’s Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998. Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.
The Department of Defense's (DOD) investment in new weapon systems is expected to exceed $1 trillion from fiscal years 2003 to 2009. To reduce the risk of cost and schedule overruns, DOD revamped its acquisition policy in May 2003. The policy provides detailed guidance on how weapon systems acquisitions should be managed. The Senate report accompanying the National Defense Authorization Act for Fiscal Year 2004 required GAO to determine whether DOD's policy supports knowledge-based, evolutionary acquisitions and whether the policy provides the necessary controls for DOD to ensure successful outcomes, such as meeting cost and schedule goals. The report also required GAO to assess whether the policy is responsive to certain requirements in the Bob Stump National Defense Authorization Act for Fiscal Year 2003 concerning DOD's management of the acquisition process. DOD's new policy supports knowledge-based, evolutionary acquisitions by adopting lessons learned from successful commercial companies. One of those lessons is a knowledge-based approach, which requires program managers to attain the right knowledge at critical junctures--also known as knowledge points--so they can make informed investment decisions throughout the acquisition process. The policy also embraces an evolutionary or phased development approach, which sets up a more manageable environment for attaining knowledge. The customer may not get the ultimate capability right away, but the product is available sooner and at a lower cost. Leading firms have used these approaches--which form the backbone of what GAO calls the best practices model--to determine whether a project can be accomplished with the time and money available and to reduce risks before moving a product to the next stage of development. By adopting best practices in the acquisition policy, DOD's leadership has taken a significant step forward. 
The next step is to provide the necessary controls to ensure a knowledge-based, evolutionary approach. Implementing the necessary controls at all three knowledge points along the acquisition process helps decision makers ensure a knowledge-based approach is followed. Without controls in the form of measurable criteria that decision makers must consider, DOD runs the risk of making decisions based on overly optimistic assumptions. Each successive knowledge point builds on the preceding one, and having clearly established controls helps decision makers gauge progress in meeting goals and ensuring successful outcomes. DOD was responsive to the requirements in the Bob Stump National Defense Authorization Act for Fiscal Year 2003. DOD's responses reflected the committee's specific concerns about the application of certain statutory and regulatory requirements to the new evolutionary acquisition process, the need for more guidance for implementing spiral development, and technology readiness at program initiation.
Tax expenditures are preferential provisions in the tax code, such as exemptions and exclusions from taxation, deductions, credits, deferral of tax liability, and preferential tax rates that result in forgone revenue for the federal government. The revenue that the government forgoes is viewed by many analysts as spending channeled through the tax system. However, tax expenditures and their relative contributions toward achieving federal missions and goals are often less visible than spending programs, which are subject to more systematic review. Many tax expenditures—similar to mandatory spending programs—are governed by eligibility rules and formulas that provide benefits to all those who are eligible and wish to participate. Tax expenditures do not compete overtly with other priorities in the annual budget, and spending embedded in the tax code is effectively funded before discretionary spending is considered. Tax expenditures generally are not subject to congressional reauthorization and, therefore, lack the opportunity for regular review of their effectiveness. We have long recommended greater scrutiny of tax expenditures. Some tax expenditures may be ineffective at achieving their social or economic purposes, and information about their performance as well as periodic evaluations can help policymakers make more informed decisions about resource allocation and the most effective or least costly methods to deliver federal support. Performance measurement is the ongoing monitoring and reporting that focuses on whether programs have achieved objectives in terms of the types and levels of activities or outcomes of those activities. Program evaluations typically examine a broader range of information on program performance and its context than is feasible to monitor on an ongoing basis.
A “program” may be any activity, project, function, or policy that has an identifiable purpose or set of objectives, including tax expenditures. In the context of community development programs, impact evaluations can be a useful tool to assess the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. This form of evaluation is employed when external factors are known to influence the program’s outcomes, in order to isolate the program’s contribution to achievement of its objectives. Importantly, challenges in performance measurement and evaluation are not unique to tax expenditures, as agencies have encountered difficulties in measuring the performance of spending programs as well. (The GPRA Modernization Act of 2010 (GPRAMA), Pub. L. No. 111-352, 124 Stat. 3866 (2011), amends the Government Performance and Results Act of 1993, Pub. L. No. 103-62, 107 Stat. 285 (1993).) Such performance information can also inform choices in setting priorities as government policymakers address the rapidly building fiscal pressures facing our national government. For fiscal year 2010, we identified 23 tax expenditures that fund community development activities. Appendix II lists each tax expenditure with information on its estimated cost, type, and taxpayer group, as well as enactment and expiration dates. Five tax expenditures primarily promote community development in economically distressed areas, including Indian reservations; these programs cost the federal government approximately $1.5 billion in fiscal year 2010. Nine tax expenditures both support community development and address other federal mission areas, such as rehabilitating historic or environmentally contaminated properties for business use as well as constructing a range of transportation facilities, such as airports and docks, and water and hazardous waste systems. These multipurpose tax expenditures cost the federal government approximately $8.7 billion in fiscal year 2010.
Two large state and local bond tax expenditures also may support community development, although community development activities account for only a portion of the total costs of those tax expenditures. Finally, the federal government has periodically offered temporary tax relief following certain disasters, including six packages of tax provisions focused on specific areas as well as one provision available for any presidentially declared disaster area. Figure 1 illustrates the mix of various tax expenditures that support community development. The federal government has five tax expenditures primarily to promote community development in economically distressed areas, such as low-income communities and Indian reservations. As noted below, all but one of these programs have expired. The Empowerment Zones and Renewal Communities (EZ/RC) programs ($730 million in revenue losses in fiscal year 2010) were established to reduce unemployment and generate growth in economically distressed communities that were designated through a competitive process. Initially, the EZ program offered a mix of grants and tax incentives for community and economic development, but later EZ rounds and the RC program offered primarily tax incentives for business development. While eligibility varied slightly by program and round, the 40 EZ- and 40 RC-designated communities were selected largely on the basis of poverty and unemployment rates, population, and other area statistics based on Decennial Census data. The RC tax provisions expired at the end of 2009, and the EZ tax provisions expired at the end of 2011. The New Markets Tax Credit (NMTC) ($720 million in revenue losses in fiscal year 2010) encourages investment in impoverished, low-income communities that traditionally lack access to capital.
Whereas the EZ/RC programs target designated communities, the NMTC targets Census tracts where the poverty rate is at least 20 percent or where median family incomes do not exceed 80 percent of such incomes within a state or a metropolitan area. In January 2010, we reported that 39 percent of the Census tracts qualified for the NMTC program and 36 percent of the U.S. population lived in these Census tracts. The NMTC expired at the end of 2011. Two tax expenditures—Tribal Economic Development Bonds and Indian employment credit—target Indian tribal reservations. Indian tribes are among the most economically distressed groups in the United States, and tribal reservations often lack basic infrastructure commonly found in other American communities, such as water and sewer systems as well as telecommunications lines. Created under the American Recovery and Reinvestment Act of 2009 (the Recovery Act), the temporary bond authority ($10 million in revenue losses in fiscal year 2010) provided tribal governments with greater flexibility to use tax-exempt bonds to finance economic development projects. The $2 billion bond authority was to be allocated by February 2010, but Treasury and IRS have extended deadlines to reallocate unused bond authority. The Indian employment credit expired at the end of 2011. The Recovery Act also created temporary Recovery Zone bonds— including Recovery Zone Economic Development Bonds and Recovery Zone Facility Bonds allocated among the states and counties and large municipalities within the states based on unemployment losses in 2008. These bond authorities ($60 million in outlays in fiscal year 2010) expired at the end of 2010. Four of the five community development tax expenditures targeted to economically distressed areas have a statutory limit, such as a specified number of community designations, volume cap, or allocation amount, as shown in table 1. 
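As an illustration only, the NMTC tract-eligibility rule described above (a poverty rate of at least 20 percent, or median family income at or below 80 percent of the applicable state or metropolitan-area figure) can be expressed as a simple either/or check. The function name and inputs below are hypothetical and are not drawn from any IRS or CDFI Fund system.

```python
def nmtc_tract_eligible(poverty_rate: float,
                        tract_mfi: float,
                        benchmark_mfi: float) -> bool:
    """Illustrative sketch of the NMTC low-income-community test.

    A Census tract qualifies if its poverty rate is at least 20 percent,
    or its median family income (MFI) does not exceed 80 percent of the
    state or metropolitan-area MFI, whichever benchmark applies.
    """
    if poverty_rate >= 0.20:                     # poverty-rate test
        return True
    return tract_mfi <= 0.80 * benchmark_mfi     # income test

# A tract with 15 percent poverty but MFI at 75 percent of the
# area benchmark qualifies under the income test.
print(nmtc_tract_eligible(0.15, 45_000, 60_000))  # True
```

Either test alone suffices, which is consistent with the report's finding that a large share of Census tracts (39 percent) qualified under these criteria.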
Although the allocation processes varied, these tax expenditures resemble grants in that an agency—either a federal agency or a state or local government—selects the qualifying communities, community development entities (CDE), or projects to receive the limited allocation available. For the EZ/RC program, communities nominated by their state and local governments had to submit a strategic plan showing how they would meet key EZ program principles or a written “course of action” with commitments to carry out specific legislatively mandated RC activities. In selecting the designated communities, HUD and USDA were required to rank EZ nominees based on the effectiveness of their plans, but HUD was required to designate RCs based in part on poverty, unemployment, and, in urban areas, income statistics. For designated EZs and RCs, state and local governments were responsible for allocating certain tax provisions with specified limits, including the RC Commercial Revitalization Deduction and EZ Facility bonds. For the NMTC program, the annual tax credit allocation limit was $3.5 billion for fiscal years 2010 and 2011. The CDFI Fund awards tax credit allocations to winning CDE applicants based on application scoring by peer review panels. The CDEs, in turn, invest in qualified low-income community investments. As of November 1, 2011, the CDFI Fund had allocated $29.5 billion in NMTC authority available from 2001 to 2010 and announced $3.6 billion in 2011 tax credit allocations on February 23, 2012. For more on the selection process, see GAO, Community Development: Federal Revitalization Programs Are Being Implemented, but Data on the Use of Tax Benefits Are Limited, GAO-04-306 (Washington, D.C.: Mar. 5, 2004). For the Recovery Zone bond programs, the national volume cap was $10 billion for Recovery Zone Economic Development Bonds and $15 billion for Recovery Zone Facility Bonds. 
State and local governments were responsible for allocating bond issuance authority to specific projects. Tribal Economic Development Bonds had a national volume cap of $2 billion. Tribal governments applied for allocations to issue bonds for specific projects. Other tax expenditures available in economically distressed communities are comparable to entitlement programs for which spending is determined by statutory rules for eligibility, benefit formulas, and other parameters rather than by Congress appropriating specific dollar amounts each year. Such tax expenditures typically make funds (through reduced taxes) available to all qualified claimants, regardless of how many taxpayers claim the tax expenditures, how much they claim collectively, or how much federal revenue is reduced by these claims. For example, businesses may claim Indian employment tax credits for employing Indian tribal members and their spouses without limit on the numbers or total amounts of claims. Similarly, businesses located in EZs and RCs may claim the EZ/RC Employment Credit and the Work Opportunity Tax Credit for employing eligible residents within an EZ or RC area without an aggregate limit on such tax credits. The term "brownfield site" means real property, the expansion, redevelopment, or reuse of which may be complicated by the presence or potential presence of a hazardous substance, pollutant, or contaminant. Both tax credits cannot be claimed for a single rehabilitation project. Eligible expenditures include costs incurred for rehabilitation and reconstruction of certain older buildings. Rehabilitation includes renovation, restoration, and reconstruction and does not include expansion or new construction. end of 2009, and the expensing of environmental remediation costs expired at the end of 2011. Two tax expenditures fund production of affordable rental housing for low-income households—the Low-Income Housing Tax Credit (LIHTC) and tax-exempt rental housing bonds.
Under the LIHTC, a 9 percent tax credit is available for new construction or substantial rehabilitation projects not otherwise subsidized by the federal government, and a 4 percent tax credit is available for the projects receiving other federal subsidies including rental bond financing. Affordable housing projects must satisfy one of two income-targeting requirements: 40 percent or more of the units must be occupied by households whose incomes are 60 percent or less of the area median gross income, or 20 percent or more of the units are occupied by households whose incomes are 50 percent or less of the area median gross income. For fiscal year 2010, two grant programs also helped provide gap financing for LIHTC housing development following disruption of the tax credit market in 2008. Federally tax-exempt and tax credit bonds issued by state and local governments also contribute to community development and other federal mission areas by financing infrastructure improvements and other projects. For example, state and local governments may issue private activity bonds to finance airports, docks, and other transportation infrastructure; large business projects tied to the employment of residents in Empowerment Zones; and water or wastewater facilities that enable communities to meet community facilities needs and support development. Qualified Zone Academy Bonds (QZAB)—the authority for which expired at the end of 2011—may be used for renovating school facilities, purchasing equipment, developing course materials, or training personnel at qualified public schools in economically distressed areas including designated EZs or RCs. Whereas private activity bonds are used to support specific private activities and facilities often intended to generate economic development, state and local governments may also issue tax-exempt public-purpose state and local bonds and Build America Bonds (BAB) to help finance public infrastructure and facilities. 
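As a hypothetical sketch, the LIHTC income-targeting requirement described above reduces to an either/or test: at least 40 percent of units occupied by households at or below 60 percent of area median gross income, or at least 20 percent of units at or below 50 percent. The function below is illustrative only and treats the two unit counts independently for simplicity.

```python
def meets_lihtc_targeting(total_units: int,
                          units_at_60_ami: int,
                          units_at_50_ami: int) -> bool:
    """Illustrative check of the two LIHTC minimum set-aside options.

    40/60 test: 40 percent or more of units occupied by households at
    or below 60 percent of area median gross income (AMI).
    20/50 test: 20 percent or more of units at or below 50 percent of AMI.
    """
    forty_sixty = units_at_60_ami / total_units >= 0.40
    twenty_fifty = units_at_50_ami / total_units >= 0.20
    return forty_sixty or twenty_fifty

# A 100-unit project with 45 units at or below 60 percent of AMI
# satisfies the 40/60 option.
print(meets_lihtc_targeting(100, 45, 0))  # True
```

A project needs to satisfy only one of the two options, so a deeply income-targeted project with fewer affordable units can qualify under the 20/50 test instead.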
In 2008, we reported that a majority of state and local bonds issued in 2006 were allocated for education or general purposes; for the latter category, it was not clear what activities or facilities were funded by the bonds. Given that community development activities comprise only a portion of governmental bonds, we did not sum the revenue losses for the two general bond provisions to avoid overstating federal support for community development. As shown in table 2, all of the multipurpose community development tax expenditures involve other entities in addition to IRS in administering the tax benefits. Five multipurpose tax expenditures resemble grants in that state and local governments oversee the allocation process to select qualifying projects to receive the limited allocation available. For the LIHTC for example, state housing finance agencies (HFA) award 9 percent credits to developers for low-income housing projects based on each state’s qualified allocation plan, which generally establishes a state’s funding priorities and selection criteria. Although the federal government does not set specific limits for general-purpose state and local bonds and BABs, private activity bond financing—including for rental housing and water systems—is generally subject to an annual volume cap for each state, and QZABs and bond financing for certain transportation facilities also have statutory allocation limits. The rehabilitation and brownfields tax expenditures resemble entitlement programs in that these tax incentives have no allocation limits and are available to all eligible claimants. In addition to IRS’s role in administering tax law, other federal and state agencies play a role in certifying that the properties are eligible for tax benefits.
For the 20 percent rehabilitation tax credit for certified historic structures, the National Park Service (NPS), with the assistance of State Historic Preservation Offices, certifies historic structures, approves rehabilitation applications, and confirms that completed rehabilitation projects meet the Secretary of the Interior’s Standards for Rehabilitation. For the brownfields tax expenditures, state environmental agencies certify eligible properties. The federal government has offered various mixes of temporary tax incentives and special rules to stimulate business recovery and provide relief to individuals after certain major disasters. See appendix VI for a detailed list of 45 tax benefits made available for specific disaster areas. Business recovery is a key element of a community’s recovery after a major disaster. To assist New York in recovering from the September 11, 2001, terrorist attacks, Congress passed a 2002 package with seven tax benefits targeted to the Liberty Zone in lower Manhattan. In the aftermath of the 2005 Gulf Coast hurricanes, Congress enacted the Gulf Opportunity Zone Act of 2005 (GO Zone Act) offering 33 tax benefits in part to promote business recovery and provide debt relief for states. A 2007 Kansas disaster relief package provided 13 tax benefits for 24 counties in Kansas affected by storms and tornadoes that began on May 4, 2007. A 2008 midwest disaster relief package targeted 26 tax benefits for selected counties in 10 states affected by tornadoes, severe storms, and flooding from May 20 through July 31, 2008. Also in 2008, Congress enacted a package offering eight tax benefits available to any individual or business located in any presidentially declared disaster area during calendar years 2008 and 2009.
The preponderance of the disaster tax incentives offered in the six legislative packages we examined were modifications of existing tax expenditures, including increased allocations for the NMTC, LIHTC, rehabilitation tax credits, and tax-exempt bond financing. Several tax packages have offered accelerated first-year depreciation allowing businesses to more quickly deduct costs of qualified property, as well as partial expensing for qualified disaster cleanup and environmental remediation costs. Other tax incentives available for individuals in disaster areas included increased tax credits for higher education expenses and relief from the additional 10 percent tax on early withdrawals of retirement funds. An eligible disaster area may encompass communities that were economically distressed before the disaster as well as other communities, and taxpayers in the qualified area may be eligible for some tax incentives even if they did not necessarily sustain losses in the disaster. For those disaster tax incentives available to individuals and businesses as long as they meet specified federal requirements, the full cost to the federal government depends on how many taxpayers claim the provisions on their tax returns. For community development, tax expenditures are not necessarily an either/or alternative, and they may be combined to support certain community development activities. The design of each community development tax expenditure we reviewed appears to overlap with that of at least one other tax expenditure, as the following examples illustrate. Five tax expenditures targeted similar geography—economically distressed areas including tribal areas—although the specific areas served varied. Within the EZ- and RC-designated communities, a variety of tax incentives were available to help reduce unemployment and stimulate business activity. 
Seven bond tax expenditures share a common goal to finance infrastructure development. The various bond authorities are not necessarily duplicative in that they allow flexibility in tax-exempt bond financing for similar projects with different ownership characteristics. For example, water and sewer facilities can be financed through public-purpose governmental bonds if a governmental entity is the owner and operator or through private activity bonds if the owner and operator is a private business. Multiple tax expenditures—including the NMTC, several EZ/RC incentives, as well as the rehabilitation and brownfields tax expenditures—can be used to fund commercial buildings. Within this broad area of overlap, the tax expenditures are not necessarily duplicative in that some target certain types of buildings. The various tax expenditures that can be used to fund commercial buildings have geographic or other targets that sometimes coincide and sometimes do not. For example, the 20 percent rehabilitation tax credit targets certified historic structures and the 10 percent rehabilitation credit is available for other older structures, but these eligible structures may or may not fall within the low-income communities eligible for NMTC assistance. Various tax benefits made available for certain disaster areas were largely modifications of existing tax expenditures. The community development tax expenditures we reviewed also may potentially overlap with federal spending programs. As discussed above, our May 2011 report identified overlap among 80 economic development spending programs administered by four agencies—Commerce, HUD, SBA, and USDA. Appendix VII discusses areas of overlap among the economic development spending programs that are similar to the areas of community development tax expenditure overlap discussed above. Disaster tax aid may also potentially overlap with federal financial assistance offered through disaster assistance grants and loans.
Areas of overlap with multiple tax expenditures funding the same community development project may not represent unnecessary duplication, in part, because some tax expenditures are designed to be used in combination. As an example, the 4 percent LIHTC is designed to be used in combination with rental housing bonds. In another example, the 20 percent historic preservation tax credit may be used in combination with other community development tax expenditures, including the NMTC and LIHTC. Under the Housing and Economic Recovery Act of 2008, state HFAs are allowed to consider historic preservation as a selection factor in their qualified allocation plans to promote redeveloping historic structures as affordable housing. As shown in table 3, federal tax laws and regulations impose limits on how community development tax expenditures can be combined with each other and with spending programs to fund the same individual or project. For example, employers cannot double dip by claiming two employment tax credits for the same wages paid to an individual. Whereas business investors may claim accelerated depreciation for LIHTC and NMTC projects, businesses generally may not claim accelerated depreciation for private facilities financed with tax-preferred bonds. For the rehabilitation tax credits and brownfield tax incentives, taxpayers may not claim costs funded by federal or state grants. Also, rehabilitation costs claimed for the 20 percent credit cannot be counted toward the adjusted basis of a property for purposes of calculating the amount of other federal tax credits claimed for the same project; as a result, the effective tax savings from using the 20 percent credit with other federal tax credits are less than the sum of the tax savings each of the credits and deductions would provide if they could be used together without this restriction.
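To make the basis-adjustment interaction concrete, the following sketch works through purely hypothetical figures; the $1 million project, the $400,000 of rehabilitation costs, and the 39 percent rate standing in for another federal credit are illustrative assumptions, not values from this report:

```python
# Hypothetical illustration of the basis-adjustment restriction described
# above. All dollar amounts and the 39% rate used for the "other" federal
# credit are assumptions chosen for demonstration only.

total_costs = 1_000_000   # total project basis (hypothetical)
rehab_costs = 400_000     # rehabilitation costs claimed for the 20% credit

rehab_credit = 0.20 * rehab_costs            # 20% rehabilitation credit

# If the credits could stack without restriction, the other credit would
# be computed on the full project basis.
other_unrestricted = 0.39 * total_costs

# Under the restriction, costs claimed for the 20% credit are excluded
# from the adjusted basis used to compute the other federal credit.
other_restricted = 0.39 * (total_costs - rehab_costs)

combined_actual = rehab_credit + other_restricted    # ~314,000
naive_sum = rehab_credit + other_unrestricted        # ~470,000

# The effective savings from combining credits fall short of the naive sum.
assert combined_actual < naive_sum
```

The point is only directional: whenever part of a project's basis is carved out for the 20 percent credit, the combined tax savings are smaller than simply adding the standalone credits would suggest.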
The information on tax law and regulatory limits listed in table 3 is not exhaustive; additional limits may apply in other federal laws and regulations. An area of potential overlap also exists among the tax expenditures subsidizing community development activities and CRA regulatory requirements for depository institutions in helping to meet the credit needs of the communities in which they operate. Banks earn positive consideration toward their CRA regulatory ratings by investing in projects also receiving certain tax benefits. In 2007, we reported that investors used the NMTC and LIHTC to meet their CRA requirements. At that time, over 40 percent of NMTC investors reported that they used the tax credit to remain compliant with CRA. NMTC investors using the tax credit to meet CRA requirements also viewed it as very or somewhat important in their decision to make the investment. Nearly half of NMTC investors we surveyed in 2007 reported that they made other investments eligible for the LIHTC, and nearly three-quarters of those investors using both tax credits were also required to comply with the CRA.

Federal community development financing is fragmented, with multiple federal agencies administering related spending programs as well as with multiple federal, state, and local agencies helping administer certain tax expenditures. As we have previously reported, mission fragmentation and program overlap may sometimes be necessary when the resources and expertise of more than one agency are required to address a complex public need. For example, IRS, NPS, and state historic preservation offices are involved in administering the 20 percent historic preservation tax credit for rehabilitating historic structures. NPS oversees compliance with technical standards for historic preservation, and IRS oversees financial aspects of the tax credit. NPS and IRS have partnered, with IRS providing guidance, including frequently asked questions about the tax credit, on the NPS website.
At the same time, fragmentation can sometimes result in administrative burdens, duplication of efforts, and inefficient use of resources. Applicants may need to apply for tax expenditures and spending programs at multiple agencies to address the needs of a distressed area or finance a specific project. For example, owners and developers seeking to restore an historic structure for use as affordable rental housing would need to apply separately to NPS for the 20 percent historic rehabilitation credit as well as to the state HFA for a LIHTC allocation. Achieving results for the nation increasingly requires that federal agencies work together to identify ways to deliver results more efficiently and in a way that is consistent with limited budgetary resources. Agencies and programs working collaboratively can often achieve more public value than when they work in isolation. To address the potential for overlap and fragmentation among federal programs, we have previously identified collaborative practices agencies should consider implementing in order to maximize the performance and results of federal programs that share common outcomes (GAO-11-318SP). These practices include defining common outcomes; agreeing on roles and responsibilities for collaborative efforts; establishing compatible policies and procedures; and developing mechanisms to monitor, assess, and report on performance results.

To the extent possible, data sharing is a way to reduce collection costs and paperwork burdens imposed on the public. In general, IRS only collects information necessary for tax administration or for other purposes required by law. As a result, IRS does not collect basic information about the numbers of taxpayers using some community development tax expenditures. We have consistently reported that IRS does not have data on the use of various expensing and special depreciation incentives available to encourage investment in EZ/RC communities, tribal reservations, and disaster areas.
For tax credits, IRS has data on the numbers of taxpayers and aggregate amounts claimed, but the data often do not tie use of the tax credits to specific communities. Location information is critical to identifying the community where an incentive is used and determining the effect of the tax benefit on local economic development. For bonds, IRS collects data on the amount of bonds issued and broad purpose categories for governmental bonds and allowable uses for qualified private activity bonds. As we reported in 2008, while the information collected is useful for presenting summary information, it provides only a broad picture of the facilities and activities for which the bonds are used. While this information is sufficient for IRS to administer the tax code, it provides little information for use in measuring performance. As a result, information often has not been available to help Congress determine the effectiveness of some tax expenditures or even identify the numbers of taxpayers using some provisions. Table 4 summarizes the types of information, including limitations and potential gaps, IRS collects for different types of community development tax expenditures.

Our systematic review of literature for select community development tax expenditures generally found few studies that attempted to assess the effectiveness of programs in promoting certain measures of community development, such as reducing poverty or unemployment rates. We reviewed government studies and academic literature on the following community development tax expenditures: the NMTC, the EZ tax program, disaster relief tax provisions, and the rehabilitation tax credits. In reviewing this literature, we focused on studies that attempted to analyze the impact of the tax expenditures on community development through empirical methods. We also summarized our prior observations and recommendations on options to improve tax expenditure design and considerations in authorizing similar community development tax programs.
For the NMTC, we did not identify any empirical studies issued since our last report in January 2010. For the EZ program, we identified several studies published since our most recent report in March 2010 that attempted to measure the effect of the program on some measure of community development, as described below. We identified one study on the rehabilitation tax credits that attempted to measure one aspect of community development. We did not identify any empirical studies on disaster tax relief provisions. The scarcity of literature on some tax expenditures may be due to the fact that establishing that a community development tax expenditure or spending program has a causal impact on economic growth in a specific community can be challenging. Table 6 below summarizes key methodological issues in attempting to measure the effectiveness of the tax expenditures we selected.

As we reported in 2010, making definitive assessments about the extent to which benefits flow to targeted communities as a direct result of NMTC investments presented challenges. For example, the small size of the NMTC projects relative to the total economic activity within an area made it difficult to detect the separate effect of a particular project. Many of the eligible communities may already have significant business activities that could mask NMTC impacts. Limitations associated with available data also made it difficult to determine whether benefits generated in a low-income community outside the scope of a particular project are the direct result of the NMTC program. As discussed above, the CDFI Fund is collecting additional data on the use of the NMTC that may provide further insights into its use and impact on communities. For example, the CDFI Fund is now collecting data on the amount of equity that CDEs estimate will be left in the businesses at the end of the 7-year period in which tax credits can be claimed.
Collecting this information may provide the CDFI Fund with additional information on the credit's cost-effectiveness. Our 2007 NMTC report used statistical methods to attempt to measure the credit's effectiveness, but determined that further analysis is needed to determine whether the economic costs of shifting investment are justified. Our analysis did find that the credit may be increasing investment in low-income communities, although this finding was not, in and of itself, sufficient to determine that the credit was effective. Increased investment in low-income communities can occur when NMTC investors increase their total funds available for investment or when they shift funds from other uses. A complete evaluation of the program's effectiveness would require determining the costs of the program, including any behavioral changes by taxpayers that may be introduced by shifted investment funds. Neither our statistical analysis nor the results of a survey we administered allowed us to determine definitively whether shifted investment funds came from higher-income communities or from other low-income community investments. (The related entities test requires that the CDE have no more than a 50 percent ownership stake in a qualified low-income community business.) In 2010, we suggested that Congress consider offering grants in lieu of tax credits if it extends the NMTC program, which expired at the end of 2011. If it does so, Congress should require Treasury's CDFI Fund to gather data to assess whether and to what extent the grant program increases the amount of federal subsidy provided to low-income community businesses compared to the NMTC; how costs for administering the program incurred by the CDFI Fund, CDEs, and investors would change; and whether the grant program otherwise affects the success of efforts to assist low-income communities.

We did not identify any empirical studies on the effectiveness of the NMTC since our last report, but the CDFI Fund has contracted with the Urban Institute for an evaluation of the NMTC that may lead to additional insights into the program's effectiveness. In 2010, the Urban Institute published a literature review to inform a forthcoming evaluation, including challenges inherent in evaluating economic and community development programs in general (Martin D. Abravanel, Nancy M. Pindus, and Brett Theodos, Evaluating Community and Economic Development Programs: A Literature Review to Inform Evaluation of the New Markets Tax Credit Program, The Urban Institute, September 2010). The CDFI Fund reports that the Urban Institute is primarily relying on surveys of CDEs and businesses to conduct the evaluation. The Urban Institute conducted a preliminary briefing on the study's results with the CDFI Fund in January 2012. After submitting a draft report to the CDFI Fund, the Urban Institute will issue a final report in spring 2012.

Our prior work has found improvements in certain measures of community development in EZ communities, but data and methodological challenges make it difficult to establish causal links. Our 2006 report found that Round 1 EZs that received a combination of grant and tax benefits did show improvements in poverty and unemployment, but we did not find a definitive connection between these changes and the EZ program. Our 2010 report on the EZ/RC program reviewed seven academic studies of Round 1 projects and found that the evaluations used different methods and reported varying results with regard to poverty and unemployment. For example, one study concluded that the program reduces poverty and unemployment, while another study found that the program did not improve those measures of community development. As with the NMTC, our prior EZ/RC work has demonstrated challenges in measuring the effects of the program.
For example, data limitations make it difficult to thoroughly evaluate the program's effectiveness in that use of the EZ/RC Employment Credit cannot be tied to specific communities. Demonstrating what would have happened in the absence of the credit is difficult. External factors, such as national and local economic trends, can make it difficult to isolate the effects of the EZ/RC tax incentives.

Since our 2010 EZ/RC report, we have noted that more recent studies comparing employment, housing values, and poverty rates in EZ communities with similarly economically distressed areas have yielded mixed results. Two studies have found lower unemployment in the designated areas where the provisions have been used relative to similar non-EZ areas. Specifically, one study reviewed federal and state enterprise zones and found positive impacts on local labor markets in terms of the unemployment rate and poverty rate. In addition, the researchers found positive, but statistically insignificant, spillover effects to neighboring Census tracts. The second study focused on Round 1 of the EZ program and found that the EZ designation substantially increased employment in zone neighborhoods, particularly for zone residents. Importantly, the researchers examined Round 1 of the program, which relied on a mix of tax benefits and grant funding. In addition, another study found that EZ program results seem to vary among different types of businesses within the designated zones. For example, researchers found that EZ tax incentives increase the share of retail and service sector establishments but decrease the share of transportation, finance, and real estate industries. They noted that the effectiveness of the EZ wage credit may be affected by the types of industries that are located in the designated area.
However, while these studies have found that certain economic outcomes are associated with an area being eligible for EZ incentives, due to data limitations the studies cannot estimate the extent to which these outcomes vary with the amount of incentives actually used in an area. Both JCT and the Congressional Research Service (CRS) conducted literature reviews and reported modest effects and methodological limitations in making any definite assessments on the effectiveness of EZs. JCT reported that studies generally found modest effects overall with relatively high costs. In addition, it is difficult to determine whether the spending or tax incentives were responsible for any increases in economic activity. CRS's review of academic literature found modest, if any, effects of the program and called their cost-effectiveness into question. According to CRS, one persistent issue in evaluating the potential impact of EZs is the inherent difficulty of identifying the effect of the programs apart from overall economic conditions.

With the expiration of the RCs at the end of 2009 and EZs at the end of 2011, we have made observations in prior work that Congress can consider if these or similar programs are authorized in the future. Without adequate data on the use of program grant funds or tax benefits, neither the responsible federal agencies nor we could determine whether the EZ/EC funds had been spent effectively or that the tax benefits had in fact been used as intended. If Congress authorizes similar programs that rely heavily on tax benefits in the future, it would be prudent for federal agencies responsible for administering the programs to collect information necessary for determining whether the tax benefits are effective in achieving program goals. In 2010, the U.S. Census Bureau began releasing more frequent poverty and employment updates at the Census tract level than it has traditionally provided.
This information could be a useful tool in determining the effects of such programs on poverty and employment in designated Census tracts. Though we identified literature that discussed use of disaster tax provisions and their design, none of the articles attempted to measure empirically the impact the incentives had on promoting community development. A potential challenge in designing tax relief for disaster areas is that those communities within the zones most affected by the disaster may be slower to respond to the incentives than other areas within the zone. Our prior work on the GO Zone reported that bonds were awarded on a first-come, first-served basis that led to awarding bond allocation to projects in less damaged areas in the zone because businesses in these areas were ready to apply for and issue bonds before businesses in more damaged areas could make use of the incentive. Thus, assessing the impact of disaster relief on an entire zone may not reflect how the provisions affected specific areas within the zone. Another key challenge in evaluating disaster relief tax expenditures is the difficulty in establishing a comparison area where a “comparable” disaster has taken place but government programs or tax provisions were not available. Moreover, evaluations of disaster relief tax expenditures may be difficult because IRS collects limited information on the use of temporary disaster aid, as discussed above. While we identified numerous articles focused on historic restoration funded with the federal rehabilitation tax credits and the potential benefits of historic preservation in adapting currently vacant or underused property, we identified only one study that attempted to empirically measure the impact of the tax credit on community development. 
The study analyzed rehabilitation investment in the Boston office building market between 1978 and 1991 and found that the percentage of investment spending that would have occurred without the tax credit varied over time from about 60 to 90 percent. Another study we reviewed used economic modeling to quantify some community development outputs associated with the 20 percent rehabilitation tax credit, such as estimated jobs and projected income data. However, the study did not assess whether a rehabilitation project would have occurred in the absence of the credit, nor did it compare community development in a project community with development in similar communities. As we previously reported, a complete evaluation of a credit's effectiveness also requires determining the costs of the program and an assessment of the program's economic and social benefits. A challenge in attempting to evaluate how the rehabilitation tax credits affect measures of community development is that the credits have a dual purpose and are not solely intended to promote community development. Evaluators may have difficulty reviewing the program's effectiveness because they lack specific data on the geographic locations of the projects. In addition, the small size of the rehabilitation tax credit projects relative to the total activity in the area's economy makes it difficult to isolate the economic effects of the credit.

The annual federal commitment to community development is substantial, with revenue losses from community development-related tax expenditures alone totaling many billions of dollars. However, all too often even basic information is not available about who claims tax benefits from community development tax expenditures and which communities benefit from the activities supported by the tax expenditures.
Further, relatively few evaluations of the effectiveness of community development tax expenditures have been done and when they have been done, results have often been mixed about their effects. These issues are familiar and long-standing for tax expenditures generally. We have made recommendations to OMB in 1994 and 2005 to move the Executive Branch forward in obtaining and using information to evaluate tax expenditures’ performance, which can help in comparing their performance to that of related federal efforts. GPRAMA offers a new opportunity to make progress on these issues. For those limited areas where OMB sets long-term, outcome-oriented, crosscutting priority goals for the federal government, a more coordinated and focused effort should ensue to identify, collect, and use the information needed to assess how well the government is achieving the goals and how those efforts can be improved. We look forward to progress in achieving GPRAMA’s vision for a more robust basis for judging how well the government is achieving its priority goals. The Administration’s interim crosscutting policy goals include some that identify tax expenditures among the contributing programs and activities. OMB’s forthcoming guidance should be helpful in further drawing tax expenditures into the GPRAMA crosscutting performance framework. Clearly, community development is but one of many areas where OMB could choose to set priority goals, and the interim goals to date encompass 1 of the 23 tax expenditures we reviewed. In this regard, Congress has a continuing opportunity to express its priorities about the goals that should be selected, including whether community development should be among the next cycle of goals. 
Whether or not OMB selects community development as a priority goal area, Congress also has the opportunity to urge more evaluation and focus Executive Branch efforts on addressing community development performance issues through oversight activities, such as hearings and formal and informal meetings with agency officials. Given the overlap and fragmentation across community development tax and spending programs, coordinated congressional efforts, such as joint hearings, may facilitate crosscutting reviews and ensure Executive Branch efforts are mutually reinforcing. While GPRAMA provides a powerful opportunity to review how tax expenditures contribute to crosscutting goals, progress is likely to be incremental and require sustained focus. Evaluating the impact of community development efforts is inherently difficult and definitive performance conclusions often cannot be drawn. Data limitations are not easy or inexpensive to overcome, and resources to evaluate programs must compete with other priorities even as the federal government copes with significant fiscal challenges. Thus, judicious choices will need to be made as efforts to improve tax expenditure performance information available to policymakers continue. Congress may wish to use GPRAMA’s consultation process to provide guidance on whether community development should be among OMB’s long-term crosscutting priority goals as well as stress the need for evaluations whether or not community development is on the crosscutting priority list. Congress may also wish to focus attention on addressing community development tax expenditure performance issues through its oversight activities. 
We provided a draft of this report for review and comment to the Director of OMB, the Secretary of the Treasury, the Commissioner of Internal Revenue, as well as representatives of three federal agencies helping administer certain community development tax expenditures—the Director of the CDFI Fund, the Secretary of Housing and Urban Development (HUD), and the Secretary of the Interior (Interior). The Deputy General Counsel of OMB, the Director of HUD’s Office of Community Renewal, the GAO Audit Liaison of Interior, and the Director of the CDFI Fund provided general comments. The first three provided email comments and the last provided a comment letter which is reprinted in appendix VIII. Only the HUD comments addressed our matters for congressional consideration directly, stating that the report provided minimal justification for them. Although the Secretary of the Treasury and Commissioner of Internal Revenue did not provide written comments, Treasury’s Office of Tax Analysis and IRS’s Office of Legislative Affairs provided technical changes, which we incorporated where appropriate. While not commenting on our matters for congressional consideration, OMB staff reiterated the view that the Administration has made significant progress in addressing tax expenditures. OMB staff cited assorted Fiscal Year 2013 budget proposals which it estimated would save billions of dollars by eliminating certain spending through the tax code and modifying other tax provisions. Some of the budget proposals relate to tax expenditures covered in this report, and we updated the text to reflect the President’s latest proposals. We also updated our report to reflect the release of new interim crosscutting priority goals and that the Administration has identified some tax expenditures that contribute to these goals, as required under GPRAMA. OMB staff said that this is a significant step forward and will be important for broader GPRAMA implementation over 2012 and 2013. 
We agree that this inclusion of tax expenditures along with related other programs in the GPRAMA goals is an important step toward providing policymakers with the breadth of information needed to understand the full federal effort to accomplish national objectives. Finally, OMB staff expressed concern that we were suggesting that tax expenditures be addressed through a “one size fits all” framework. We do not believe this report or earlier products suggest that assessing the performance of tax expenditures be done in only one way. We have emphasized the need for greater scrutiny of tax expenditures and more transparency over how well they work and how they compare to other related federal programs. In its comments, HUD described the report as substantive and comprehensive in addressing community development tax incentives with accurate information about the EZ/RC tax expenditures and HUD’s role in their administration. However, HUD expressed the view that we had minimal justification for our matters for Congress to consider using the GPRAMA consultation process to express congressional priorities related to community development and to focus attention on community development tax expenditures’ performance through its oversight activities. We disagree. The basic issues we found in this review—the all too often lack of even basic information about tax expenditures’ use and the relative paucity of evaluations of their performance—are among the key issues that could be mitigated through GPRAMA crosscutting goals and Congress’s oversight activities. HUD also said we had skirted the issue of identifying programs with the greatest probability for elimination due to duplication, fragmentation, and overlap. This was not among our review’s objectives and we believe the type of information we present can assist Congress in understanding what information is available to support such decisions. 
As we have previously reported, agencies engaging Congress in identifying which issues to address and what to measure are critical, and GPRAMA significantly enhances requirements on the consultation process. With the release of the interim crosscutting goals, we believe that Congress has a continuing opportunity to express its priorities regarding community development ahead of the next goal cycle due in February 2014. HUD also noted the expiration of some tax expenditures and sought clarification about their inclusion in the report. Our report includes recently expired tax expenditures and where applicable discusses our prior findings and suggestions for Congress to consider if it wishes to extend the tax expenditures that have expired or create similar new ones. HUD also provided technical and editorial comments which we incorporated as appropriate. In its comments, Interior disagreed with several findings. Interior characterized our report as expressing the view that unwarranted overlap, fragmentation, or duplication existed involving the 20 percent historic rehabilitation credit that Interior’s NPS helps administer. Interior agreed that the tax credit—which has a primary purpose to preserve and rehabilitate historic buildings—has a two-fold mission to also promote community development by revitalizing historic districts and neighborhoods. However, Interior disagreed that the historic rehabilitation tax credit overlaps or duplicates with other community development tax expenditures. Interior stated that only the tax credit has a specific purpose to preserve historic buildings, that the tax credit is not targeted to certain census tracts or low-income areas, and that Congress generally did not exclude historic tax credit users from also using other federal programs. 
In addition, Interior said that the administration of the historic rehabilitation tax credit was not fragmented, but instead was an example of joint administration that effectively draws upon the best resources of two federal agencies in a coordinated way to implement the law. Finally, Interior disagreed with our finding that limited information is available about the effectiveness of the 20 percent historic rehabilitation tax credit. Our report does not characterize any overlap, fragmentation, or duplication as “unwarranted.” Rather, we provide a factual description based on standard definitions used in many GAO reports of the relationships between the various tax expenditures that have at least a partial purpose of supporting community development. We make the same point that Interior raises as well—that Congress was aware of and often designed rules to govern the interrelationships among these tax expenditures. Accordingly, our report says these interrelationships do not necessarily represent unnecessary duplication. Based on Interior’s comments, however, we further clarified our text to note that one of the differences between the historic rehabilitation credit and the other community development tax expenditures is that the rehabilitation credit targets certain older structures. Regarding Interior’s comment about fragmentation in the credit’s administration, our report describes the roles of IRS and NPS and says fragmentation may sometimes be necessary when the resources and expertise of more than one agency are required, such as in the case of NPS overseeing technical standards for historic preservation. As we reported, however, fragmentation can result in administrative burdens when an applicant needs to apply at multiple agencies to finance a specific project, such as restoring a historic building as low-income housing. 
Finally, regarding Interior’s comments on the effectiveness of the rehabilitation tax credit, we continue to note that little is known about the effectiveness of the credit as a community development program given that we identified only one empirical analysis of the effect of the tax credit on community development. Interior pointed specifically to reports based on an economic model NPS helped fund. However, as our report states, the modeling reports did not assess what would have happened in the absence of the historic rehabilitation tax credits or compare development in tax credit project communities to similar communities. In its comment letter (reprinted in app. VIII), the CDFI Fund said that it appreciated GAO’s ongoing efforts to improve and strengthen performance measurement and evaluation of community and economic development programs. The CDFI Fund said that it has committed resources to systematically evaluate the impacts of the NMTC program and proposed to develop tools that would have provided standard benchmarking and estimation techniques for measuring outcomes and coordinating reporting for projects with multiple sources of funding. Our literature review for this report drew on a study contracted by the CDFI Fund that provided an overview of the inherent challenges in evaluating community development programs. The literature review will inform a forthcoming independent evaluation of the NMTC to be issued later this spring. The CDFI Fund also provided technical comments which we incorporated as appropriate. The CDFI Fund said that it continued to have strong reservations with our 2010 option for Congress to consider offering grants in lieu of NMTC tax credits if it extends the NMTC program. 
As stated in our 2010 report and reiterated as a cost saving option in our 2011 duplication report, our analysis suggests that converting the NMTC to a grant program would increase the amount of the equity investment that could be placed in low-income businesses and make the federal subsidy more cost-effective. Our 2010 report addressed both concerns that the CDFI Fund reiterated in its comments on this report. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the Director of the Office of Management and Budget, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report are listed in appendix IX.

Our objectives were to (1) identify tax expenditures that promote community development, and areas of potential overlap and interactions among them; (2) assess data and performance measures available and used to assess performance for community development tax expenditures; and (3) determine what previous studies have found about the effectiveness of selected tax expenditures in promoting community development. While both the U.S. Department of the Treasury (Treasury) and the Joint Committee on Taxation (JCT) annually compile a list of tax expenditures and estimates of their cost, the Treasury and JCT lists differ somewhat in terms of what is listed as a tax expenditure and how many specific provisions may be combined in a listed tax expenditure.
Our count of community development tax expenditures is based on the Treasury and JCT published tax expenditure lists, detailed below. Where a single tax expenditure listing encompasses more than one tax code provision, we separately describe those provisions to provide a more detailed perspective of the mix of tax assistance available for community development. Federal agencies do not have a standard definition of what constitutes community or economic development. To identify community development tax expenditures, we developed a list of community development activities based on various federal sources and compared these activities to the authorized uses of tax expenditures. As a starting point for developing the list of activities, we used the definition of the community and regional development budget function and its three subfunctions—urban community development, rural and regional development, and disaster relief and insurance. Treasury and JCT list tax expenditures by budget function. We also used descriptions of spending programs under the community and regional development budget function as detailed in the 2010 Catalog of Federal Domestic Assistance (CFDA). We further reviewed descriptions of allowable uses under the Community Development Block Grant (CDBG)—the largest single spending program in the budget function. Finally, we reviewed the community development definition for the Community Reinvestment Act (CRA) and identified certain tax expenditures that banks can use in meeting CRA community investment tests. We included tax expenditures targeted to certain geographies, such as low-income areas or designated disaster areas, or specific populations, such as Native Americans. Table 7 summarizes the definition of community development for purposes of this report. We compiled a preliminary list of tax expenditures for fiscal year 2010 listed under the community and regional development budget function by Treasury and JCT.
Our universe included expired tax expenditures listed by either Treasury or JCT which had estimated revenue losses or outlays in fiscal year 2010. While the tax expenditure lists published by Treasury and JCT are generally similar, specific tax expenditures reported by each under the community and regional development budget function differed, as shown in table 8. Four tax expenditures were listed by both under the community and regional development budget function. Another four tax expenditures were reported by both Treasury and JCT but appeared under community and regional development function on one list and under a different budget function on the other list. Fourteen tax expenditures were reported under the community and regional development budget function by either Treasury or JCT, including eight tax expenditures supporting disaster relief and recovery. Whereas JCT lists six disaster tax packages as tax expenditures, Treasury officials told us that disaster-related revenue losses were included in Treasury estimates for specific tax expenditures made available in disaster areas. For example, revenue losses from additional allocations of the Low-Income Housing Tax Credit for the GO Zone were incorporated into Treasury’s Low-Income Housing Tax Credit estimate. To avoid double-counting, we dropped two tax expenditures—credit to holders of Gulf and Midwest tax credit bonds, and employee retention credit for employers in certain federal disaster areas—listed separately by Treasury that were included in the JCT disaster package estimates. We used JCT and Internal Revenue Service (IRS) documents to identify specific tax code provisions within the disaster relief tax expenditures on JCT’s list. Appendix VI lists 45 tax provisions and special rules in the six disaster relief tax expenditures included in JCT’s list. We did not sum disaster revenue loss estimates to avoid double counting amounts already included in estimates for specific tax expenditures. 
Using our list of community development activities as criteria, we also identified tax expenditures reported by Treasury under other budget functions that appeared to be at least partially intended to support activities we had identified as community development activities. Table 9 includes six tax expenditures reported by Treasury under other budget functions and our rationale for inclusion. Table 10 shows how we categorized the community development tax expenditures as primarily promoting community development versus supporting community development and other federal mission areas. We shared the preliminary universe of community development tax expenditures with Treasury, IRS, the Office of Management and Budget (OMB), and the Congressional Research Service (CRS). We also shared the preliminary universe with federal agencies helping administer specific community development tax expenditures, including the Community Development Financial Institutions (CDFI) Fund, which administers the New Markets Tax Credit; the Department of Housing and Urban Development (HUD), which helps administer the Empowerment Zones and Renewal Communities programs; and the National Park Service (NPS), which helps administer rehabilitation tax credits. We asked these agencies to review the preliminary universe and confirm that the tax expenditures could be used to promote community development, delete tax expenditures that were listed incorrectly or are duplicative, or add tax programs that we had omitted. Based on feedback from federal agencies, we refined the universe of community development tax expenditures as appropriate. We excluded six tax expenditures reported under the community and regional development budget function, as shown in table 11. As discussed above, we excluded two disaster tax expenditures listed by Treasury to avoid double counting disaster aid packages listed by JCT.
Similarly, we excluded a District of Columbia tax expenditure listed by JCT to avoid duplication with Treasury’s estimate for Empowerment Zones and Renewal Communities. We excluded three tax expenditures listed by Treasury or JCT under the community and regional development budget function that were not specifically linked to community development activities. Our final universe does not include various energy tax expenditures that may be claimed for bank investments used to meet CRA regulatory requirements nor tax expenditures for deductible charitable contributions. Although certain charitable contributions may fund organizations or activities that contribute to community development, we excluded charitable contribution tax deductions from the universe based on external feedback that it is not feasible to isolate the community development portion of the large charitable contributions tax expenditures or link the charitable aid to specific communities. See appendix II for our final universe of 23 community development tax expenditures. This count reflects the number of tax expenditures as reported on the Treasury or JCT lists. Whereas appendix II lists the Empowerment Zones and Renewal Communities (EZ/RC) as a single tax expenditure consistent with Treasury’s list, appendix IV details the various tax incentives available in EZs and RCs. We used Treasury revenue loss estimates for each tax expenditure except in cases where only JCT reported a tax expenditure. Where appropriate, we summed revenue loss estimates to approximate the total federal revenue forgone through tax expenditures that support community development. Certain tax expenditures, including tax credit and direct payment bonds, also have associated outlays, and we included those outlays in presenting total costs. While sufficiently reliable as a gauge of general magnitude, the sum of the individual tax expenditure estimates does not take into account interactions between individual provisions. 
To identify areas of potential overlap among the tax expenditures, we used the definitions from our March 2011 report on duplication in government programs:
Overlap occurs when multiple agencies or programs have similar goals, similar activities or strategies to achieve them, or similar target beneficiaries.
Fragmentation refers to circumstances where multiple agencies or offices are involved in serving the same broad area of national need.
Duplication occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries.
Using information from prior GAO products; publications from CRS, IRS, JCT, the Office of the Comptroller of the Currency (OCC), and OMB; as well as documentation from other federal agencies helping administer specific tax expenditures, we compiled publicly available information about each tax expenditure’s design and implementation, including descriptions; specific geographies or populations targeted; volume caps and other allocation limits; and roles of entities within and outside the federal government in administration. Based on the information we collected and the clarifications that the agencies provided, we determined that this descriptive information was sufficiently reliable for the purposes of this engagement to identify potential duplication, overlap, and fragmentation. We reviewed the Internal Revenue Code and IRS regulations to identify allowable interactions or limits on using community development tax expenditures together. Where specified in tax law and regulations, we also identified interactions and limits on using tax expenditures with other federal spending programs.
The review of allowable interactions and limits was not exhaustive—we did not search documentation from all federal agencies carrying out community development programs, and regulations for related spending programs may also document interactions between those programs and the community development tax expenditures. To determine what data and performance measures are available and used to assess community development tax expenditures, we identified the data elements and types of information that IRS and federal agencies collect. We also reviewed tax forms, instructions, and other guidance and interviewed IRS officials to determine the types of information that IRS collects on how the tax expenditures in our universe are used. For certain community development tax expenditures in our universe where other federal agencies help with administration—the New Markets Tax Credit, Empowerment Zone/Renewal Community tax incentives, and the rehabilitation tax credits—we reviewed prior GAO reports, and interviewed and collected information from the CDFI Fund, HUD, and NPS to identify their roles in helping administer the tax expenditures and any measures the agencies use to review tax expenditure performance. We also interviewed officials and reviewed documentation from OMB, Treasury, IRS, HUD, and NPS about efforts to assess performance for community development tax expenditures and any crosscutting reviews of related tax and spending programs. For the purposes of this report, we focused on information collected by federal agencies. State and local entities also collect information on some of the tax expenditures included in our universe. For example, housing finance agencies collect data on low-income housing tax credit projects. Similarly, state and local bond financing authorities may have additional data on specific projects and activities funded with federally subsidized bond financing. 
To determine what previous studies have found about effectiveness for selected tax expenditures, we conducted a literature review for selected tax expenditures—the Empowerment Zone/Renewal Community tax programs, the New Markets Tax Credit program, and tax expenditures available for certain disaster areas. We selected these tax expenditures because they account for most of the 2010 revenue loss for the tax expenditures that primarily promote community development. The EZ tax incentives and the NMTC expired after December 31, 2011. For the EZ/RC and NMTC programs, we focused on literature published since our 2010 reports on these programs. We also selected the rehabilitation tax credits; these multipurpose tax expenditures support community development as well as another federal mission area, and they can be used in combination with other community development tax expenditures. We searched databases, such as Proquest, Google Scholar, and Econlit, for studies through May 2011. To target our literature review on effectiveness, we identified studies that attempted to measure the impact of the incentives on certain measures of community development, such as poverty and unemployment rates. We reviewed studies that met the following criteria: studies that include original data analysis, studies based on empirical or peer-reviewed research, and studies not derived from or sponsored by associations representing industry groups and other organizations that may benefit from adjustments to laws and regulations concerning community development tax expenditures. Using these criteria, we identified and reviewed eight studies on the EZ/RC programs published since our most recent report on the topic.
For NMTC, although we did not identify any new studies meeting our criteria, we included a literature review study contracted by CDFI Fund that was intended to provide the groundwork for a forthcoming evaluation and provides an overview of inherent challenges in evaluating community development programs. Additionally, we summarized our prior findings about the selected tax expenditures, and these findings are not generalizable to the universe of community development tax expenditures. For the rehabilitation tax credits, we identified one study that used empirical methods to measure one aspect of community development. We also included an academic study prepared with assistance from NPS that highlights some limitations in attempting to evaluate the effectiveness of the rehabilitation tax credits. For disaster relief incentives, we identified peer reviewed articles that made potentially useful qualitative points, but the articles did not use rigorous or empirical methods to examine effectiveness. See the bibliography for a listing of the studies we reviewed in detail. We conducted this performance audit from January 2011 through February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Budget function(s); expiration date (if applicable):
Empowerment Zones and Renewal Communities (EZ/RC): 8/10/1993 (EZ); 12/21/2000 (RC)
Low-Income Housing Tax Credit (LIHTC)
20 percent credit for rehabilitation of historic structures: environment (Treasury); commerce and housing (JCT)
10 percent credit for rehabilitation of structures (other than historic): N/A; community and regional development (Treasury); commerce and housing (JCT)
12/31/2009; environment (Treasury); commerce and housing (JCT)
N/A; community and regional development (Treasury); Transportation (JCT)
Exclusion of interest on bonds for water, sewage, and hazardous waste facilities: environment (Treasury); community and regional development (JCT); 6/28/1968 (water and sewage facilities); 10/22/1986 (hazardous waste facilities)
Credit for holders of qualified zone academy bonds (QZAB)
Exclusion of interest on public purpose state and local bonds
Build America Bonds: $1,850; general purpose fiscal assistance (Treasury); community and regional development (JCT)
N/A: Not applicable. The EZ and RC programs offered packages of tax incentives in specific designated communities. Appendix IV lists seven EZ and six RC tax incentives. JCT indicated a revenue loss of less than $50 million. The exclusion of interest on public-purpose state and local bonds has been in effect, in one form or another, since the enactment of the Revenue Act of 1913, ch. 16, 38 Stat. 114. JCT indicated a revenue loss of less than $50 million in fiscal year 2010. JCT did not quantify revenue losses for this tax expenditure.
See Appendix VI for tax provisions and special rules available for disaster relief and recovery for specific presidentially declared disaster areas.
Low-Income Housing Tax Credit
Exclusion of interest on rental housing bonds
Rehabilitation of older structures, subtotal:
20 percent credit for rehabilitation of historic structures
10 percent credit for rehabilitation of structures (other than historic)
Includes both Recovery Zone Economic Development Bonds and Recovery Zone Facility Bonds. We did not sum total costs of disaster package tax expenditures listed by JCT to avoid double counting estimated revenue losses for Treasury tax expenditures we identified as promoting community development. Total includes $190 million in revenue losses and $10 million in outlays for fiscal year 2010.
Empowerment Zones and Renewal Communities (EZ/RC)
Businesses in designated Empowerment Zones (EZ) or Renewal Communities (RC) are eligible to claim various tax incentives, listed below. These incentives may help reduce unemployment, generate economic growth, and stimulate community development and business activity. 30 urban EZs, 10 rural EZs, 28 urban RCs, and 12 rural RCs are located throughout the United States. These areas consist of Census tracts that are economically depressed and meet statutory or regulatory requirements (based on 1990 Census data) for (1) poverty level, (2) overall unemployment, (3) total population, and (4) maximum required area of EZs or RCs. Additionally, the boundaries of RCs were expanded based on 2000 Census data. The eligibility requirements differed by round, by program, and between urban and rural nominees; for example, round I urban EZs (selected in 1993) were selected using 6 indicators of general distress, including incidence of crime and narcotics use and amount of abandoned housing, while urban and rural ECs (selected in 2000) were selected using 17 indicators, including number of persons on welfare and high school dropout rates.
Employment credit (EZ/RC) Businesses may claim an annual tax credit of up to $3,000 or $1,500 for each employee living and working for the employer in an EZ or RC area, respectively. Businesses in EZs and RCs, and employees living and working for the employer in EZs or RCs. Businesses may claim a tax credit of up to $2,400 for each new employee age 18 to 39 living in an EZ/RC, or up to $1,200 for a youth summer hire ages 16 or 17 living in an EZ or RC. Businesses in EZs and RCs, and employees living and working for the employer in EZs or RCs aged 18-39, or youth summer hires ages 16 or 17 living in an EZ or RC. New construction and rehabilitation projects in RCs. Businesses may claim an accelerated method of depreciation to recover certain business costs of new or substantially rehabilitated commercial buildings located in an RC; states may allocate up to $12 million annually per RC for the provision. Increased Section 179 deduction (EZ/RC) Businesses may claim an increased deduction of up to the smaller of $35,000 or the cost of eligible property purchases (including equipment and machinery) for businesses in an EZ/RC. Businesses incurring costs for tangible personal property, such as equipment and machinery, for use in EZs or RCs. Description State and local governments can issue tax-exempt bonds to provide loans to qualified businesses to finance construction costs in EZs. State and local government entities can issue up to $60 million for each rural EZ, $130 million for each urban EZ with a population of less than 100,000, and $230 million for each urban EZ with a population greater than or equal to 100,000. These bonds are not subject to state volume caps. Targeted geographies and populations Large business projects tied to the employment of residents in EZs. Rollover of capital gains (EZ) Owners of businesses located in EZs may be able to postpone part or all of the gain from the sale of a qualified EZ asset that they hold for more than 1 year. 
Businesses located in EZs. Increased exclusion of capital gains (EZ) Taxpayers can exclude 60 percent of their gain from the sale of small business stock in a corporation that qualifies as an enterprise zone business. Enterprise zone businesses located in EZs. Exclusion of capital gains (RC) Owners of businesses located in RCs can exclude qualified capital gains from the sale or exchange of a qualified community asset held more than 5 years. Businesses located in RCs. New Markets Tax Credit (NMTC) Investors are eligible to claim a tax credit for investing in certified Community Development Entities (CDE) for 39 percent of the investment over 7 years. CDEs, in turn, invest in qualified low-income community investments such as mixed-use facilities, housing developments, and community facilities, which may contribute to employment in low-income communities. Low-income communities are defined as Census tracts (1) in which the poverty rate is at least 20 percent, or (2) outside a metropolitan area in which the median family income does not exceed 80 percent of median statewide family income or within a metropolitan area in which the median family income does not exceed 80 percent of the greater statewide or metropolitan area median family income. Low-income communities also include certain areas not within Census tracts, tracts with low population, and Census tracts in high-migration rural counties. Description State and local governments issuing Recovery Zone Economic Development Bonds (RZEDB) allow investors to claim a tax credit (equal to 45 percent of the interest rate established between the buyer and the issuer of the bond). States and localities also had the option of receiving a direct payment from the U.S. Treasury of equal value to the tax credit.
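The NMTC’s 39 percent credit described above accrues on a fixed seven-year schedule: 5 percent of the qualified equity investment in each of the first three years and 6 percent in each of the remaining four. The following Python sketch illustrates that arithmetic; the $1 million investment amount is hypothetical.

```python
# Year-by-year New Markets Tax Credit amounts for a qualified equity
# investment: 5% in each of years 1-3 and 6% in each of years 4-7,
# totaling the 39% cumulative credit. Illustrative only.

def nmtc_credits(investment):
    """Return the annual credit amounts over the 7-year credit period."""
    rates = [0.05] * 3 + [0.06] * 4   # statutory seven-year schedule
    return [investment * r for r in rates]

annual = nmtc_credits(1_000_000)      # hypothetical $1 million investment
print(annual)                         # $50,000 in years 1-3, $60,000 in years 4-7
print(sum(annual) / 1_000_000)        # cumulative credit rate, approximately 0.39
```

In practice the investor must hold the equity investment in the CDE over the full seven-year period to claim the complete 39 percent.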
Bond proceeds were to be used to fund (1) capital expenditures paid or incurred with respect to property located in the designated recovery zone (e.g., Empowerment Zones or Renewal Communities); (2) expenditures for public infrastructure and construction of public facilities; and (3) expenditures for job training and educational programs. Individuals and corporations can exclude Recovery Zone Facility Bond (RZFB) interest income from their taxable income. Bond proceeds are used by state and local governments to finance projects pertaining to any trade or business, aside from exceptions listed below. More specifically, RZFBs may be issued for any depreciable property that (1) was constructed, reconstructed, renovated, or acquired after the date of designation of a "recovery zone;" (2) the original use of which occurs in the recovery zone; and (3) substantially all of the use of the property is in the active conduct of a "qualified business," which is defined to include any trade or business except for residential rental facilities or other specifically listed projects under Internal Revenue Code 144(c)(6)(B), including golf courses, massage parlors, and gambling facilities. Targeted geographies and populations RZEDBs and RZFBs target any area designated a "recovery zone," including (1) areas having significant poverty, unemployment, rate of home foreclosures, or general distress; (2) areas that are economically distressed by reason of the closure or realignment of a military installation pursuant to the Defense Base Closure and Realignment Act of 1990; or (3) any area for which an Empowerment Zone or Renewal Community was in effect as of February 17, 2009. Indian reservations. Investors purchasing these bonds, a temporary category of tax-exempt bonds, could exclude that interest income from their taxable income. Indian tribal governments were allowed greater flexibility to use the bonds to finance economic development projects, which in turn were to promote development on Indian reservations.
Previously, Indian tribal governments could only issue tax-exempt bonds for essential government services. Description Businesses on Indian reservations are eligible to claim a tax credit for employing Indian tribal members and their spouses. The credit is for 20 percent of the first $20,000 in wages and health benefits paid to tribal members and spouses. This credit is intended to provide businesses with an incentive to hire certain individuals living on or near an Indian reservation. Targeted geographies and populations Businesses on Indian reservations, and Indian tribal members and spouses. Low-Income Housing Tax Credit (LIHTC) State housing finance agencies (HFA) award the tax credits to owners of qualified rental properties who reserve all or a portion of their units for occupancy for low-income tenants. Once awarded LIHTCs, project owners typically attempt to obtain funding for their projects by attracting third-party investors that contribute equity to the projects. These investors can then claim the tax credits. This arrangement of providing LIHTCs in return for an equity investment is generally referred to as “selling” the tax credits. The credit is claimed over a 10-year period, but a project must comply with LIHTC requirements for 15 years. A 9 percent tax credit—intended to subsidize 70 percent of the qualified basis in present value terms—is available for the costs for new construction or substantial rehabilitation projects not otherwise subsidized by the federal government. An approximately 4 percent tax credit—intended to subsidize about 30 percent of the qualified basis in present value terms—is available for the acquisition costs for existing buildings. The 4 percent credit is also used for housing financed with tax-exempt rental housing bonds. The low-income housing tax credit program is intended to stimulate the production of affordable rental housing nationwide for low-income households. 
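The "9 percent" and "approximately 4 percent" LIHTC rates above are calibrated so that the present value of the ten annual credit amounts equals the intended subsidy share (70 percent or 30 percent) of a project’s qualified basis. The Python sketch below shows that relationship under a simplifying assumption of a single flat discount rate; Treasury sets the actual applicable percentages monthly under a more specific discounting formula.

```python
# Simplified LIHTC rate calibration: solve for the annual credit rate
# whose 10-year stream of equal credits has a present value equal to a
# target share of qualified basis (70% or 30%). The flat 5% discount
# rate is an illustrative assumption, not Treasury's actual convention.

def credit_rate(target_pv_share, discount_rate, years=10):
    """Annual credit rate such that PV of the credit stream = target share of basis."""
    annuity_factor = (1 - (1 + discount_rate) ** -years) / discount_rate
    return target_pv_share / annuity_factor

print(round(100 * credit_rate(0.70, 0.05), 2))  # roughly 9 (the "9 percent" credit)
print(round(100 * credit_rate(0.30, 0.05), 2))  # roughly 4 (the "4 percent" credit)
```

Under this simplification, a project with $1 million in qualified basis at the 70 percent subsidy target would generate roughly $90,000 in credits each year over the ten-year credit period.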
Households with income at or below 60 percent of an area’s median gross income (AMGI). Qualified Census tracts and difficult development areas are eligible for additional credits. In a qualified Census tract, 50 percent or more of the households have incomes of less than 60 percent of the area’s median income. In a difficult development area, construction, land, and utility costs are high relative to the area’s median income. Description Building owners and private investors may qualify to claim a 20 percent tax credit for costs to substantially rehabilitate buildings that are on the National Register of Historic Places or are otherwise certified as historic by the National Park Service (NPS). To be eligible for the credit, buildings must be used for offices; rental housing; or commercial, industrial, or agricultural enterprises. Building owners must hold the building for 5 years after completing the rehabilitation or pay back at least a portion of the credit. The credit is intended to attract private investment to the historic cores of cities and towns. The credit is also intended to generate jobs, enhance property values, and augment revenues for state and local governments through increased property, business and income taxes. Targeted geographies and populations Certified historic buildings either listed individually in the National Register of Historic Places, or located in a registered historic district and certified by NPS as contributing to the historic significance of that district. 10 percent credit for rehabilitation of structures (other than historic) Individuals or corporations may claim a 10 percent tax credit for costs to substantially rehabilitate nonhistoric, nonresidential buildings placed into service before 1936. These structures must retain specified proportions of the buildings’ external and internal walls and internal structural framework. 
To be eligible for the credit, buildings must be used for offices or commercial, industrial, or agricultural enterprises. Qualified spending must exceed the greater of $5,000 or the adjusted basis (cost less depreciation taken) of the building spent in any 24-month period. The credit is intended to attract private investment to the historic cores of cities and towns. The credit is also intended to generate jobs, enhance property values, and augment revenues for state and local governments through increased property, business and income taxes. Nonresidential buildings placed into service before 1936; especially those located in older neighborhoods and central cities. Tax-exempt organizations may exclude gains or losses from the unrelated business income tax when they acquire and sell brownfield properties on which there has been an actual or threatened release of certain hazardous substances. This exclusion reduces the total cost of remediating environmentally damaged property and may attract the capital and enterprises needed to rebuild and redevelop polluted sites. Environmentally contaminated sites identified as brownfields held for use in a trade or business on which there has been an actual or threatened release or disposal of certain hazardous substances. The exclusion does not target specific geographies or populations. Description Firms may deduct expenses related to controlling or abating hazardous substances in a qualified brownfield property. This deduction subsidizes environmental cleanup and may help develop and revitalize urban and rural areas depressed from environmental contamination. Targeted geographies and populations Environmentally contaminated sites identified as brownfields held for use in a trade or business on which there has been an actual or threatened release or disposal of certain hazardous substances. The deduction does not target specific geographies or populations. 
Individuals and corporations can exclude private activity bond interest income from their taxable income. Bond proceeds are used by state and local governments to finance the construction of multifamily residential rental housing units for low- and moderate-income families. Low-income housing construction partly financed with the tax-exempt bonds may be used with the 4 percent low-income housing tax credit. Households with incomes at or below 60 percent of an area’s median gross income (AMGI). Individuals and corporations can exclude private activity bond interest income from their taxable income. Bond proceeds are used by state and local governments to finance the construction of government-owned airports, docks, and wharves; mass commuting facilities such as bus depots and subway stations; and high-speed rail facilities and government-owned sport and convention facilities. Infrastructure such as airports, docks, wharves, mass commuting facilities, and intercity rail facilities. The bond provision does not target specific geographies or populations. Individuals and corporations can exclude private activity bond interest income from their taxable income. Bond proceeds are used by state and local governments to finance the construction of water, sewage, and hazardous waste facilities. Infrastructure such as water treatment plants, sewer systems, and hazardous waste facilities; the bond provision does not target specific geographies or populations. Credit for holders of qualified zone academy bonds (QZAB) Description Banks, insurance companies, and other lending corporations that purchase qualified zone academy bonds are eligible to claim a tax credit equal to the dollar value of their bonds multiplied by a Treasury-set credit rate. Alternatively, issuers had the option for qualified zone academies to receive a direct payment from the Treasury of equal value to the tax credit.
School districts with qualified zone academies issue the bonds and use at least 95 percent of the bond proceeds to renovate facilities, provide equipment, develop course materials, or train personnel in such academies. Business or nonprofit partners must also provide at least a 10 percent match of QZAB funds, either in cash or in-kind donations, to qualified zone academies. The bond program helps school districts reduce the burden of financing school renovations and repairs. Targeted geographies and populations Public schools below the college level that (1) are located in an Empowerment Zone, Enterprise Community, or Renewal Community, or (2) have at least 35 percent of their student body eligible for free or reduced-cost lunches. Individuals and corporations can exclude governmental bond interest income from their taxable income. State and local governments generally use bond proceeds to build capital facilities such as highways, schools, and government buildings. Capital facilities owned and operated by governmental entities that serve the public interest. The bond provision does not target specific geographies or populations. Individuals and corporations could claim a tax credit equal to 35 percent of the interest payable on the bond at the rate established between the buyer and the issuer. State and local governments issuing BABs also had the option of receiving a direct payment from the Treasury of equal value to the tax credit. Bond proceeds were intended to be used for stimulating development of public infrastructure in communities, as well as to aid state and local governments. If issuers chose to receive a direct payment, then they had to use bond proceeds for capital expenditures. No specific geographies or populations are targeted. Areas of Lower Manhattan affected by terrorist attacks occurring on September 11, 2001.
Hurricane Katrina disaster area (consisting of the states of Alabama, Florida, Louisiana, Mississippi), including core disaster areas determined by the President to warrant individual or individual and public assistance from the federal government following Hurricane Katrina in August 2005. Gulf Opportunity Zone (GO Zone) Counties and parishes in Alabama, Florida, Louisiana, Mississippi and Texas that warranted additional, long-term federal assistance following Hurricanes Katrina, Rita and Wilma in 2005 were designated as Katrina, Rita and/or Wilma GO Zones. Individuals and corporations affected by the September 11, 2001, terrorist attacks were eligible for seven tax provisions. These provisions included tax-exempt bonds targeted toward reconstruction and renovation; a special depreciation allowance for certain property that was damaged or destroyed; and a tax credit for businesses to hire and retain employees in the New York Liberty Zone. Individuals and corporations affected by Hurricanes Katrina, Rita, and Wilma, which struck between August and October 2005, were eligible to claim 33 GO Zone tax provisions. These provisions include tax-exempt bond financing, expensing for certain clean-up and demolition costs, and additional allocations of the New Markets Tax Credit for investments that served the GO Zone. Twenty-four counties in Kansas affected by storms and tornadoes that began on May 4, 2007. Description Individuals and corporations affected by severe storms, tornadoes, or flooding in 10 states from May 20-July 31, 2008 were eligible for a package of 26 tax benefits, including tax-exempt bond financing, increased rehabilitation tax credits for damaged or destroyed structures, and suspensions of limitations on claiming personal casualty losses.
Qualified small or farming businesses affected by disasters in federally declared disaster areas are eligible to claim a net operating loss for up to 3 years after the loss was incurred, instead of the usual 2 years generally permitted. This provision may allow small and farming businesses in communities declared disaster areas to recoup a portion of their losses following a disaster. Targeted geographies and populations Selected counties in 10 states affected by tornadoes, severe storms and flooding occurring from May 20-July 31, 2008. Individuals and businesses located in any geography declared a disaster area in the United States during tax years 2008 and 2009. Qualified small businesses and farming businesses located in any federally declared disaster area. Qualified small businesses are sole proprietorships or partnerships with average annual gross receipts (reduced by returns and allowances) of $5 million or less during the 3-year period ending with the tax year of the net operating loss. For more information on the bond financing by Indian tribal governments, see GAO, Federal Tax Policy: Information on Selected Capital Facilities Related to the Essential Governmental Function Test, GAO-06-1082 (Washington, D.C.: Sept. 13, 2006) and U.S. Department of the Treasury, Report and Recommendations to Congress regarding Tribal Economic Development Bond provision under Section 7871 of the Internal Revenue Code (Washington, D.C.: Dec. 19, 2011). Volume cap or other allocation limits? Involves administration by a federal agency outside IRS? Involves administration by nonfederal entity? Empowerment Zones and Renewal Communities (EZ/RC) Varied. Five EZ and four RC tax incentives did not have any volume caps or allocation limits. Yes; HUD oversaw EZ programs in urban areas, and the USDA oversaw EZ programs in rural areas. HUD is responsible for outreach efforts and serves as a promoter for EZs and RCs.
HUD and IRS established a partnership regarding the EZ/RC tax incentives, where both HUD and IRS provide representation at workshops and conferences. Yes; state and local governments nominate communities for EZ and RC designation. Nominated EZ communities had to submit a strategic plan showing how they would meet key program principles, while nominated RCs had to submit a written “course of action” with commitments to carry out specific legislatively mandated activities. Limit of up to an annual total of $12 million per RC. No; IRS has sole federal responsibility for the administration of the CRD program. HUD collected data from local administrators used for commercial projects in RCs. Yes; state governments allocate CRD authority to eligible businesses engaged in commercial projects within RCs. Limits on issuing EZ facility bond volume were up to $60 million for each rural EZ, up to $130 million for each urban EZ with a population of less than 100,000, and $230 million for each urban EZ with a population greater than or equal to 100,000. No; IRS has sole federal responsibility for the administration of EZ facility bond program. HUD collected information from local administrators of EZs on the use of facility bonds used for construction projects in EZs. Yes; state and local governments issue EZ facility bonds to finance construction costs. Tax expenditure New Markets Tax Credit (NMTC) Yes; the maximum amount of annual investment eligible for NMTCs was $3.5 billion each year in calendar years 2010 and 2011. Volume cap or other allocation limits? Involves administration by a federal agency outside IRS? Yes; the Treasury Community Development Financial Institutions (CDFI) Fund certifies organizations as community development entities (CDE), CDFI Fund also provides allocations of NMTCs to CDEs through a competitive process. 
The CDFI Fund is responsible for monitoring CDEs to ensure that CDEs are compliant with their allocation agreements through the New Markets Compliance Monitoring System and, on a more limited basis, by making site visits to selected CDEs. The CDFI Fund also provides IRS with access to CDFI data for monitoring CDEs’ compliance with NMTC laws and regulations. Involves administration by nonfederal entity? Yes; once a CDE receives an allocation of tax credits, the CDE can offer the tax credits to investors, who in turn acquire stock or a capital interest in the CDE. The investor can gain a potential return for a “qualified equity investment” in the CDE. In return for providing the tax credit to the investor, the CDE receives proceeds from the offer and must invest “substantially all” of such proceeds into qualified low-income community investments. Yes; the Recovery Zone Economic Development Bond (RZEDB) and Recovery Zone Facility Bond (RZFB) programs had national volume caps of $10 billion and $15 billion, respectively. Yes; Treasury determined the amount of RZEDB and RZFB volume cap allocations received by each state and the District of Columbia based on declines in employment levels for each state and the District during 2008 relative to declines in national employment levels during the same period. Yes; each state was responsible for allocating shares of RZEDB and RZFB volume caps to counties and large municipalities based on declines in employment levels for such areas during 2008 relative to declines in employment levels for all counties and municipalities in such states during the same period. State and local governments issued RZEDBs, and had the option of allowing investors to claim a tax credit for the bonds. States and localities also had the option of receiving a direct payment from the Treasury of equal value to the tax credit. Volume cap or other allocation limits? Yes; the bond program had a $2 billion national volume cap. 
Involves administration by a federal agency outside IRS? Yes; Treasury allocated bond capacity to Indian tribal governments in consultation with the Secretary of Interior, and the Department of Interior (Interior) maintains updated lists of Indian tribal entities that are eligible to apply for allocations of bond volume. Interior may also issue letters to Indian tribal entities indicating federal recognition of such entities in order to demonstrate eligibility for the bond program. Involves administration by nonfederal entity? Yes; Indian tribal governments applied for Tribal Economic Development Bonds, issued the bonds, and used proceeds from bond sales to finance economic development projects or nonessential governmental activities. Indian tribal governments had the option of allowing investors to claim a tax credit for the bonds. Indian tribal governments also had the option of receiving a direct payment from the Treasury of equal value to the tax credit. Low-Income Housing Tax Credit (LIHTC) Yes; in 2010, the allocation limit was the greater of $2.10 per capita or $2.43 million for each state, U.S. territory, and the District of Columbia. The per capita amount is subject to cost of living adjustments. No; the IRS has sole federal responsibility for the administration of the LIHTC program. However, the program is closely coordinated with HUD housing programs for the computation of the area median gross income (AMGI) used to determine household eligibility and maximum rents, as well as the definition of income. The IRS also uses HUD’s Uniform Physical Condition Standards to determine whether the low-income housing is suitable for occupancy. HUD also maintains a LIHTC database with information on the project address, number of units and low-income units, number of bedrooms, year the credit was allocated, year the project was placed in service, whether the project was new construction or rehabilitation, type of credit provided, and other sources of project financing.
Yes; state housing finance agencies (HFA) award LIHTCs to owners of qualified low-income housing projects based on each state’s qualified allocation plan, which generally establishes a state’s selection criteria for how its LIHTCs will be awarded. Additionally, state HFAs monitor LIHTC properties for compliance with Internal Revenue Code requirements, such as rent ceilings and income limits for tenants, and report noncompliance to the IRS. Involves administration by a federal agency outside IRS? Yes; the Secretary of Interior sets Standards of Rehabilitation for claiming the tax credit. Within Interior, NPS maintains a National Register of Historic Places; approves applications for rehabilitation projects proposing use of the 20 percent rehabilitation tax credit; and certifies whether completed projects meet the Secretary’s standards and are eligible for the tax credit. NPS may inspect a rehabilitated property at any time during the five-year period following certification of rehabilitation for claiming the 20 percent preservation tax credit, and NPS may revoke certification if work was not done according to standards set by the agency. NPS also notifies the IRS of such revocations or dispositions so the tax credit may be recaptured. Involves administration by nonfederal entity? Yes; state historic preservation offices (SHPO) review applications and forward recommendations for historic designation of structures to NPS, provide program information and technical assistance to applicants, and conduct site visits. SHPOs may also inspect a rehabilitated property at any time during a five-year period following completion of a rehabilitation project using the tax credit. 10 percent credit for rehabilitation of structures (other than historic) Yes; NPS determines whether buildings in historic districts do not contribute to such districts and, consequently, are not deemed to be historic structures.
Such decertification is required before owners of such structures can claim the 10 percent tax credit. Yes; SHPOs review decertification applications, forward recommendations to NPS, and provide program information and technical assistance to applicants. Yes; EPA maintains a National Priority List of properties; such listed properties are ineligible for the tax incentive. Yes; state environmental agencies certify brownfield properties on which there has been an actual or threatened release or disposal of certain hazardous substances. Following certification, taxpayers may incur eligible remediation expenditures and claim the tax provision. Involves administration by a federal agency outside IRS? Yes; EPA maintains a National Priority List of properties; such listed properties are ineligible for the tax incentive. Involves administration by nonfederal entity? Yes; state environmental agencies certify brownfield properties on which there has been an actual or threatened release or disposal of certain hazardous substances. Following certification, site owners may claim the tax deduction, including for some expenditures incurred from prior tax years. Yes, the bond provision is subject to the private activity bond annual volume cap for each state. Yes; state and local governments, typically housing finance agencies, may issue bonds and use proceeds from bond sales to finance the construction of multifamily residential rental housing units for low- and moderate-income families. Varied; bonds for the construction of mass commuting facilities, and 25 percent of bond issues for privately-owned intercity rail facilities, are included in the private activity bond annual state volume cap (government-owned facilities are exempted). Yes; state and local governments may issue bonds, and use proceeds from bond sales to finance construction of airports, docks, wharves, mass commuting facilities and intercity rail facilities.
Yes, the bond provisions are subject to the private activity bond annual volume cap for each state. Yes; state and local governments may issue bonds, and then use proceeds from bond sales to finance capital improvements for water, sewer and hazardous waste facilities. Tax expenditure Credit for holders of qualified zone academy bonds (QZAB) Volume cap or other allocation limits? Yes; the bond provision has national volume caps of $1.4 billion in 2010, and $400 million in 2011. Involves administration by a federal agency outside IRS? Yes; Treasury determines the credit rate of QZABs and allocates shares of QZAB volume to state education agencies on the basis of the states’ respective populations of individuals below the poverty line (as defined by OMB). Involves administration by nonfederal entity? Yes; state education agencies determine the share of QZAB volume allocated to qualified zone academies, and issue QZABs following approval by local education agencies. Local education agencies issue QZABs after applying for and obtaining permission from states. Business or nonprofit partners provide at least a 10 percent match of QZAB funds, either in cash or in-kind donations, to qualified zone academies. Exclusion of interest on public purpose state and local bonds Build America Bonds (BAB) Yes; state and local governments may issue bonds, and then use proceeds from bond sales to finance eligible projects—primarily public infrastructure projects such as highways, schools, and government buildings. Volume cap or other allocation limits? Involves administration by a federal agency outside IRS? Involves administration by nonfederal entity? Varied. Authority to designate up to $8 billion in tax-exempt private activity bonds (New York Liberty bonds) and $9 billion in advance refunding bonds.
Yes; the Governor of the State of New York and the Mayor of New York City were allowed to issue tax-exempt New York Liberty bonds, and use proceeds to finance reconstruction and renovation projects within the New York Liberty Zone. The Governor and Mayor were allowed to issue advance refunding bonds to pay principal, interest, or redemption price on certain prior issues of bonds issued for facilities located in New York City (and certain water facilities located outside of New York City). Katrina Emergency Act Gulf Opportunity Zone (GO Zone) Varied. Multiple provisions within the tax expenditure package have volume caps or other revenue loss limitations. Varied; multiple provisions within the tax expenditure package involved administration by federal agencies besides IRS. Varied; multiple provisions within the tax expenditure package involved administration by state and local governments and other entities. The maximum amount of advance refunding for certain governmental and qualified 501(c)(3) bonds that may have been issued was capped at $4.5 billion in the case of Louisiana, $2.25 billion in the case of Mississippi, and $1.125 billion in the case of Alabama. State and local governments in the GO Zone—Alabama, Louisiana, and Mississippi—issued advance refunding bonds. Gulf Tax Credit Bonds had a volume cap of $200 million for Louisiana, $100 million for Mississippi, and $50 million for Alabama. Yes; Treasury determines the credit rate of Gulf Tax Credit Bonds. State and local governments in the GO Zone—Alabama, Louisiana, and Mississippi—issued Gulf Tax Credit Bonds to help pay principal, interest, and premiums on outstanding state and local government bonds. Volume cap or other allocation limits? The maximum aggregate face amount of GO Zone Bonds that may have been issued in Alabama, Louisiana or Mississippi was capped at $2,500 multiplied by the population of the respective state within the GO Zone; no other states were eligible for tax-exempt bond financing.
Involves administration by nonfederal entity? State and local governments in the GO Zone—Alabama, Louisiana, and Mississippi—issued bonds, though state governments approved projects for bond financing. Increased credit cap and other modified provisions for use of the Low-Income Housing Tax Credit (LIHTC) A special allocation of the LIHTC was issued for each of three years (2006, 2007 and 2008) to each of the States within the GO Zone. Each year’s special allocation was capped at $18.00 multiplied by the population of the respective state in the GO Zone. In addition, the otherwise applicable LIHTC ceiling amount was increased for Florida and Texas by $3,500,000 per State. See above description of the LIHTC regarding the involvement of state housing finance agencies (HFA). An additional allocation of the New Markets Tax Credit (NMTC) in amounts equal to $300 million for 2005 and 2006, and $400 million for 2007, was to be allocated among qualified community development entities (CDE) to make qualified low-income community investments within the Gulf Opportunity Zone. See above description of the NMTC regarding involvement of the Community Development Financial Institutions (CDFI) Fund. See above description of the NMTC regarding the involvement of CDEs. Volume cap or other allocation limits? Varied. Multiple provisions within the tax expenditure package have volume caps or other revenue loss limitations. Involves administration by a federal agency outside IRS? Varied; multiple provisions within the tax expenditure package involved administration by federal agencies besides IRS. Involves administration by nonfederal entity? Varied; multiple provisions within the tax expenditure package involved administration by state and local governments.
The maximum amount of Midwestern Tax Credit Bonds that may have been issued was capped at: (1) $100 million for any state with an aggregate population located in all Midwest disaster areas within the state of at least 2,000,000; (2) $50 million for any state with an aggregate population located in all Midwest disaster areas within the state of at least 1,000,000 but less than 2,000,000; and (3) $0 for any other state. Yes; Treasury determines the credit rate of Midwestern Tax Credit Bonds. State governments in the Midwest disaster area issued Midwestern tax credit bonds to help pay principal, interest and premiums on outstanding state and local government bonds. The maximum aggregate face amount of Midwestern disaster zone bonds that may have been issued in any state in which a Midwestern disaster area was located, was capped at $1,000 multiplied by the population of the respective state within the Midwestern disaster zone; no other states were eligible for tax- exempt bond financing. State and local governments in the Midwest disaster area issued bonds. Tax expenditure Increased credit cap and other modified provisions for use of the Low-Income Housing Tax Credit (LIHTC) Volume cap or other allocation limits? A special allocation of the LIHTC was issued for each of three years (2008, 2009, and 2010) to any state in which a Midwest disaster area was located. Each year’s special allocation was capped at $8.00 multiplied by the population of the respective state in a Midwest disaster area. Involves administration by nonfederal entity? See above description of the LIHTC regarding the involvement of state housing finance agencies (HFA). Yes; for the provision allowing expensing of environmental remediation costs from disasters, state environmental agencies certify brownfield properties on which there has been an actual or threatened release or disposal of certain hazardous substances as a result of a federally declared disaster. 
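The population-based volume-cap formulas above (the tiered Midwestern Tax Credit Bond caps, the $1,000-per-person disaster zone bond cap, and the $8.00-per-person LIHTC special allocation) can be sketched as below. This is an illustrative sketch, not part of the report; the function names and the example population are hypothetical.

```python
def midwestern_tax_credit_bond_cap(disaster_area_population: int) -> int:
    """Tiered cap described in the report: $100 million for states with at
    least 2,000,000 people in Midwest disaster areas; $50 million for at
    least 1,000,000 but fewer than 2,000,000; $0 otherwise."""
    if disaster_area_population >= 2_000_000:
        return 100_000_000
    if disaster_area_population >= 1_000_000:
        return 50_000_000
    return 0

def midwestern_disaster_zone_bond_cap(zone_population: int) -> int:
    """Face-amount cap: $1,000 multiplied by the state's population
    within the Midwestern disaster zone."""
    return 1_000 * zone_population

def lihtc_special_allocation(area_population: int) -> float:
    """Annual special LIHTC allocation (2008-2010): $8.00 multiplied by the
    state's population in a Midwest disaster area."""
    return 8.00 * area_population

# Hypothetical state with 1.5 million residents in the disaster area:
assert midwestern_tax_credit_bond_cap(1_500_000) == 50_000_000
assert midwestern_disaster_zone_bond_cap(1_500_000) == 1_500_000_000
assert lihtc_special_allocation(1_500_000) == 12_000_000.0
```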
State and local governments had the authority to issue RZEDBs and RZFBs from February 17, 2009 through December 31, 2010. Tribal governments are authorized to issue tax-exempt bonds only if substantially all of the proceeds are used for essential governmental functions or certain manufacturing facilities. Legislation targeted towards the New York Liberty Zone and the Gulf Opportunity Zones (GO Zone) allowed an additional advance refunding to redeem certain prior tax-exempt bond issuances from state and local governments. The provision allowed state and local governments to refund, or refinance, bonds that are not redeemed within 90 days after the refunding bonds are issued. Residential rental property may be financed with tax-exempt facility bonds issued by state and local governments, if the financed project is a “qualified residential rental project” with required ratios of residents with certain income limitations. Under the provision, the operator of a qualified residential rental project may rely on the representations of prospective tenants displaced by reason of certain disasters to determine whether such individuals satisfy the income limitation for a qualified residential rental project. Description Mortgage revenue bonds are tax-exempt bonds issued by state and local governments to make mortgage loans to qualified mortgagors for the purchase, improvement, or rehabilitation of owner-occupied residences, and are typically required to exclusively finance mortgages for “first-time homebuyers.” Qualified mortgage revenue bonds may be issued in targeted disaster areas without a first-time homebuyer financing requirement. Additionally, the permitted amount of qualified home-improvement loans increases from $15,000 to $150,000 for residences in a disaster zone. State and local governments in GO Zones and the Midwest disaster area may have issued tax credit bonds in areas affected by certain disasters.
95 percent of these bonds must be used to (1) pay principal, interest, or premium on outstanding bonds (other than private activity bonds) issued by state and local governments, or (2) make a loan to any political subdivision (e.g., local government) of such state to pay principal, interest, or premium on bonds (other than private activity bonds) issued by such political subdivision. These bonds differed from tax-exempt bonds in that rather than receiving tax-exempt interest payments, bondholders were entitled to a federal tax credit equal to a certain percentage of their investment. Description In certain disaster areas, tax-exempt bonds for qualified private activities may have been issued and were not restricted by aggregate annual state private activity bond limits. These bonds allow state and local governments to finance the construction or rehabilitation of properties following a disaster. Treasury named Series I inflation-indexed savings bonds purchased through financial institutions as “Gulf Coast Recovery Bonds” from March 29-December 31, 2006, in order to encourage public support for recovery and rebuilding efforts in areas devastated by Hurricanes Katrina, Rita, and Wilma. Proceeds from the sale of the bonds were not specifically designated for hurricane relief and recovery efforts. The provision provided a temporary tax credit of 30 percent to qualified employers for the value of employer-provided lodging to qualified employees affected by certain disasters. The amount taken as a credit was not deductible by the employer. Certain disaster relief tax packages included a credit of 40 percent of the qualified wages (up to a maximum of $6,000 in qualified wages per employee) paid by an eligible employer that conducted business in a disaster zone and whose operations were rendered inoperable by the disaster.
Description For 2005, the Hope Scholarship Credit rate was 100 percent on the first $1,000 of qualified tuition and related expenses, and 50 percent on the next $1,000 of qualified tuition and related expenses. For 2005, the Hope credit was temporarily increased for students attending eligible educational institutions in the GO Zone to 100 percent of the first $2,000 in qualified tuition and related expenses and 50 percent of the next $2,000 of qualified tuition and related expenses, for a maximum credit of $3,000 per student. For 2006, this provision increased the tax credit again to 100 percent of the first $2,200 of qualified tuition and related expenses (instead of $1,100 under standard law in 2006), and 50 percent of the next $2,200 of qualified tuition and related expenses (instead of $1,100) for a maximum credit of $3,300 per student (instead of $1,650). For 2008 and 2009, the Hope scholarship credit was extended to students attending eligible educational institutions in the Midwestern disaster area, based on increased credit rates enacted in 2006. Individual taxpayers are typically allowed to claim a nonrefundable credit, the Lifetime Learning Credit, equal to 20 percent of qualified tuition and related expenses of up to $10,000 (resulting in a total credit of up to $2,000) incurred during the taxable year on behalf of the taxpayer, the taxpayer’s spouse, or any dependents. The Lifetime Learning Credit rate was temporarily increased from 20 percent to 40 percent for students attending institutions in certain disaster areas. Description The provision increased from 20 to 26 percent, and from 10 to 13 percent, respectively, the preservation credits with respect to any certified historic structure or qualified rehabilitated building located in certain disaster areas, provided the qualified rehabilitation expenditures with respect to such buildings or structures were incurred during an established period of time following the disaster. 
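The tiered Hope Scholarship Credit computation described above follows one pattern with different dollar tiers: 100 percent of qualified expenses up to a first tier, plus 50 percent of expenses up to a second tier. A minimal sketch follows; it is illustrative only, and the $1,500 standard-law maximum for 2005 is inferred from the stated rates rather than quoted from the report.

```python
def hope_credit(qualified_expenses: float, tier1: float, tier2: float) -> float:
    """Hope Scholarship Credit: 100 percent of qualified tuition and related
    expenses up to the first tier, plus 50 percent of expenses up to the
    second tier."""
    first = min(qualified_expenses, tier1)
    second = min(max(qualified_expenses - tier1, 0.0), tier2)
    return first + 0.5 * second

# 2005 standard law: 100% of first $1,000, 50% of next $1,000 (max $1,500).
assert hope_credit(5_000, 1_000, 1_000) == 1_500.0
# 2005 GO Zone: 100% of first $2,000, 50% of next $2,000 (max $3,000).
assert hope_credit(5_000, 2_000, 2_000) == 3_000.0
# 2006 GO Zone: 100% of first $2,200, 50% of next $2,200 (max $3,300).
assert hope_credit(5_000, 2_200, 2_200) == 3_300.0
```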
The LIHTC cap amount increased for affected states within the GO Zones and the Midwestern disaster area. Also, rules concerning implementation of the LIHTC were modified for the GO Zone; in the case of property placed in service from 2006-2008 in a nonmetropolitan area within the GO Zone, LIHTC income targeting rules are applied by using a national nonmetropolitan median gross income standard instead of the area median gross income standard typically applied to low-income housing projects. The provision allowed an additional allocation of NMTCs in an amount equal to $300 million for 2005 and 2006, and $400 million for 2007, to be allocated among qualified community development entities to make qualified low-income community investments within the Katrina GO Zone. Description Individuals whose principal residences were in certain disaster areas or who were otherwise displaced from their homes by disasters may have elected to calculate their Earned Income Tax Credit and Refundable Child Credit for the taxable year when the disaster occurred using their earned income from the prior taxable year. Employers hiring and retaining individuals who worked in certain disaster areas were eligible to claim up to $2,400 in Work Opportunity Tax Credits per employee (or 40 percent of up to the first $6,000 of wages). Employees in other targeted categories for the tax credit (e.g., qualified veterans or families receiving food stamps) are typically required to provide certification from a designated local agency of their inclusion in such groups on or before they begin work, or their employer provides documentation to said agencies no later than 28 days after the employee begins work. However, employees who worked and/or lived in certain disaster areas do not require certification from such agencies for employers to qualify for the tax credit.
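The per-employee Work Opportunity Tax Credit arithmetic above (40 percent of up to the first $6,000 of wages, for a maximum of $2,400) can be sketched as below. This is an illustrative sketch, not part of the report; the function name and wage figures are hypothetical.

```python
def work_opportunity_credit(qualified_wages: float) -> float:
    """Work Opportunity Tax Credit per employee: 40 percent of up to the
    first $6,000 of qualified wages, so the credit tops out at $2,400."""
    return 0.40 * min(qualified_wages, 6_000.0)

# Wages above the cap yield the $2,400 maximum; lower wages scale down.
assert work_opportunity_credit(8_000) == 2_400.0
assert work_opportunity_credit(5_000) == 2_000.0
```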
Tax provision or special rule Deductions Carryback of net operating losses (NOL) Under present law, a net operating loss (NOL) is, generally, the amount by which a taxpayer’s business deductions exceed its gross income. In general, an NOL may be carried back 2 years and carried over 20 years to offset taxable income in such years. NOLs offset taxable income in the order of the taxable years to which the NOL may be carried. This provision provided a special 5-year carryback period for NOLs to the extent of qualified disaster losses in any presidentially declared disaster area during 2008 and 2009. Individuals and corporations affected by certain disasters may have carried back NOLs, for a period of 5 years, of the sum of the aggregate amount of deductions from such disasters, including deductions for qualified casualty losses; certain moving expenses; certain temporary housing expenses; depreciation deductions with respect to qualified property in disaster areas for the taxable year the property was placed into service; and certain repair expenses resulting from applicable disasters. An NOL of a farming business may have been carried back for 5 years if such loss was attributable to any portion of qualified timber property which was located in the Katrina or Rita GO Zones. Description The provision provided an election for taxpayers who incurred casualty losses attributable to certain disasters with respect to public utility property located in applicable disaster zones. Under the election, such losses may be carried back 5 years immediately preceding the taxable year in which the loss occurred. If the application of this provision resulted in the creation or increase of an NOL for the year in which the casualty loss is taken into account, the NOL may be carried back or carried over as under present law applicable to NOLs for such year.
The provision provided an election for taxpayers to treat any GO Zone public utility casualty loss caused by Hurricane Katrina as a specified liability loss to which the present-law 10-year carryback period applies. The amount of the casualty loss is reduced by the amount of any gain recognized by the taxpayer from involuntary conversions of public utility property (e.g., physical destruction of such property) located in the GO Zone caused by Hurricane Katrina. Taxpayers who elect to use this provision are not eligible to treat the loss as part of the 5-year net operating loss carryback provided under another provision of the GO Zone Act (see 5-year NOL carryback of public utility casualty losses mentioned above). The provision suspended two limitations on personal casualty or theft losses to the extent those losses arise in certain disaster areas and are attributable to such disasters. First, the $100-per-casualty threshold required under present law at the time was waived, so qualifying losses did not need to exceed $100 per casualty or theft. Second, such losses were deductible without regard to whether aggregate net losses exceed 10 percent of a taxpayer’s adjusted gross income, which was standard under present law at the time the disasters took place. The provision treats personal casualty or theft losses from the pertinent disaster as a deduction separate from other casualty losses. Description The provision removed one limitation on personal casualty or theft losses to the extent those losses arise in federally declared disaster areas during 2008 and 2009. More specifically, losses were deductible without regard to whether aggregate net losses exceed 10 percent of a taxpayer’s adjusted gross income, which was standard under present law at the time the disasters took place. The provision treats personal casualty or theft losses from federally declared disasters as a deduction separate from other casualty losses.
However, present law at the time contained a required threshold of $100 for meeting requirements to claim losses, and this provision increases the threshold to $500. These rules are in effect for all federally declared disaster areas in 2008 and 2009 aside from those areas declared “Midwestern disaster areas” from flooding, tornadoes, and storms in 2008. The portion of the provision increasing the limitation per casualty to $500 only applies to 2009. Under present law, a taxpayer’s deduction for charitable contributions of inventory generally is limited to the taxpayer’s basis (typically cost) in the inventory, or if less, the fair market value of the inventory. Under this provision, a C corporation was eligible to claim an enhanced deduction for qualified book donations. An enhanced deduction is equal to the lesser of (1) basis plus one-half of the item’s appreciation (basis plus one-half of fair market value in excess of basis) or (2) two times basis. Description Under present law, a taxpayer’s deduction for charitable contributions of inventory generally is limited to the taxpayer’s basis (typically cost) in the inventory, or if less, the fair market value of the inventory. Under this provision, any taxpayer, whether or not a C corporation, engaged in a trade or business was eligible to claim an enhanced deduction for donations of food inventory. An enhanced deduction is equal to the lesser of (1) basis plus one-half of the item’s appreciation (i.e., basis plus one-half of fair market value in excess of basis) or (2) two times basis. For taxpayers other than C corporations, the total deduction for donations of food inventory in a taxable year generally may not exceed 10 percent of the taxpayer’s net income for such taxable year from which contributions of apparently wholesome food are made.
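The enhanced-deduction formula above (the lesser of basis plus half the appreciation, or twice basis) reduces to a simple computation. This sketch uses illustrative names and also applies the 10 percent net-income cap noted for taxpayers other than C corporations:

```python
def enhanced_deduction(basis: float, fair_market_value: float) -> float:
    """Lesser of (basis + half the appreciation) or (2 x basis)."""
    appreciation = max(fair_market_value - basis, 0.0)
    return min(basis + 0.5 * appreciation, 2.0 * basis)


def non_c_corp_food_cap(total_deductions: float, net_income: float) -> float:
    """Food-inventory deductions generally capped at 10 percent of net income."""
    return min(total_deductions, 0.10 * net_income)

# Modest appreciation: basis $100, FMV $180 -> min($140, $200) = $140.
# Large appreciation:  basis $100, FMV $400 -> min($250, $200) = $200 (capped).
```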
The provision allowed a taxpayer using a vehicle while donating services to charity for the provision of relief related to certain disasters to compute the charitable mileage deduction using a rate equal to 70 percent of the business mileage rate in effect on the date of the contribution, rather than the charitable standard mileage rate generally in effect under law. The provision allowed for qualified contributions up to the amount by which an individual’s contribution base (adjusted gross income without regard to any NOL carryback) or corporation’s taxable income exceeds the deduction for other charitable contributions. Contributions in excess of this amount are carried over to succeeding taxable years subject to limitations under law. The provision allowed an additional first-year depreciation deduction equal to a percentage of the adjusted basis of qualified property; the percentage varies depending on the disaster area where the property is located, e.g., 30 percent for New York Liberty Zone, 50 percent for GO Zones, Kansas Disaster Zone, and other areas in the U.S. declared disaster areas under national disaster relief. A taxpayer was permitted a deduction for 50 percent of qualified disaster clean-up costs, such as removal of debris or demolition of structures, paid or incurred for an established period of time following certain disasters. Under the provision, a taxpayer may have elected to treat any repair of business-related property affected by presidentially declared disasters, including repairs that are paid or incurred by the taxpayer, as a deduction for the taxable year in which paid or incurred. Description Taxpayers may typically elect to deduct (or “expense”) certain environmental remediation expenditures that would otherwise be chargeable to a capital account, in the year paid or incurred. The deduction applies for both regular and alternative minimum tax purposes.
The expenditure must be incurred in connection with the abatement or control of hazardous substances at a qualified contaminated site. The provision was extended beyond present law for qualified contaminated sites located in the GO Zone and Midwestern disaster zones, as well as federally declared disaster areas in 2008 and 2009. The length of such extensions depended on the applicable disaster zone. Qualified improvements made on leasehold property in the New York Liberty Zone could have been depreciated over a 5-year period using the straight-line method of depreciation, instead of the 39-year period standard under present law. Qualified leasehold property improvements included improvements to nonresidential real property, such as additional walls and plumbing and electrical improvements made to an interior portion of a building. Description In lieu of depreciation, a taxpayer with a sufficiently small amount of annual investment may elect to expense qualified property placed in service for the taxable year under section 179 of the Internal Revenue Code. Taxpayers in certain disaster areas were eligible to increase the maximum dollar amount of Section 179 expensing for qualified property, which is generally defined as depreciable tangible personal property that is purchased for use in the active conduct of a trade or business. Taxpayers in the New York Liberty Zone could deduct an additional amount up to the lesser of $35,000 or the cost of the qualified Section 179 property put into service during the calendar year. Taxpayers in the GO Zone, Kansas Disaster Zone or disaster zones covered under “National Disaster Relief” could deduct an additional amount up to the lesser of $100,000 or the cost of the qualified Section 179 property put into service during the calendar year. 
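The additional Section 179 amounts above follow a single "lesser of" pattern that varies only in the zone cap. A sketch with illustrative names:

```python
def additional_section_179(qualified_property_cost: float,
                           zone_cap: float) -> float:
    """Additional expensing: the lesser of the zone's cap or the property's cost.

    Caps from the provisions above: $35,000 for the New York Liberty Zone;
    $100,000 for the GO Zone, the Kansas Disaster Zone, and zones covered
    under National Disaster Relief.
    """
    return min(zone_cap, qualified_property_cost)

# Liberty Zone with $20,000 of qualified property -> additional $20,000;
# GO Zone with $150,000 of qualified property -> capped at $100,000.
```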
The provision doubled, for certain taxpayers, the present- law expensing limit of $10,000 for reforestation expenditures paid or incurred by such taxpayers for certain periods of time with respect to qualified timber property in the Katrina, Rita and Wilma GO Zones. For example, single taxpayers may have claimed $20,000 instead of $10,000 for eligible reforestation expenditures. Description The Internal Revenue Code allowed an additional first- year depreciation deduction equal to 30 or 50 percent of the adjusted basis of qualified property, including (1) property to which the modified accelerated cost recovery system applies with an applicable recovery period of 20 years or less, (2) water utility property, (3) certain computer software, or (4) qualified leasehold improvement property placed in service by December 31, 2005. Under this provision, the Secretary of Treasury had authority to further extend the placed-in-service date (beyond Dec. 31, 2005), on a case-by-case basis, for up to 1 year for certain property eligible for the December 31, 2005 placed-in-service date under present law. The authority extended only to property placed in service or manufactured in the Katrina, Rita or Wilma GO Zones. In addition, the authority extended only to circumstances in which the taxpayer was unable to meet the December 31, 2005 deadline as a result of Hurricanes Katrina, Rita, and/or Wilma. The provision provided an additional exemption of $500 for each displaced individual of a taxpayer affected by certain disasters. The taxpayer may have claimed the additional exemption for no more than four individuals; thus the maximum additional exemption amount was $2,000. 
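The displaced-individual exemption above is $500 per person for at most four people; a minimal sketch (names are illustrative):

```python
def displaced_individual_exemption(displaced_count: int) -> float:
    """$500 per displaced individual, claimable for no more than four people."""
    PER_PERSON = 500.0
    MAX_INDIVIDUALS = 4
    return PER_PERSON * min(displaced_count, MAX_INDIVIDUALS)

# Three displaced individuals -> $1,500; six -> capped at the $2,000 maximum.
```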
Individuals whose principal residence was located in the Hurricane Katrina core disaster area or certain portions of the Midwestern disaster area on the date that a disaster was declared may generally exclude any nonbusiness debt from gross income, such as a mortgage, that is discharged by an applicable entity on or after the applicable disaster date for an established time period. If the individual’s primary residence was located in the Hurricane Katrina disaster area (outside the core disaster area) or other portions of the Midwestern disaster area, the individual must also have had an economic loss because of the disaster. A taxpayer may have elected not to recognize gain with respect to property that was involuntarily converted, or destroyed, if the taxpayer acquired qualified replacement property within an applicable period, which is typically 2 years. The replacement period for property that was involuntarily converted in certain disaster areas is 5 years after the end of the taxable year in which a gain is realized. Substantially all of the use of the replacement property must be within the affected area. Description The provision provided a temporary income exclusion for the value of in-kind lodging provided for a month to a qualified employee (and the employee’s spouse or dependents) affected by certain disasters by or on behalf of a qualified employer. The amount of the exclusion for any month for which lodging is furnished could not have exceeded $600. The exclusion did not apply for purposes of Social Security and Medicare taxes or unemployment tax. Under the provision, reimbursement by charitable organizations to a volunteer for the costs of using a passenger automobile in providing donated services to charity for relief of certain disasters was excludable from the gross income of the volunteer. The reimbursement was allowed up to an amount that did not exceed the business standard mileage rate prescribed for business use. 
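The volunteer-reimbursement exclusion described above is capped at the business standard mileage rate times the miles driven. This sketch takes the rate as an input because the applicable rate varied by year; the $0.50/mile figure in the example is hypothetical:

```python
def excludable_mileage_reimbursement(reimbursed: float, miles: float,
                                     business_rate_per_mile: float) -> float:
    """Exclusion from gross income, capped at the business mileage rate.

    business_rate_per_mile is supplied by the caller; the historical rates
    in effect for each disaster year are not reproduced here.
    """
    cap = miles * business_rate_per_mile
    return min(reimbursed, cap)

# At a hypothetical $0.50/mile, a $600 reimbursement for 1,000 miles of
# volunteer driving is excludable only up to the $500 cap.
```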
The provision provided an exception to the 10 percent early withdrawal tax in the case of a qualified distribution of up to $100,000 from a qualified retirement plan, such as a 401(k) plan, a 403(b) annuity, or an IRA. Income from a qualified distribution may have been included in income ratably over 3 years, and the amount of a qualified distribution may have been recontributed to an eligible retirement plan within 3 years. Description In general, under the provision, a qualified distribution received from certain retirement plans in order to purchase a home in certain disaster areas may be recontributed to such plans in certain circumstances. The provision applies to an individual who receives a qualified distribution that was to be used to purchase or construct a principal residence in a disaster area, but the residence is not purchased or constructed on account of the disaster. Under this provision, residents whose principal residence was located in designated disaster areas and who suffered economic loss as a result of such disasters may borrow up to $100,000 from their employer plan. In addition to increasing the aggregate plan loan limit from the usual $50,000, the provision also relaxed other requirements relating to plan loans. The provision permits certain retirement plan amendments made pursuant to changes made under Section 1400Q of the Internal Revenue Code, or regulations issued thereunder, to be retroactively effective. In order for this treatment to apply, the plan amendment is required to be made on or before the last day of the first plan year beginning on or after January 1, 2007, or such later date as provided by the Secretary of the Treasury. Governmental plans are given an additional 2 years in which to make required plan amendments. The Secretary of the Treasury was required to provide certain administrative relief to taxpayers affected by certain presidentially declared disasters.
Such relief allows for postponement of actions required by law, such as filing tax returns, paying taxes, or filing a claim for credit or refund of tax, for an applicable period of time following a disaster. The provision authorized the Secretary of the Treasury to make such adjustments in the application of federal tax laws to ensure that taxpayers did not lose any deduction or credit or experience a change of filing status by reason of temporary relocations caused by applicable disasters. Any adjustments made under this provision must ensure that an individual is not taken into account by more than one taxpayer with respect to the same tax benefit. The Katrina Emergency Act package was enacted by the Katrina Emergency Tax Relief Act of 2005 (Pub. L. No. 109-73), and targeted the Hurricane Katrina disaster area (consisting of the states of Alabama, Florida, Louisiana, and Mississippi), including core disaster areas determined by the President to warrant individual or individual and public assistance from the federal government following Hurricane Katrina in August 2005. On enactment, JCT projected total budget effects of $6,109 million for fiscal years 2006 through 2015. The Gulf Opportunity Zone package was enacted by the Gulf Opportunity (GO) Zone Act of 2005 (Pub. L. No. 109-135). Counties and parishes in Alabama, Florida, Louisiana, Mississippi and Texas that warranted additional, long-term federal assistance following Hurricanes Katrina, Rita and Wilma in 2005 were designated as Katrina, Rita and/or Wilma GO Zones. Portions of the Katrina and Rita GO Zones overlapped with counties and parishes eligible for relief under the Katrina Emergency Tax Relief Act.
The Gulf Opportunity Zone tax package also included some nondisaster-related tax provisions: election to treat combat pay as earned income for purposes of the Earned Income Tax Credit; modifications of suspension of interest and penalties where IRS fails to contact taxpayer; authority for undercover operations; disclosure of tax information to facilitate combined employment tax reporting; disclosure of return information regarding terrorist activities; disclosure of return information to carry out contingent repayment of student loans; and various tax technical corrections. On enactment, JCT projected total budget effects of $8,715 million for the disaster provisions for fiscal years 2006 through 2015. The Kansas disaster relief package was enacted by the Food, Conservation, and Energy Act of 2008 (Pub. L. No. 110-246). The Kansas disaster relief package targeted 24 counties in Kansas affected by storms and tornadoes that began on May 4, 2007. On enactment, JCT projected total revenue effects of $63 million for the disaster provisions for fiscal years 2008 through 2018. The Midwest disaster relief package was enacted by the Emergency Economic Stabilization Act of 2008, Energy Improvement and Extension Act of 2008, and Tax Extenders and the Alternative Minimum Tax Relief Act of 2008 (Pub. L. No. 110-343). The Midwest disaster relief package targeted selected counties in 10 states affected by tornadoes, severe storms and flooding occurring from May 20-July 31, 2008. The listed components associated with the Midwest disaster relief package do not include rules outlining IRS reporting requirements for contributions to disaster relief; these rules apply for tax returns due after December 31, 2008. On enactment, JCT projected total revenue effects of $4,576 million for the Midwest disaster provisions for fiscal years 2009 through 2018.
The National disaster relief package was enacted by the Emergency Economic Stabilization Act of 2008, Energy Improvement and Extension Act of 2008, and Tax Extenders and the Alternative Minimum Tax Relief Act of 2008 (Pub. L. No. 110-343). The National disaster relief package targeted individuals and businesses located in any geography declared a disaster area in the United States during tax years 2008 and 2009. Certain provisions of the National Disaster Relief Act of 2008 do not apply to the Midwest disaster areas because the Heartland and Hurricane Ike Disaster Relief Act, part of the same legislation that resulted in the National Disaster Relief Act, provides other tax benefits. On enactment, JCT projected total revenue effects of $8,091 million for fiscal years 2009 through 2018. The number of provisions summed across the six disaster relief packages exceeds the total number of distinct provisions because some tax provisions and special rules were part of more than one disaster package. In March 2011 and more recently in May 2011, we reported on the potential for duplication among 80 federal economic development programs at four agencies—the Departments of Commerce (Commerce), Housing and Urban Development (HUD), and Agriculture (USDA) and the Small Business Administration (SBA). According to the agencies, funding provided for these 80 programs in fiscal year 2010 amounted to $6.2 billion, of which about $2.9 billion was for economic development efforts, largely in the form of grants, loan guarantees, and direct loans. Some of these 80 programs can fund a variety of activities, including such noneconomic development activities as rehabilitating housing and building community parks. Our work as of May 2011 suggested that the design of each of these 80 economic development programs appears to overlap with that of at least one other spending program in terms of the economic development activity that they are authorized to fund, as shown in table 12.
For example, 35 programs can fund infrastructure, and 27 programs can fund commercial buildings. Some of the 80 economic development programs are targeted to economically distressed areas. In February 2012, we reported our findings to date on overlap and fragmentation among 53 economic development programs that support entrepreneurial efforts. Based on a review of the missions and other related program information for these 53 programs, we determined that these programs overlap based not only on their shared purpose of serving entrepreneurs but also on the type of assistance they offer. Much of the overlap and fragmentation among these 53 programs is concentrated among programs that support economically distressed and disadvantaged businesses. In ongoing work that will be published as a separate report, we plan to examine the extent of potential duplication among the 53 programs. In addition to the contact named above, MaryLynn Sergent, Assistant Director; Elizabeth Curda; Jeffrey DeMarco; Edward Nannenhorn; Melanie Papasian; Mark Ryan; and Sabrina Streagle made key contributions to this report. To determine what is known about the effectiveness of selected community development tax expenditures, we conducted a literature review of studies that addressed the following tax expenditure provisions: (1) the Empowerment Zone/Renewal Community tax programs; (2) the New Markets Tax Credit program; (3) tax expenditures available for certain disaster areas; and (4) rehabilitation tax credits, including the 20 percent tax credit for preservation of historic structures and the 10 percent tax credit for the rehabilitation of structures (other than historic). We searched databases, including ProQuest, Google Scholar, and Econlit, for studies through May 2011. We focused on studies that attempted to measure the impact of the selected tax incentives on certain measures of community development, such as the poverty and unemployment rate. Abravanel, Martin D., Nancy M.
Pindus, and Brett Theodos. Evaluating Community and Economic Development Programs: A Literature Review to Inform Evaluation of the New Markets Tax Credit Program. Prepared for the Department of the Treasury Community Development Financial Institutions Fund. The Urban Institute. September 2010. Aprill, Ellen P., and Richard Schmalbeck. “Post-Disaster Tax Legislation: A Series of Unfortunate Events.” Duke Law Journal, vol. 56, no. 1 (2006): 51-100. Bartik, Timothy J. “Bringing Jobs to People: How Federal Policy Can Target Job Creation for Economically Distressed Areas.” Discussion paper prepared for The Hamilton Project (2010). Busso, Matias, Jesse Gregory, and Patrick M. Kline. “Assessing the Incidence and Efficiency of a Prominent Place Based Policy.” National Bureau of Economic Research Working paper no. 16096 (2010). Chernick, Howard and Andrew F. Haughwout. “Tax Policy and the Fiscal Cost of Disasters: NY and 9/11.” National Tax Journal, vol. 59, no. 3 (2006): 561-577. Congressional Research Service. Empowerment Zones, Enterprise Communities, and Renewal Communities: Comparative Overview and Analysis. Washington, D.C.: 2011. Gotham, Kevin F., and Miriam Greenberg. “From 9/11 to 8/29: Post-Disaster Recovery and Rebuilding in New York and New Orleans.” Social Forces, vol. 87, no. 2 (2008): 1039-1062. Ham, John C., Charles Swenson, Ayse Imrohoroglu, and Heonjae Song. “Government Programs Can Improve Local Labor Markets: Evidence from State Enterprise Zones, Federal Empowerment Zones and Federal Enterprise Community.” Journal of Public Economics, vol. 95, no. 7-8 (August 2011): 779-797. Hanson, Andrew. “Utilization of Employment Tax Credits: An Analysis of the Empowerment Zone Wage Tax Credit.” Public Budgeting & Finance, vol. 31, no. 1 (2011): 23-36. Hanson, Andrew and Shawn Rohlin. “The Effect of Location-Based Tax Incentives on Establishment Location and Employment across Industry Sectors.” Public Finance Review, vol. 39, no. 2 (2011): 195-225.
Hebert, Scott, Avis Vidal, Greg Mills, Franklin James, and Debbie Gruenstein. Interim Assessment of the Empowerment Zones and Enterprise Communities (EZ/EC) Program: A Progress Report. A report prepared for the U.S. Department of Housing and Urban Development. November 2001. Jennings, James. “The Empowerment Zone in Boston, Massachusetts, 2000-2009.” Review of Black Political Economy, vol. 38, no. 1 (2011): 63-81. Joint Committee on Taxation. Incentives for Distressed Communities: Empowerment Zones and Renewal Communities (JCX-38-09), October 5, 2009. Kolko, Jed and David Neumark. “Do Some Enterprise Zones Create Jobs?” Journal of Policy Analysis and Management, vol. 29, no. 1 (2010): 5-38. Listokin, David, Michael L. Lahr, Charles Heydt, and David Stanek. Second Annual Report on the Economic Impact of the Federal Historic Tax Credit. A report prepared for the Historic Tax Credit Coalition. May 2011. Rich, Michael J., and Robert P. Stoker. “Rethinking Empowerment: Evidence from Local Empowerment Zone Programs.” Urban Affairs Review, vol. 45, no. 6 (2010): 775-796. Richardson, James A. “Katrina/Rita: The Ultimate Test for Tax Policy.” National Tax Journal, vol. 59, no. 3 (September 2006): 551-560. Schilling, James D., Kerry D. Vandell, Ruslan Koesman, and Zhenguo Lin. “How Tax Credits Have Affected the Rehabilitation of the Boston Office Market.” Journal of Real Estate Research, vol. 28, no. 4 (2006): 321-348. Stead, Meredith M. “Implementing Disaster Relief Through Tax Expenditures: An Assessment of the Katrina Emergency Tax Relief Measures.” New York University Law Review, vol. 81, no. 6 (2006): 2158-2191. Tolan, Patrick E., Jr. “The Flurry of Tax Law Changes Following the 2005 Hurricanes: A Strategy for More Predictable and Equitable Tax Treatment of Victims.” Brooklyn Law Review, vol. 72, no. 3 (2007): 799-870. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap, and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP.
Washington, D.C.: February 28, 2012. Follow-up on 2011 Report: Status of Actions Taken to Reduce Duplication, Overlap, and Fragmentation, Save Tax Dollars, and Enhance Revenue. GAO-12-453SP. Washington, D.C.: February 28, 2012. Managing for Results: Opportunities for Congress to Address Government Performance Issues. GAO-12-215R. Washington, D.C.: December 9, 2011. Economic Development: Efficiency and Effectiveness of Fragmented Programs Are Unclear. GAO-11-872T. Washington, D.C.: July 27, 2011. Efficiency and Effectiveness of Fragmented Economic Development Programs Are Unclear. GAO-11-477R. Washington, D.C.: May 19, 2011. Managing for Results: GPRA Modernization Act Implementation Provides Important Opportunities to Address Government Challenges. GAO-11-617T. Washington, D.C.: May 10, 2011. Performance Measurement and Evaluation: Definitions and Relationships. GAO-11-646SP. Washington, D.C.: May 2, 2011. Indian Issues: Observations on Some Unique Factors that May Affect Economic Activity on Tribal Lands. GAO-11-543T. Washington, D.C.: April 7, 2011. Government Performance: GPRA Modernization Act Provides Opportunities to Help Address Fiscal, Performance, and Management Challenges. GAO-11-466T. Washington, D.C.: March 16, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Recovery Act: Opportunities to Improve Management and Strengthen Accountability over States’ and Localities’ Uses of Funds. GAO-10-999. Washington, D.C.: September 20, 2010. Community Development Block Grants: Entitlement Communities’ and States’ Methods of Distributing Funds Reflect Program Flexibility. GAO-10-1011. Washington, D.C.: September 15, 2010. Revitalization Programs: Empowerment Zones, Enterprise Communities, and Renewal Communities. GAO-10-464R. Washington, D.C.: March 12, 2010.
New Markets Tax Credit: The Credit Helps Fund a Variety of Projects in Low-Income Communities, but Could Be Simplified. GAO-10-334. Washington, D.C.: January 29, 2010. Disaster Recovery: Past Experiences Offer Insights for Recovering from Hurricanes Ike and Gustav and Other Recent Natural Disasters. GAO-08-1120. Washington, D.C.: September 26, 2008. Gulf Opportunity Zone: States Are Allocating Federal Tax Incentives to Finance Low-Income Housing and a Wide Range of Private Facilities. GAO-08-913. Washington, D.C.: July 16, 2008. Tax Expenditures: Available Data Are Insufficient to Determine the Use and Impact of Indian Reservation Depreciation. GAO-08-731. Washington, D.C.: June 26, 2008. Tax Policy: Tax-Exempt Status of Certain Bonds Merits Reconsideration, and Apparent Noncompliance with Issuance Cost Limitations Should Be Addressed. GAO-08-364. Washington, D.C.: February 15, 2008. HUD and Treasury Programs: More Information on Leverage Measures’ Accuracy and Linkage to Program Goals is Needed in Assessing Performance. GAO-08-136. Washington, D.C.: January 18, 2008. 21st Century Challenges: How Performance Budgeting Can Help. GAO-07-1194T. Washington, D.C.: September 20, 2007. Leveraging Federal Funds for Housing, Community, and Economic Development. GAO-07-768R. Washington, D.C.: May 25, 2007. Tax Policy: New Markets Tax Credit Appears to Increase Investment by Investors in Low-Income Communities, but Opportunities Exist to Better Monitor Compliance. GAO-07-296. Washington, D.C.: January 31, 2007. Empowerment Zone and Enterprise Community Program: Improvements Occurred in Communities, but the Effect of the Program is Unclear. GAO-06-727. Washington, D.C.: September 22, 2006. Federal Tax Policy: Information on Selected Capital Facilities Related to the Essential Governmental Function Test. GAO-06-1082. Washington, D.C.: September 13, 2006. Rural Economic Development: More Assurance Is Needed That Grant Funding Information Is Accurately Reported. GAO-06-294. 
Washington, D.C.: February 24, 2006. Telecommunications: Challenges to Assessing and Improving Telecommunications for Native Americans on Tribal Lands. GAO-06-189. Washington, D.C.: January 11, 2006. Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005. Government Performance and Accountability: Tax Expenditures Represent a Substantial Federal Commitment and Need to Be Reexamined. GAO-05-690. Washington, D.C.: September 23, 2005. A Glossary of Terms Used in the Federal Budget Process. GAO-05-734SP. Washington, D.C.: September 2005. Community Development: Federal Revitalization Programs Are Being Implemented, but Data on the Use of Tax Benefits Are Limited. GAO-04-306. Washington, D.C.: March 5, 2004. New Markets Tax Credit Program: Progress Made in Implementation, but Further Actions Needed to Monitor Compliance. GAO-04-326. Washington, D.C.: January 30, 2004. September 11: Overview of Federal Disaster Assistance to the New York City Area. GAO-04-72. Washington, D.C.: October 31, 2003. Tax Administration: Information Is Not Available to Determine Whether $5 Billion in Liberty Zone Tax Benefits Will Be Realized. GAO-03-1102. Washington, D.C.: September 30, 2003. Economic Development: Multiple Federal Programs Fund Similar Economic Development Activities. GAO/RCED/GGD-00-220. Washington, D.C.: September 29, 2000. Tax Policy: Tax Expenditures Deserve More Scrutiny. GAO/GGD/AIMD-94-122. Washington, D.C.: June 3, 1994.
Tax expenditures—exclusions, credits, deductions, deferrals, and preferential tax rates—are one tool the government uses to promote community development. Multiple tax expenditures contribute to community development. GAO (1) identified community development tax expenditures and potential overlap and interactions among them; (2) assessed the data and performance measures available and used to assess their performance; and (3) determined what previous studies have found about selected tax expenditures’ performance. GAO identified community development activities using criteria based on various federal sources and compared them with authorized uses of tax expenditures. GAO reviewed agency documents and interviewed officials from the Internal Revenue Service (IRS) and five other agencies. GAO also reviewed empirical studies for selected tax expenditures, including the New Markets Tax Credit and Empowerment Zone program which expired in 2011. GAO identified 23 community development tax expenditures available in fiscal year 2010. For example, five ($1.5 billion) targeted economically distressed areas, and nine ($8.7 billion) supported specific activities such as rehabilitating structures for business use. The design of each community development tax expenditure appears to overlap with that of at least one other tax expenditure in terms of the areas or activities funded. Federal tax laws and regulations permit use of multiple tax expenditures or tax expenditures with other federal spending programs, but often with limits. For instance, employers cannot claim more than one employment tax credit for the same wages paid to an individual. Besides IRS, administering many community development tax expenditures involves other federal agencies as well as state and local governments. For example, the National Park Service oversees preservation standards for the 20 percent historic rehabilitation tax credit. 
Fragmented administration and program overlap can result in administrative burden, such as applications to multiple federal agencies to fund the needs of a distressed area or finance a specific project. Limited data and measures are available to assess community development tax expenditures’ performance. IRS only collects information needed to administer the tax code or otherwise required by law, and IRS data often do not identify the specific communities assisted. Other federal agencies helping administer community development tax expenditures also collect limited information on projects and associated outcomes. GAO has long recommended that the Executive Branch improve its ability to assess tax expenditures, but little progress has been made in developing an evaluation framework. Generally, neither these agencies, nor the Department of the Treasury or the Office of Management and Budget (OMB) have assessed or plan to assess community development tax expenditures individually or as part of a crosscutting review. The Government Performance and Results Act Modernization Act of 2010 (GPRAMA) calls for a more coordinated approach to focusing on results and improving performance. OMB is to select a limited number of long-term, outcome-oriented crosscutting priority goals and assess whether the relevant federal agencies and activities—including tax expenditures—are contributing to these goals. These assessments could help identify data needed to assess tax expenditures and generate evaluations of tax expenditures’ effect on community development. Through related GPRAMA consultations agencies are to have with Congress, Congress has a continuing opportunity to say whether it believes community development should be among the limited number of governmentwide goals. While community development was not on the interim priority list, Congress also can urge more evaluation and focus attention on community development performance issues through oversight activities.
In part due to data and methodological limitations, previous studies have not produced definitive results about the effectiveness of the New Markets Tax Credit, Empowerment Zone tax incentives, historic rehabilitation tax credits, and tax aid for certain disaster areas. A key methodological challenge is demonstrating a causal relationship between community development efforts and economic growth in a specific community. As a result, policymakers have limited information about the tax expenditures reviewed, including those that expired after 2011, and ways to increase effectiveness. Congress may wish to provide OMB guidance on whether community development should be among OMB’s long-term crosscutting priority goals, stress the need for evaluations, and focus attention on addressing community development tax expenditure performance issues through its oversight activities. Two agencies questioned the matters for congressional consideration or findings. GAO believes its analysis and matters remain valid as discussed in the report.
The role of women in the military has evolved from the Women’s Armed Services Integration Act of 1948— which afforded women the opportunity to serve in the military services—to January 2013, when the Secretary of Defense and the Chairman of the Joint Chiefs of Staff directed the services to open closed units and positions to women by January 1, 2016. Figure 1 provides details about changes in military service opportunities for women. In January 1994, the Secretary of Defense issued the Direct Ground Combat Definition and Assignment Rule, which allowed women to be assigned to almost all positions, but excluded women from assignment to units below the brigade level whose primary mission was to engage in direct ground combat. The memorandum establishing the 1994 rule also permitted restrictions on assignment of women in four other instances where: (1) the service secretary attests that the costs of appropriate berthing and privacy arrangements are prohibitive; (2) the units and positions are doctrinally required to physically collocate and remain with direct ground-combat units that are closed to women; (3) the units are engaged in long-range reconnaissance operations and special operations forces missions; and (4) job-related physical requirements would necessarily exclude the vast majority of women service members. The memorandum also permitted the services to propose further restrictions on the assignment of women, together with justification for those proposed restrictions. In 2012, DOD issued a report to Congress reviewing the laws, policies, and regulations restricting the service of female members in the armed forces. 
In this report, the Secretary of Defense and the Chairman of the Joint Chiefs of Staff rescinded the co-location assignment restriction that had allowed the military services to prohibit the assignment of women to units and positions physically collocated with direct ground-combat units. The report also contained notifications to Congress of the department’s intent to open positions and occupations that had been closed under this restriction. Specifically, the Army opened 6 enlisted occupations (9,925 positions) and 3,214 positions in 80 units that had been closed to women based on the co-location restriction. Additionally, the Army, Marine Corps, and Navy requested exceptions to policy, and DOD notified Congress of its intent to open positions and occupations at the battalion level within active-duty direct combat units to inform future recommendations on other positions with the potential to be opened in the future. In its report, DOD explained that the experience gained by assigning women to these positions would help the department assess the suitability and relevance of the direct ground-combat prohibition and inform future policy decisions. In July 2013, DOD issued a subsequent report to Congress that discussed the department’s implementation of these February 2012 policy changes, the services’ progress regarding elimination of gender-restrictive policy, and the rescission of the ground-combat assignment rule. This report also included the total number of positions open and closed to women in each of the military services. At the time, the Navy and the Air Force had the most positions open to women (91 and 99 percent, respectively), while the Army and the Marine Corps had fewer open positions (68 and 69 percent, respectively). SOCOM also stated that in July 2013, around 46 percent of its positions were open to women. 
Figure 2 generally illustrates the process used to implement the Secretary’s direction to open positions and occupations that have been closed to women. The military services traditionally have established two types of physical performance requirements. First, the military services have established general physical fitness standards to promote overall health and physical fitness among military personnel. These fitness standards apply to active and reserve servicemembers regardless of occupation and are not required by statute to be gender neutral. These standards are not intended to ensure performance in a particular occupation. Second, the services set job-specific physical performance standards to ensure that servicemembers are capable of performing the particular jobs to which they have been assigned. These job-specific standards refer to occupation-specific criteria that applicants must meet to enter or remain in a particular career field or specialty, and by statute these occupational performance standards must be gender neutral. The military services and SOCOM have opened selected positions and occupations to women since January 2013, and are in the process of determining whether to open the remaining direct ground-combat positions and occupations. As an alternative to opening a position or occupation, the Secretary of Defense permitted the services to recommend an exception to policy to keep positions or occupations closed to women; to date, the Navy is the only service to have recommended an exception to policy. The services are also conducting studies to identify integration challenges and ways to mitigate these challenges in areas such as unit cohesion, women’s health, equipment, facilities (e.g., separate restrooms and sleeping quarters), women’s interest in serving in ground-combat positions, and international issues. We also examined the issue of sexual assault and harassment in the integration process. 
In response to the January 2013 memorandum, most of the services—except for the Air Force—and SOCOM have opened selected positions and occupations, and the openings to date largely involve closed positions in open occupations. The memorandum directed the military departments to submit detailed plans by May 15, 2013, to implement this direction to open closed positions to women, and required the implementation plans to be consistent with a set of guiding principles, goals, and milestones for the integration process. The memorandum also required the military departments to submit quarterly progress reports on implementation. All four services and SOCOM developed implementation plans, including goals and milestones, which were subsequently reviewed by the Secretary of Defense in May 2013. The services and SOCOM also provided quarterly progress reports on their efforts to open closed positions and occupations to women, starting with the third quarter of fiscal year 2013. In July 2014, OUSD(P&R) granted a request by the Joint Chiefs to change the progress report cycle from quarterly to biannual. However, an OUSD(P&R) official stated that the Chairman of the Joint Chiefs of Staff continued to receive quarterly updates, and the Under Secretary of Defense for Personnel and Readiness continued to provide the Secretary of Defense with verbal quarterly updates. As of March 2015, the services have opened positions and occupations to women as shown in table 1. The services are working on integration plans for these positions and occupations that have been opened to women. For example, the Army is actively recruiting women to fill recently opened positions across the force, in order to place the best qualified soldiers, regardless of gender, in positions. Further, the Navy is expanding assignment opportunities for enlisted women to specific submarine classes and is participating in surveys and questionnaires to assess integration success and gather lessons learned. 
At the time of this report, the services and SOCOM were in the process of determining whether to open the remaining closed positions and occupations, and the timeframe for many of these recommendations was postponed until September 2015. As of March 2015, the positions and occupations that remain closed to women are shown in table 2. As of April 2015, all of the military services and SOCOM were working on efforts, such as the standards validation studies discussed below, to inform their recommendations on whether to open the remaining closed positions and occupations to women. The services’ implementation plans included timelines for making recommendations on whether to open positions and occupations to women or to request exceptions to keep positions or occupations closed. Initially, these timelines were established independently by each service, and different services were scheduled to make recommendations about similar occupations at different times. For example, the Army was scheduled to make its recommendation about armor occupations in July 2015, while the Marine Corps was scheduled to make its recommendations about armor occupations in late 2014 and early 2015. Subsequently, service officials have stated that some of those recommendation timeframes have shifted to a later point to synchronize with the Marine Corps recommendations that are now scheduled to occur in late September and early October 2015, as shown in figure 3. One reason provided by Air Force officials to support the timeline shifts was to consider impacts of another service’s recommendation to open a closed occupation or position, such as when there is no viable career path in an occupation because the majority of positions serve with another service’s closed unit. Another reason expressed by Army officials was that the service heads recognize the need for coordination when making recommendations about similar occupations such as infantry. 
An OUSD(P&R) official explained that there has always been a desire to align the recommendation timelines, and that when service timelines started to shift in 2014, the topic was extensively discussed in various meetings. As an alternative to opening a position or occupation, the Secretary of Defense has permitted the services to recommend that the Chairman of the Joint Chiefs of Staff and Secretary of Defense approve an exception to policy to keep positions or occupations closed to women. As of May 2015, the Secretary of the Navy was the only military department Secretary to have recommended approval of an exception to policy. The Secretary of the Navy has recommended keeping specific positions closed to the assignment of enlisted women on three classes of ships (frigates, mine countermeasure ships, and patrol coastal craft) that are scheduled to be decommissioned. The rationale for keeping these ship platforms closed to women is in part because they do not have appropriate berthing and because planned decommissioning schedules would mean that modifications would not be a judicious use of resources. Navy officials stated that, while these closed platforms would cause some positions to remain closed to enlisted women, it would not close any occupations to women as there are alternative positions within those occupations on different platforms that are open to women and which provide equal professional opportunity. As of May 2015, none of the other services have requested an exception to keep positions or occupations closed to women or have stated that they plan to request an exception, but the services have all retained the right to request an exception later in the process if they believe there are conditions under which it would be warranted. The services and SOCOM are conducting studies focused on identifying potential integration challenges and developing ways to mitigate these challenges, as shown in figure 4. 
The studies address issues such as unit cohesion, women’s health, equipment, facilities (e.g., separate restrooms and sleeping quarters), women’s interest in serving in ground-combat positions, and international issues. Most of these studies are ongoing, so it is too early to determine the extent to which the services and SOCOM will follow their planned methodologies for identifying challenges and mitigation strategies, or how the services will implement the findings of the studies. See appendix II for a listing of the studies that each service and SOCOM are conducting in their efforts to integrate women. A common challenge cited in integrating women into previously closed positions and occupations is the potential impact on unit cohesion. Some services are performing studies examining various elements that contribute to unit cohesion. For example, SOCOM, the Army, and the Marine Corps are conducting studies to gauge attitudes toward working with women in integrated units. SOCOM is conducting three studies related to unit cohesion, and SOCOM officials stated that the goal of these studies is to identify potential obstacles and steps to undertake to mitigate those obstacles in an effort to increase their chances of successfully integrating women. For example, SOCOM tasked the RAND Corporation to administer a survey to personnel in closed special operations occupations to discover the attitudes of special operations personnel on the integration of women, including barriers to successful integration and actions to increase the likelihood of success. 
SOCOM officials stated that initial steps to address concerns raised in the surveys included the Commander of SOCOM holding discussions with his subordinate commanders to provide them information to pass on to their personnel as well as sending an email to all SOCOM personnel to educate the force about what they are doing to validate the standards for special operations positions and why they are validating the standards, and to explain the Joint Staff’s guiding principles that govern the integration effort. The first two of the three studies have been completed, and the RAND study is expected to be completed by July 2015. The Army Research Institute is conducting activities such as surveys, interviews, and focus groups with male and female soldiers assigned to units with newly opened positions and occupations. According to an Army Research Institute official, the institute found that opinions expressed by male soldiers in units assessed at different times since 2012 were less negative a year after female soldiers’ integration, and showed a general shift to more neutral and positive perceptions. The official stated that information from these activities is regularly provided to the Army. These activities will likely be conducted until 2018 as additional occupations are opened, according to an Army Research Institute official. As part of its efforts to identify the potential impacts of integration on unit performance, unit cohesion, and unit members’ individual interactions, the Marine Corps also is conducting a study through the RAND Corporation. The tasks in this study include a review of literature on integration of women in ground combat and other physically demanding occupations, analysis to identify issues most likely to arise with gender integration of Marine Corps infantry as well as initiatives that might be taken to address them, and development of an approach for monitoring implementation of gender integration of the Marine Corps infantry. 
This study was scheduled to be completed in March 2015. The Marine Corps, the Army, and the Air Force are assessing specific health effects on women when operating in a combat environment. Service officials stated that as women enter direct combat positions, the military will need to make accommodations to address specific health and medical concerns to prevent health problems and to maintain military readiness. For example, the Marine Corps is studying injury prevention and performance enhancement for its training program, including identifying risk factors for injury. This study is scheduled to be completed in August 2015. In addition, according to an Army official, the Army has created a group to review research and data on physical and mental health issues, load carriage, attrition, and performance. Further, the Air Force verified the availability of appropriate medical and psychological support at training locations, and evaluated the medical retention standards for its closed occupations and determined that the existing medical standards were appropriate for both male and female airmen. According to officials from the Defense Advisory Committee on Women in the Services, proper combat equipment is essential to overall military readiness; women suffer injuries and do not perform up to their full potential when wearing ill-fitting equipment and combat gear designed for men’s bodies. The Marine Corps is conducting a study to identify how adapting equipment design, gear weight, physical fitness composition, or standard operating procedures may support successful completion of required tasks. Marine Corps officials explained that these adaptations could potentially remove impediments to success and thereby enable successful integration. For example, the study may be able to identify alternative methods for loading rounds in armored vehicles so that the task does not require as much upper-body strength. This study is scheduled to be completed by June 30, 2015. 
Further, according to an Army official, the Army has recently redesigned protective gear items and uniforms with specific fits for female soldiers. In addition, the Air Force has identified training locations that will need female-sized equipment and other equipment such as footgear, clothing, and swimsuits. In June 2015, the Under Secretary of Defense for Acquisition, Technology and Logistics issued guidance directing that the Secretaries of the military departments ensure that combat equipment for female servicemembers is properly designed and fitted, and meets standards for wear and survivability. These studies are not being conducted by SOCOM, but instead are being conducted by the services’ special operations components: Army Special Operations Command, Naval Special Warfare Command, Marine Corps Special Operations Command, and Air Force Special Operations Command. Further, all four of the special operations components conducted assessments that determined whether any facilities changes were needed to integrate women. All services are studying the propensity (i.e., interest or tendency) of women to serve in selected closed positions and occupations. Officials from the services noted concerns that large numbers of women may not be interested in serving in currently closed ground-combat positions and occupations. Officials from all of the services stated that the integration of women into previously closed positions and occupations would be an asset in finding the best person for the job, and that outreach and recruitment of women for the officer corps is critical to ensuring that our nation’s military has the strongest possible leaders. 
For example, the Marine Corps conducted a study using surveys, market research, available literature and other information to determine the interest of men and women in both the Marine Corps overall and in ground-combat specialties to better understand potential changes in the recruiting market due to the opening of ground-combat arms specialties and units. This study was completed in November 2014. The Army has joined other services in creating advertising campaigns to increase women’s interest in selected positions and occupations. SOCOM, the Marine Corps, and the Army are conducting or have conducted international studies analyzing various integration issues. Army Special Operations Command is studying the roles of women to determine how local forces and communities may react to female special forces soldiers. One of the tasks of this study is to provide insights on how the roles of women in different regions and countries may affect the response of local forces and communities to females as Army special forces soldiers. This study is scheduled to be completed before SOCOM is expected to submit its recommendations to the department in September 2015. The Marine Corps also worked with RAND to study other countries with gender-integrated militaries and the practices those countries used for their integration processes. This study was completed in March 2015. According to an Army official, the Army has worked with the U.S. Army Training and Doctrine Command on international comparisons with other countries with integrated armies. This effort was part of the Army’s gender-integration study, which is scheduled to be completed in September 2015. In addition to the challenges reviewed by the services in their studies, we examined the issue of sexual assault and harassment in the integration process. 
This issue was raised in materials from the Defense Advisory Committee on Women in the Services as a continuing concern related to tracking servicemembers who committed a sex-related offense. According to officials from all services and DOD’s Sexual Assault and Prevention Response Office—which has authority, accountability, and oversight of the department’s sexual assault prevention and response program—sexual assault and harassment are not inhibitors to the integration of women into previously closed positions and occupations. Officials from all of the services consistently noted that prevention of sexual assault and harassment is a department-wide effort and is not a specific focus of integration efforts. They noted that they consider it to be more of a leadership challenge than an integration challenge. DOD officials said that sexual assault and harassment is not a function of integration and is not gender specific only for women; it affects men and women, and exists in male-only units. In March 2015, we reported that based on survey data, it is estimated that in 2014, about 9,000 to 13,000 male active-duty servicemembers were sexually assaulted, and we also estimated that a much lower percentage of men report their sexual assaults compared to women. The military services and SOCOM are working to address statutory requirements and Joint Staff guidance for validating physically demanding occupational standards by initiating several studies. We identified five elements that the services and SOCOM must address as part of the standards validation process. We compared the five elements to the services’ and SOCOM’s planned steps and methodologies in their studies and determined that their study plans contained steps that, if carried out as planned, potentially address all five elements, as summarized in figure 5. 
However, the studies had not yet been completed at the time of our review; therefore, we could not assess the extent to which the studies will follow the planned steps and methodologies or report how results of the studies will be implemented. See appendix II for a complete listing of the planned studies that each service and SOCOM are conducting in their efforts to integrate women. The statutory requirements for validating gender-neutral occupational standards direct that any military career designator open to both men and women may not have different standards on the basis of gender. The statute further states that for military career designators where specific physical requirements for muscular strength and endurance and cardiovascular capacity are essential to the performance of duties, those requirements must be applied on a gender-neutral basis. To address this requirement, according to service and SOCOM officials and their respective plans, officials will develop one set of occupational standards for each position that will be applicable to both men and women. One example of this type of effort is the Marine Corps’ Ground Combat Element Integrated Task Force, which is to provide the Marine Corps the opportunity to review and refine gender-neutral occupational standards as it evaluates the performance of men and women in integrated units. All of the services’ efforts are to be completed by the end of September 2015. By statute, the Secretary of Defense must ensure that the gender-neutral occupational standards accurately predict performance of the actual, regular, and recurring job tasks of a military occupation, and are applied equitably to measure individual capabilities. The services’ and SOCOM’s plans for studies to validate operationally relevant and gender-neutral occupational standards involve identifying the physically demanding tasks required for the specific occupation under study. 
To address this requirement, all of the services’ and SOCOM’s plans that we reviewed are taking steps to identify the physically demanding tasks required for each occupation. For example, the Army and the Air Force have undertaken detailed job analyses to identify and define the critical physically demanding tasks and the physical abilities needed to perform them. By observing performance of the tasks and surveying subject-matter experts to confirm the specific tasks required for each occupation, the planned approach intends to confirm that the appropriate tasks have been identified and described. Additionally, the Marine Corps’ Ground Combat Element Integrated Task Force plans to quantify tasks, conditions, and standards for job tasks that have previously been qualitative. In March 2015, the Under Secretary of Defense for Personnel and Readiness provided implementing guidance for this statutory requirement, and directed the Secretaries of each military department to provide a written report regarding their validation of individual occupational standards by September 30, 2015, and to require each military department’s Inspector General to implement a compliance inspection program to assess whether the services’ occupational standards and implementing methodologies are in compliance with statutory requirements. Joint Staff guidance directs the services to validate their occupational performance standards. One of the Chairman’s guiding principles stated that the services must validate occupational performance standards, both physical and mental, for all military occupational specialties, specifically those that remain closed to women. To address this requirement, all of the services and SOCOM are conducting studies to validate the occupational standards for the positions that have been closed to women. 
The Army’s Training and Doctrine Command and Research Institute of Environmental Medicine are planning to complete by September 2015 the development and validation of gender-neutral occupational testing procedures for entry into the seven military occupational specialties that are closed to women. The Marine Corps opened certain entry-level training schools that previously were closed to women, such as Infantry Training Battalion and Infantry Officer Training, to obtain data on the physical and cognitive/academic demands on female volunteers in these schools. According to Marine Corps officials, this effort will be completed in June 2015. Another Marine Corps effort, projected for completion in June 2015 with a final report by August 2015, is the Ground Combat Element Integrated Task Force. This effort is expected to train female Marine volunteers in skills and tasks performed in closed occupations while a dedicated research team observes their performance in both entry-level training and operational environments. Both of these efforts are expected to assist the Marine Corps in validating its standards. In July 2014, the Navy Manpower Analysis Center reviewed all Navy positions to identify those that are physically demanding, and independently reviewed and updated occupational standards for all positions to ensure gender neutrality. The Air Force Air Education and Training Command is planning to complete by July 2015 a study that analyzes and validates physical tests and standards on Battlefield Airmen career fields. A second Air Force study is expected to revalidate physical and mental occupational entry standards across specialties; this study is expected to be completed in September 2015. 
The special operations components—the Army Special Operations Command, Naval Special Warfare Command, Marine Corps Special Operations Command, and Air Force Special Operations Command—are validating standards for those military occupational specialties that deploy with SOCOM; this is expected to be completed by the end of July 2015. The Chairman’s guiding principles also require that eligibility for training and development within designated occupational fields consist of qualitative and quantifiable standards reflecting the knowledge, skills, and abilities necessary for each occupation. To address this requirement, the services and SOCOM have planned studies that aim to validate and select tests to ensure that the tests measure what they are intended to measure. Further, these plans aim to ensure that scores or results from a test can be used to select individuals for a particular occupation or task. For example, the Air Force is designing physical task simulations, such as climbing a ladder (to simulate entering and exiting a helicopter, according to officials) and lifting and holding objects at different heights (to simulate holding an item to bolt onto an airframe, according to officials). These planned measures of performance are intended to ensure that simulations are good approximations of job tasks. Air Force officials explained that the Air Force’s planned approach is to use the operationally-relevant, occupationally-specific critical tasks it identifies as the anchor to develop appropriate physical tests and standards to evaluate the ability to successfully perform operational requirements. This study is expected to be completed by the end of fiscal year 2015. Another Chairman’s guiding principle requires the services to take action to ensure the success of the warfighting forces by preserving unit readiness, cohesion, and morale. To address this requirement, the services and SOCOM are taking steps to ensure that the integration of women maintains readiness. 
For example, officials from each of the services stated that the standards-validation efforts will ensure that servicemembers in newly opened occupations are able to perform the mission and thus maintain readiness, operational capability, and combat effectiveness. By observing performance of the tasks and surveying subject-matter experts, the services and special operations components plan to confirm the specific tasks that are required for each occupation. Further, as discussed earlier, the Army, Marine Corps, and SOCOM are conducting studies to determine the potential effect of integration on unit cohesion. According to the Defense Advisory Committee on Women in the Services, a common challenge cited in integrating women into previously closed positions and occupations is the potential effect on unit cohesion. Unit cohesion contributes to strong morale and commitment to a mission. By taking steps to identify and address challenges related to unit cohesion, these services are working to ensure that readiness is maintained throughout the integration process. DOD has been tracking, monitoring, and providing oversight over the services’ and SOCOM’s efforts to integrate women into ground-combat positions, but has not developed plans to monitor long-term integration progress. Service requests for an exception to policy to keep positions closed to women receive attention from the Chairman of the Joint Chiefs of Staff and the Secretary of Defense. OUSD(P&R) and Joint Staff manage the statutorily required congressional notification process, which is part of a longer process before women can begin serving in newly opened positions and occupations. To oversee the services’ and SOCOM’s efforts to integrate women into combat positions, OUSD(P&R) and the Chairman of the Joint Chiefs of Staff have issued guidance, commissioned studies, and facilitated coordination and communication through regular meetings among the services and SOCOM. 
The Secretary of Defense’s memorandum rescinding the 1994 rule directed the military departments to submit implementation plans and quarterly progress reports to the Chairman of the Joint Chiefs of Staff and to the Under Secretary of Defense for Personnel and Readiness. (See DOD, memorandum from the Under Secretary of Defense for Personnel and Readiness, Elimination of the 1994 Direct Ground Combat Definition and Assignment Rule (Feb. 27, 2013); DOD, memorandum from the Chairman of the Joint Chiefs of Staff, Women in the Service Implementation Plan (Jan. 9, 2013).) Further, Standards for Internal Control in the Federal Government states that ongoing monitoring should be performed continually in the course of normal operations, and should include regular management and supervisory activities, separate evaluations, and policies and procedures to ensure that findings of reviews are promptly resolved. An OUSD(P&R) official stated that when reviewing these reports as part of its normal oversight process, OUSD(P&R) has discussed with the services topics such as past and upcoming milestones, recommendation timelines, and the status and progress of ongoing studies. A Joint Staff official explained that the reports are reviewed to ensure progress is being made in accordance with the services’ implementation plans. After the reports are reviewed by OUSD(P&R) and Joint Staff, the Chairman provides these reports to the Secretary of Defense. Further, to help in its oversight of the services’ and SOCOM’s standards-validation efforts, OUSD(P&R) tasked the RAND Corporation to conduct a study concerning validation of gender-neutral occupational standards within the services and SOCOM; an OUSD(P&R) official stated that the study will provide an independent analysis of the services’ efforts to validate standards.
The first objective of the RAND study is to describe best-practice methodologies for establishing gender-neutral standards for physically demanding jobs, tailored to address the needs of the military. The second objective is to review and evaluate the methodologies used by the services to set gender-neutral standards. In September 2013, RAND issued a draft report addressing the first objective; an OUSD(P&R) official stated that OUSD(P&R) provided a draft of this report to all of the services. RAND’s draft report identified as best practices a six-step process for establishing requirements for physically demanding occupations. These six steps are: (1) identify physical demands; (2) identify potential screening tests; (3) validate and select tests; (4) establish minimum scores; (5) implement screening; and (6) confirm tests are working as intended. In June 2015, RAND officials said that a draft of the second report, which will cover both objectives, is forthcoming. Moreover, OUSD(P&R) has regular quarterly meetings with the services to discuss topics such as developing the quarterly reports and how others are handling any issues with integration. The Joint Staff also has a meeting process with two different levels of meetings devoted solely to integration efforts: (1) a Joint Chiefs of Staff (four-star level) group, and (2) an Operations Deputies (three-star level) group. These meetings occur at least once every quarter, but can occur more often if needed. A Joint Staff official explained that these meetings provide a forum for the services to share implementation updates, discuss potential barriers, and highlight issues. OUSD(P&R) and Joint Staff officials stated that there are also frequent communications by other means for the same purposes. For example, in September 2014, SOCOM hosted a workshop for all of the services to review the standards-validation process for special operations and the services.
SOCOM officials stated that the purpose of this workshop was to ensure that all the services were using similar processes, that no one was working at cross purposes, and that there was no duplication of effort. Officials stated that a follow-up workshop was held in May 2015. The Secretary of Defense and Chairman of the Joint Chiefs of Staff directed that any recommendation for an exception to policy to keep an occupation or position closed to women must be personally approved first by the Chairman and then by the Secretary of Defense. The memorandum states that this approval authority may not be delegated. OUSD(P&R) and Joint Staff officials explained that before such requests are submitted to the Chairman, they are first reviewed for sufficiency by OUSD(P&R) and the Joint Staff. When reviewing any requests for an exception to policy to keep positions closed to women, the Secretary of Defense’s January 2013 memorandum states that “[e]xceptions must be narrowly tailored and based on a rigorous analysis of factual data regarding the knowledge, skills and abilities needed for the position.” According to OUSD(P&R) and Joint Staff officials, if an exception to policy is requested, they will request all related supporting data and studies and review the request considering all of the factors involved. They stated that once they are satisfied that the Secretary’s criteria have been met, they will present the request to the Chairman and then the Secretary to determine whether the request meets the criteria for an exception. According to OUSD(P&R) and Joint Staff officials, they made a conscious decision not to provide or develop specific additional criteria or a format for exception to policy requests—beyond the guidance in the Secretary’s memorandum—because they did not want it to appear that there was a checklist for requesting an exception to policy.
When OUSD(P&R) and Joint Staff first reviewed the Navy’s July 2014 exception to policy request for the three different ship classes, they jointly requested additional information from the Navy, such as actual modification costs to enable the ships to provide berths for women, officer assignment information, and information on the professional development impact if women do not serve on those ships. An OUSD(P&R) official explained that OUSD(P&R) and Joint Staff worked with the Navy so the Navy would better understand the additional analytical rigor being requested, and they established a deadline for the Navy to provide the requested information in February 2015. The Navy submitted the requested information, and as of April 2015, OUSD(P&R) and Joint Staff officials said the exception to policy request was under review by the Chairman of the Joint Chiefs of Staff, who will then forward his recommendation to the Secretary of Defense. SOCOM’s status as an operational command results in a slightly different process for any exception requests for positions associated with SOCOM. SOCOM officials explained that for any positions associated with SOCOM—whether there is a recommendation to open a position or a request for an exception to policy to keep a position closed to women—there are two recommendations provided. One recommendation comes from the position’s parent department Secretary. The second recommendation comes from the SOCOM Commander, and since SOCOM is not a military service, that recommendation is then reviewed and approved by the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict, who serves in a military department secretary function for SOCOM. The officials stated that to date there have not been differences between the recommendations from the services and from SOCOM.
SOCOM officials explained that there is regular collaboration with the services about recommendations, but that in the event that there was a difference in the two recommendations, the Secretary of Defense would make the decision. As of May 2015, OUSD(P&R) had not developed plans for a mechanism or process to monitor the services’ progress in their efforts to integrate newly opened positions and occupations after January 1, 2016. As noted earlier, Standards for Internal Control in the Federal Government states that ongoing monitoring should be performed continually in the course of normal operations. An OUSD(P&R) official stated that OUSD(P&R) will continue to provide oversight as part of its normal responsibilities, and make associated changes in applicable DOD guidance. Further, as discussed earlier, the Under Secretary of Defense for Personnel and Readiness issued guidance that directed each military department to report on its validation of occupational standards, and to implement an inspection program to assess whether the services’ occupational standards comply with statutory requirements. According to an OUSD(P&R) official, that office does not envision undertaking a formal role in the implementation of the services’ recommendations to open closed positions following January 2016. Further, a Joint Staff official stated that an initial Joint Staff meeting would be held after the January 1, 2016 announcement, and it would be determined at that time whether any additional meetings would be held. OUSD(P&R)’s requirement for the services to submit quarterly progress reports ends in January 2016, and the services have varying plans to monitor implementation after that date. For example, Army officials stated that they have developed an implementation and follow-up plan for beyond 2016 that is being reviewed by senior leaders.
Marine Corps officials explained that they have long-term research that will track integration of females, to help understand and shape institutional and individual success, while Navy officials explained that they had not developed any plans to monitor implementation after 2016 and were waiting for direction from OUSD(P&R). However, OUSD(P&R) and Joint Staff officials did not identify any plans to provide such direction for the services to monitor implementation. After the decisions have been made to open positions and occupations to women, there is a lengthy implementation process before women will be able to serve in the newly opened occupations. Officials from all of the services and SOCOM stated that before women can serve in newly opened positions and occupations they must first be recruited, accessioned, trained, tested, and assigned. As an example of the time involved in just one part of the implementation process, according to OUSD(P&R) officials, the general training timelines can vary by service and by position and occupation, but typically range from less than half a year to almost two years to complete the training part of the implementation process. Without ongoing monitoring of the services’ and SOCOM’s implementation progress in integrating previously closed positions and occupations, it will be difficult for DOD to have visibility over the extent to which the services and SOCOM are overcoming potential obstacles to integration, and DOD will not have information for congressional decision makers about the department’s integration progress. OUSD(P&R) and Joint Staff manage the congressional notification process when positions and occupations are being opened to women. By statute, the Secretary of Defense must provide Congress with a report prior to implementing any proposed changes that would result in opening or closing any category of unit or position, or military career designator to women.
As part of the process for opening formerly closed positions and occupations, an OUSD(P&R) official explained that OUSD(P&R) analyzes information provided by the military department secretaries and the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict, and verifies items such as the correct occupational specialties (if applicable), that all appropriate additional skill identifiers are included, and that the correct number of positions to be opened is reflected. OUSD(P&R) officials then create a packet to send to Congress after they prebrief the House and Senate Armed Services Committees. A Joint Staff official stated that Joint Staff also reviews the notifications, and provides comments on the briefings given to Congress. In the congressional notifications of the department’s intent to open positions and occupations to women, DOD is required to provide a detailed legal analysis regarding the legal implications of the proposed change with respect to the constitutionality of the application of the 1948 Military Selective Service Act to males only. This act empowers the President to require the registration of every male citizen and resident alien between the ages of 18 and 26. In 1981, the Supreme Court upheld the constitutionality of the male-only registration requirement. Currently, women serve voluntarily in the U.S. armed forces, but are not required to register with the Selective Service and would not be subject to a draft. DOD’s legal analyses in the congressional notifications submitted since January 2013 have not found that opening the positions and occupations to women would affect the constitutionality of the act. Officials from OUSD(P&R), the services, and the Defense Advisory Committee on Women in the Services have stated that if DOD decides to open ground-combat occupations such as infantry, artillery, and armor, DOD’s required legal analysis could raise concerns about the constitutionality of the act.
DOD’s legal analysis in the March 2015 congressional notification to open the Army combat engineer occupation stated that “[o]ver time, however, the opening of additional combat positions to women may further alter the factual backdrop to the Court’s decision in Rostker. Should the constitutionality of the [Military Selective Service Act] be challenged at a later date, the reasoning behind the exclusion of women from registration may need to be reexamined.” An OUSD(P&R) official explained that even if DOD’s legal analysis raises constitutionality concerns about the act, DOD could still submit the notification to Congress and take actions to implement opening those positions to women after completion of the waiting period. After a notification is provided to Congress, the Secretary of Defense is prohibited from implementing any proposed changes until “after the end of a period of 30 days of continuous session of Congress (excluding any day on which either house of Congress is not in session) following the date on which the report is received.” This waiting period allows Congress time to take any legislative actions that it deems necessary based on the notification and report provided by DOD. However, the congressional calendar has resulted in an average time period of about 90 calendar days before planned changes could be implemented; three of the twelve congressional notifications DOD submitted between April 2013 and July 2014 have taken almost 5 months. After the waiting period has passed, OUSD(P&R) notifies the appropriate elements within a service so that they can begin implementing actions to open the positions.
Since the services are allowed to take actions to open positions only after the waiting period is over, Army and Navy officials said that the delays and unpredictability associated with the waiting period pose challenges in beginning the recruiting, accession, and training processes, and in aligning assignments to newly opened positions with service promotion cycles. An OUSD(P&R) official stated that in 2014 DOD was requested to provide drafting assistance on a legislative proposal for a change that would have modified the waiting period from 30 days of continuous session of Congress to 60 calendar days, but said that Congress did not act at that time. In 2012, we assessed the military necessity of the Selective Service System and examined alternatives to its current structure. We found that because of its reliance and emphasis on the All Volunteer Force, DOD had not reevaluated requirements for the Selective Service System since 1994, even though the national security environment had changed significantly since that time. The registration system in fiscal year 2014 had an annual budget of $22.9 million; DOD officials stated that the system provides a low-cost insurance policy in case a draft is ever necessary. In our 2012 report, we recommended that DOD (1) evaluate DOD’s requirements for the Selective Service System in light of recent strategic guidance and report the results to Congress; and (2) establish a process of periodically reevaluating DOD’s requirements for the Selective Service System in light of changing threats, operating environments, and strategic guidance. In responding to these recommendations, DOD stated in February 2013 that there was no longer an immediate military necessity for the Selective Service System, but there was a national necessity because the registration process provides the structure for mobilization that would allow the services to rapidly increase in size if needed.
DOD’s assessment was limited to a reevaluation of mission and military necessity for the Selective Service System. Regarding the second recommendation, DOD had not taken action as of June 2015, but agreed that a thorough assessment of the issue was merited, and should include a review of the statutes and policies surrounding the current registration process and the potential to include the registration of women. However, DOD officials stated that such a review should be part of a broader national discussion and should not be determined only by DOD. As we noted in our 2012 report, a reevaluation of the department’s personnel needs for the Selective Service System in light of current national security plans would better position Congress to make an informed decision about the necessity of the Selective Service System or any other alternatives that might substitute for it. For example, a 2013 Congressional Research Service report noted the Selective Service issue could become moot by terminating Selective Service registration or expanding registration requirements to include women. We agree that this is a broader issue. DOD is the agency that would use the Selective Service System in the event a draft was needed. Thus, we continue to believe that our 2012 recommendation has merit—that DOD should take the lead in conducting an evaluation of requirements for the Selective Service System and should establish a process of periodically reevaluating DOD’s requirements for the Selective Service System in light of changing threats, operating environments, and strategic guidance. The Secretary of Defense and the Chairman of the Joint Chiefs of Staff have ordered that women, to the extent possible, be integrated into direct ground-combat positions and occupations by January 2016. 
Although OUSD(P&R) and Joint Staff have been tracking, monitoring, and providing oversight of the services’ and SOCOM’s integration efforts, they do not have plans to monitor the services’ implementation progress after January 2016, as newly opened positions are integrated. Without ongoing monitoring of the services’ and SOCOM’s progress in integrating previously closed positions and occupations after January 2016, it will be difficult for DOD to have visibility over the extent to which the services and SOCOM are overcoming potential obstacles to integration, and DOD may not be able to provide current information for congressional decision makers about the department’s progress. Further, DOD has not established a process to reevaluate its requirements for the Selective Service System that could enable it to take into account these changes in expanding combat service opportunities for women. If DOD conducted a comprehensive reevaluation of the department’s personnel needs for the Selective Service System, the analysis would better position Congress to make an informed decision about the necessity of the Selective Service System or any other alternatives that might substitute for it. To help ensure successful integration of combat positions that have been opened to women, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to develop plans for monitoring after January 2016 the services’ implementation of their integration efforts and progress in opening positions to women, including an approach for taking any needed action. We provided a draft of this report to DOD for review and comment. In written comments, which are reprinted in their entirety in appendix III, DOD concurred with our recommendation. DOD noted that it recognizes the importance of monitoring long-term implementation progress of expanding combat service opportunities for women.
DOD also provided technical comments, which we have incorporated in the report where appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Chairman of the Joint Chiefs of Staff, and the Secretaries of the military departments. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report assesses the Department of Defense’s (DOD) efforts to expand combat service opportunities for women. Our scope included efforts of the four military services in DOD and U.S. Special Operations Command (SOCOM) since January 2013, when the Secretary of Defense eliminated the prohibition on women serving in combat positions. We did not include the Coast Guard in our review. Table 3 contains a list of the agencies we contacted during our review. To determine the status of service efforts to open previously closed positions and occupations and the extent to which potential challenges have been identified and mitigated, we analyzed documentation and spoke with officials to identify the positions and occupations that have been opened to women and those that remain closed, timeframes for making decisions, whether any services planned to keep any positions or occupations closed to women, and any steps taken to identify potential challenges and develop approaches to overcome any such challenges.
Specifically, we reviewed guidance provided to the services from the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and the Under Secretary of Defense for Personnel and Readiness to determine what the services were required to do as part of their efforts to determine whether to open closed positions and occupations to women. We determined that the services were required to, among other things, develop implementation plans, follow five guiding principles when opening positions and occupations to women, and create and submit quarterly progress reports starting in the third quarter of fiscal year 2013. At the department level, the military departments were required to submit detailed implementation plans consistent with the guiding principles and goals and milestones provided by the Chairman. To determine whether the services and SOCOM met these requirements, we obtained and analyzed the services’ and SOCOM’s respective implementation plans, quarterly progress reports, congressional notifications, and Navy exception to policy documents, and discussed these documents with officials from the services, SOCOM, and the Office of the Under Secretary of Defense for Personnel and Readiness (OUSD(P&R)). To determine if the services and SOCOM met all of the implementation plan requirements, we analyzed the services’ and SOCOM’s implementation plans for required components—such as timelines and timeframes for opening positions and occupations to women, milestones for development of gender-neutral occupational standards, and consistency with the guiding principles. 
To determine if the services and SOCOM met all of the quarterly progress report requirements, we analyzed the quarterly and biannual reports for required components—such as updates on assessments and progress on positions that are slated for opening or currently being evaluated, analysis of any request for an exception to policy, discussion regarding the development status of gender-neutral standards, assessments of newly opened positions, identification of any limiting factors, and recommendations for additional openings. Using quarterly progress report updates and interviews with service officials, we analyzed how some of the timeframes changed by comparing them to the original timeframes set in the implementation plans. To determine what positions and occupations had been opened to women since January 2013, we analyzed the congressional notifications that DOD had provided to Congress from January 2013 through March 2015, and discussed this data with officials from the services and OUSD(P&R). To determine what positions and occupations remain closed to women and to determine the services’ and SOCOM’s timeframes for making decisions about whether to open these positions and occupations to women, we analyzed the services’ and SOCOM’s implementation plans, and quarterly and biannual progress reports and interviewed officials from the services, SOCOM, and OUSD(P&R). In addition, we requested and obtained data from the services and SOCOM on the total number of positions and occupations closed to women as of March 2015, as well as the total number of positions and occupations in each service and in SOCOM. We analyzed the reliability of this data by obtaining information on how the data were collected, managed, and used through interviews with and questionnaires to relevant officials and by reviewing supporting documentation.
To corroborate these data, we cross-referenced them with documentation on closed positions and occupations provided by OUSD(P&R), as well as similar data provided by the services and SOCOM in their progress reports. These data were also verified by officials from OUSD(P&R), the services, and SOCOM. Although we found some discrepancies in some of the data regarding the number of closed positions reported by the services, which officials explained were due in part to changes in force structure, we determined that the data were sufficiently reliable to report on the general number and percentage of positions and occupations that are closed to women in each of the services and in SOCOM. To determine any steps that DOD and the services took to identify potential challenges and develop approaches to overcome any such challenges, we analyzed service and SOCOM implementation plans, quarterly reports, and studies and study documentation. We also interviewed officials at OUSD(P&R), Joint Staff, the Defense Advisory Committee on Women in the Services, the Sexual Assault Prevention and Response Office, and within each of the services and SOCOM, and discussed potential challenges they have identified and approaches to mitigating these challenges. In inquiring about challenges, we asked about challenges in general, as well as specific issues that we had identified in the services’ implementation plans, reports by the Defense Advisory Committee on Women in the Services, and prior GAO work as potential areas of study. The specific issues that we asked about were the Military Selective Service Act, women’s health, sexual harassment and assault, unit cohesion, facilities issues (e.g., berthing, privacy), promotion and retention, and equipment.
To determine the extent to which service efforts to validate gender-neutral occupational standards are consistent with statutory requirements and Joint Staff guidance, we identified requirements from statutes and Joint Staff guidance and compared these requirements against service plans for studies. To identify the requirements for validating gender-neutral occupational standards, we reviewed relevant laws as well as guidance issued by the Chairman of the Joint Chiefs of Staff. Specifically, to identify statutory requirements, we reviewed the National Defense Authorization Act for Fiscal Year 1994 and the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015. To identify Joint Staff guidance, we reviewed the Chairman’s January 2013 memorandum that laid out guiding principles for the services to follow in integrating women. From these laws and guidance, we identified five specific elements the services must follow in validating their gender-neutral occupational standards. Two elements are from statutory requirements: (1) ensure gender-neutral evaluation and (2) ensure standards reflect job tasks. Three elements are from Joint Staff guidance: (1) validate performance standards; (2) ensure eligibility reflects job tasks; and (3) integrate while preserving readiness, cohesion, and morale. To determine if the services are following these requirements and guidance, we obtained plans for studies from each of the military services and SOCOM. These plans included descriptions of scope, methodology, and timeframes for completion. We then compared these plans against the requirements we identified to determine if these planned studies met the requirements for validating gender-neutral occupational standards. Two analysts independently reviewed and assessed the plans to determine whether they contain the two statutory elements provided by the National Defense Authorization Act for Fiscal Year 1994, as amended, and the Carl Levin and Howard P.
“Buck” McKeon National Defense Authorization Act for Fiscal Year 2015 and the three elements provided by the Chairman’s memorandum. The analysts then compared their results to identify any disagreements and reached agreement on all items through discussion. However, these studies are not yet completed; therefore, we could not assess the extent to which the completed studies will follow the planned steps and methodologies or report how results of the studies will be implemented. We also interviewed and discussed these requirements and studies with DOD and service officials, particularly officials involved in conducting these studies. To determine the extent to which DOD is tracking, monitoring, and providing oversight over the military services’ plans to complete the integration of women in direct combat positions by January 2016, we obtained and analyzed documentation and discussed with officials from OUSD(P&R) and Joint Staff the nature and level of their tracking and monitoring, and their review of the military services’ and SOCOM’s efforts to integrate women into combat positions. Specifically, we assessed OUSD(P&R) and Joint Staff’s review of the military services’ and SOCOM’s implementation plans, and quarterly and biannual progress reports. We then compared these efforts to DOD guidance and internal control standards. We also discussed with OUSD(P&R) officials a study being performed by the RAND Corporation for OUSD(P&R) as part of their oversight of the services’ and SOCOM’s efforts to validate gender-neutral occupational standards, and we met with RAND officials to discuss their work on this study. We also obtained and analyzed documentation related to the Navy’s request for an exception to policy to keep positions closed on three classes of ships, and we discussed with OUSD(P&R), Joint Staff, and Navy officials the process and criteria used to review this request. The congressional notification requirement is set forth at 10 U.S.C. § 652.
We also reviewed reports from the Defense Advisory Committee on Women in the Services and the Federal Advisory Committee on Gender-Integrated Training and Related Issues to identify changes in statutes and military guidance that have increased opportunities for women to serve in combat roles over the past several decades. We determined changes that have occurred in DOD’s workforce and environment over the past several decades and assessed the extent to which these changes could have an effect on the utility of the Military Selective Service Act in meeting the department’s needs. We conducted this performance audit from September 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The purposes of the studies we reviewed include the following:
- Conduct online surveys with additional Active and National Guard Brigade Combat Teams.
- Provide an assessment of potential issues associated with gender integration in newly opened occupations and currently closed occupations to be opened.
- Review the integration of women into Marine Corps aviation occupations and past studies on the performance of female Marines in aviation and logistics.
- Determine participants’ experiences regarding gender integration in newly integrated ground-combat units, particularly with respect to potential effects on readiness, morale, and unit cohesion.
- Determine the potential impact of integrating women into Marine Corps military occupational specialties, with a particular focus on the infantry.
- Create a systematic and sustained injury-prevention and performance-enhancement training program.
Examine if changing the gender component of small, elite teams would affect team dynamics in a way that would compromise the ability of the team to meet a mission objective. Assess the range of potential obstacles to effective integration of women into Special Operations Forces, focusing on the unit- and team-level. Assess how indigenous definitions of women’s roles could affect the response of local forces and communities to female Army Special Forces soldiers. Identify impacts, evaluate psychological and social considerations, and review gender neutral standards that may be impacted by opening all Army Special Operations Command occupations and positions to women. Identify impacts, evaluate psychological and social considerations, and review gender neutral standards that may be impacted by opening all Marine Corps Forces Special Operations Command occupations and positions to women. Identify impacts, evaluate psychological and social considerations, and review gender neutral standards that may be impacted by opening all Naval Special Warfare occupations and positions to women. Identify impacts, evaluate psychological and social considerations, and review gender neutral standards that may be impacted by opening all Air Force Special Operations Command occupations and positions to women. In 2012, DOD approved an exception to the Direct Ground Combat Assignment Rule Policy for the Army, and enabled the Army to assign women to enlisted and officer positions at the battalion level in open occupations in nine Brigade Combat Teams. In addition to the contact named above, Kimberly C. Seay (Assistant Director), Thomas Beall, Margaret A. Best, Renee S. Brown, Adam Hatton, Aaron D. Karty, Amie Lesser, Richard Powelson, Michael Silver, Alexander Welsh, and Michael Willems made major contributions to this report. Military Personnel: DOD Has Taken Steps to Meet the Health Needs of Deployed Servicewomen, but Actions Are Needed to Enhance Care for Sexual Assault Victims. 
GAO-13-182. Washington, D.C.: January 29, 2013.

National Security: DOD Should Reevaluate Requirements for the Selective Service System. GAO-12-623. Washington, D.C.: June 7, 2012.

Gender Issues: Trends in the Occupational Distribution of Military Women. GAO/NSIAD-99-212. Washington, D.C.: September 14, 1999.

Gender Issues: Perceptions of Readiness in Selected Units. GAO/NSIAD-99-120. Washington, D.C.: May 13, 1999.

Gender Issues: Information to Assess Servicemembers’ Perceptions of Gender Inequities Is Incomplete. GAO/NSIAD-99-27. Washington, D.C.: November 18, 1998.

Gender Issues: Improved Guidance and Oversight Are Needed to Ensure Validity and Equity of Fitness Standards. GAO/NSIAD-99-9. Washington, D.C.: November 17, 1998.

Gender Issues: Information on DOD’s Assignment Policy and Direct Ground Combat Definition. GAO/NSIAD-99-7. Washington, D.C.: October 19, 1998.

Gender Issues: Changes Would Be Needed to Expand Selective Service Registration to Women. GAO/NSIAD-98-199. Washington, D.C.: June 30, 1998.

Gender Issues: Analysis of Methodologies in Reports to the Secretaries of Defense and the Army. GAO/NSIAD-98-125. Washington, D.C.: March 16, 1998.

Selective Service: Cost and Implications of Two Alternatives to the Present System. GAO/NSIAD-97-225. Washington, D.C.: September 10, 1997.

Gender Integration in Basic Training: The Services Are Using a Variety of Approaches. GAO/T-NSIAD-97-174. Washington, D.C.: June 5, 1997.

Physically Demanding Jobs: Services Have Little Data on Ability of Personnel to Perform. GAO/NSIAD-96-169. Washington, D.C.: July 9, 1996.

Basic Training: Services Are Using a Variety of Approaches to Gender Integration. GAO/NSIAD-96-153. Washington, D.C.: June 10, 1996.

Women in the Military: Deployment in the Persian Gulf War. GAO/NSIAD-93-93. Washington, D.C.: July 13, 1993.

Women in the Military: Air Force Revises Job Availability but Entry Screening Needs Review. GAO/NSIAD-91-199. Washington, D.C.: August 30, 1991.
Women in the Military: More Military Jobs Can Be Opened Under Current Statutes. GAO/NSIAD-88-222. Washington, D.C.: September 7, 1988.

Women in the Military: Impact of Proposed Legislation to Open More Combat Support Positions and Units to Women. GAO/NSIAD-88-197BR. Washington, D.C.: July 15, 1988.

Combat Exclusion Laws for Women in the Military. GAO/T-NSIAD-88-8. Washington, D.C.: November 19, 1987.
Since September 2001, more than 300,000 women have been deployed to Iraq and Afghanistan, where more than 800 women have been wounded and more than 130 have died. A 1994 rule prohibited women from being assigned to many direct ground-combat units, but on January 24, 2013, the Secretary of Defense and the Chairman of the Joint Chiefs of Staff rescinded the rule and directed the military services to open closed positions and occupations to women by January 1, 2016. Senate Report 113-176 included a provision for GAO to review the services' progress in opening closed positions and occupations to women. This report assesses (1) the status of service efforts to open positions and occupations to women, including steps to identify and mitigate potential challenges; (2) the extent to which the services' efforts to validate gender-neutral occupational standards are consistent with statutory and Joint Staff requirements; and (3) the extent to which DOD is tracking, monitoring, and providing oversight of the services' integration plans. GAO analyzed statutes, DOD guidance, and service reports and plans, and interviewed DOD officials. The military services and U.S. Special Operations Command (SOCOM) have opened selected positions and occupations to women since January 2013, as shown in the table below, and are determining whether to open the remaining closed positions and occupations. The services and SOCOM also are conducting studies to identify and mitigate potential integration challenges in areas such as unit cohesion, women's health, and facilities. As of May 2015, the Secretary of the Navy was the only military department Secretary to recommend an exception to policy to keep positions closed to women on three classes of ships that are scheduled to be decommissioned, due in part to high retrofit costs. The services and SOCOM are working to address statutory and Joint Staff requirements for validating gender-neutral occupational standards. GAO identified five elements required for standards validation.
GAO compared these elements to the services' and SOCOM's planned methodologies and determined that their study plans contained steps that, if carried out as planned, potentially address all five elements. However, the services' and SOCOM's efforts are still underway; therefore, GAO could not assess the extent to which the studies will follow the planned methodologies or report how the study results will be implemented. The Department of Defense (DOD) has been tracking, monitoring, and providing oversight of the services' and SOCOM's integration efforts, but it does not have plans to monitor the services' progress in integrating women into newly opened positions and occupations after January 2016. While DOD requires the services and SOCOM to submit quarterly progress reports, this requirement ends in January 2016. Without ongoing monitoring of integration progress, it will be difficult for DOD to help the services overcome potential obstacles. Further, when opening positions to women, DOD must analyze the implications for how it meets certain resource needs, including its requirements for the Selective Service System. In 2012, GAO assessed the military necessity of the Selective Service System and examined alternatives to its structure. GAO recommended in 2012 that DOD establish a process of periodically reevaluating its requirements in light of changing threats, operating environments, and strategic guidance. DOD has not taken this action, but it agreed that a thorough assessment of the issue was merited and should include a review of the statutes and policies surrounding the registration process and the potential to include the registration of women. GAO continues to believe that DOD should establish a process of periodically reevaluating DOD's requirements for the Selective Service System. GAO recommends that DOD develop plans to monitor integration progress after January 2016. DOD concurred with GAO's recommendation.
GAO previously recommended that DOD establish a process of periodically reevaluating DOD's requirements for the Selective Service System. DOD has not taken action, but GAO continues to believe the recommendation is valid.